**OBSOLETE**, superseded by [[openpower/transcendentals]]
2
3 # Zftrans - transcendental operations
4
5 Summary:
6
7 *This proposal extends RISC-V scalar floating point operations to add IEEE754 transcendental functions (pow, log etc) and trigonometric functions (sin, cos etc). These functions are also 98% shared with the Khronos Group OpenCL Extended Instruction Set.*
8
9 Authors/Contributors:
10
11 * Luke Kenneth Casson Leighton
12 * Jacob Lifshay
13 * Dan Petroski
14 * Mitch Alsup
15 * Allen Baum
16 * Andrew Waterman
17 * Luis Vitorio Cargnini
18
19 [[!toc levels=2]]
20
21 See:
22
23 * <http://bugs.libre-riscv.org/show_bug.cgi?id=127>
24 * <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
25 * Discussion: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002342.html>
26 * [[rv_major_opcode_1010011]] for opcode listing.
27 * [[zfpacc_proposal]] for accuracy settings proposal
28
29 Extension subsets:
30
31 * **Zftrans**: standard transcendentals (best suited to 3D)
* **ZftransExt**: extra functions (useful, not generally needed for 3D,
  can be synthesised using Zftrans)
34 * **Ztrigpi**: trig. xxx-pi sinpi cospi tanpi
35 * **Ztrignpi**: trig non-xxx-pi sin cos tan
36 * **Zarctrigpi**: arc-trig. a-xxx-pi: atan2pi asinpi acospi
37 * **Zarctrignpi**: arc-trig. non-a-xxx-pi: atan2, asin, acos
38 * **Zfhyp**: hyperbolic/inverse-hyperbolic. sinh, cosh, tanh, asinh,
39 acosh, atanh (can be synthesised - see below)
40 * **ZftransAdv**: much more complex to implement in hardware
41 * **Zfrsqrt**: Reciprocal square-root.
42
43 Minimum recommended requirements for 3D: Zftrans, Ztrignpi,
44 Zarctrignpi, with Ztrigpi and Zarctrigpi as augmentations.
45
46 Minimum recommended requirements for Mobile-Embedded 3D: Ztrignpi, Zftrans, with Ztrigpi as an augmentation.
47
48 # TODO:
49
50 * Decision on accuracy, moved to [[zfpacc_proposal]]
51 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002355.html>
52 * Errors **MUST** be repeatable.
53 * How about four Platform Specifications? 3DUNIX, UNIX, 3DEmbedded and Embedded?
54 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002361.html>
55 Accuracy requirements for dual (triple) purpose implementations must
56 meet the higher standard.
57 * Reciprocal Square-root is in its own separate extension (Zfrsqrt) as
58 it is desirable on its own by other implementors. This to be evaluated.
59
60 # Requirements <a name="requirements"></a>
61
62 This proposal is designed to meet a wide range of extremely diverse needs,
63 allowing implementors from all of them to benefit from the tools and hardware
64 cost reductions associated with common standards adoption in RISC-V (primarily IEEE754 and Vulkan).
65
**There are *four* different, disparate platforms' needs (two new)**:
67
68 * 3D Embedded Platform (new)
69 * Embedded Platform
70 * 3D UNIX Platform (new)
71 * UNIX Platform
72
73 **The use-cases are**:
74
75 * 3D GPUs
76 * Numerical Computation
77 * (Potentially) A.I. / Machine-learning (1)
78
(1) although approximations suffice in this field, a custom extension
is the more likely route; high-end ML would in any case be excluded.
82
83 **The power and die-area requirements vary from**:
84
85 * Ultra-low-power (smartwatches where GPU power budgets are in milliwatts)
86 * Mobile-Embedded (good performance with high efficiency for battery life)
87 * Desktop Computing
88 * Server / HPC (2)
89
90 (2) Supercomputing is left out of the requirements as it is traditionally
91 covered by Supercomputer Vectorisation Standards (such as RVV).
92
93 **The software requirements are**:
94
95 * Full public integration into GNU math libraries (libm)
96 * Full public integration into well-known Numerical Computation systems (numpy)
97 * Full public integration into upstream GNU and LLVM Compiler toolchains
98 * Full public integration into Khronos OpenCL SPIR-V compatible Compilers
99 seeking public Certification and Endorsement from the Khronos Group
100 under their Trademarked Certification Programme.
101
102 **The "contra"-requirements are**:
103
104 * NOT for use with RVV (RISC-V Vector Extension). These are *scalar* opcodes.
105 Ultra Low Power Embedded platforms (smart watches) are sufficiently
106 resource constrained that Vectorisation (of any kind) is likely to be
107 unnecessary and inappropriate.
108 * The requirements are **not** for the purposes of developing a full custom
109 proprietary GPU with proprietary firmware driven by *hardware* centric
110 optimised design decisions as a priority over collaboration.
* A full custom proprietary GPU ASIC Manufacturer *may* benefit from
  this proposal; however, since they typically develop proprietary
  software that is not shared with the rest of the community likely to
  use this proposal, their needs are completely different.
* This proposal is for *sharing* of effort in reducing development costs.
116
117 # Requirements Analysis <a name="requirements_analysis"></a>
118
119 **Platforms**:
120
121 3D Embedded will require significantly less accuracy and will need to make
122 power budget and die area compromises that other platforms (including Embedded)
123 will not need to make.
124
125 3D UNIX Platform has to be performance-price-competitive: subtly-reduced
126 accuracy in FP32 is acceptable where, conversely, in the UNIX Platform,
127 IEEE754 compliance is a hard requirement that would compromise power
128 and efficiency on a 3D UNIX Platform.
129
Even in the Embedded platform, IEEE754 interoperability is beneficial,
but if it were a hard requirement the 3D Embedded platform would be severely
compromised in its ability to meet the demanding power budgets of that market.
133
134 Thus, learning from the lessons of
135 [SIMD considered harmful](https://www.sigarch.org/simd-instructions-considered-harmful/)
136 this proposal works in conjunction with the [[zfpacc_proposal]], so as
137 not to overburden the OP32 ISA space with extra "reduced-accuracy" opcodes.
138
139 **Use-cases**:
140
There really is little else in the way of suitable markets. 3D GPUs
have extremely competitive power-efficiency and power-budget requirements
that are completely at odds with the market at the other end of
the spectrum: Numerical Computation.
145
Interoperability in Numerical Computation is absolutely critical: it
implies (correlates directly with) IEEE754 compliance. However full
IEEE754 compliance automatically and inherently penalises a GPU on
performance and die area, where such accuracy is simply not necessary.
150
151 To meet the needs of both markets, the two new platforms have to be created,
152 and [[zfpacc_proposal]] is a critical dependency. Runtime selection of
153 FP accuracy allows an implementation to be "Hybrid" - cover UNIX IEEE754
154 compliance *and* 3D performance in a single ASIC.
155
156 **Power and die-area requirements**:
157
158 This is where the conflicts really start to hit home.
159
160 A "Numerical High performance only" proposal (suitable for Server / HPC
161 only) would customise and target the Extension based on a quantitative
162 analysis of the value of certain opcodes *for HPC only*. It would
163 conclude, reasonably and rationally, that it is worthwhile adding opcodes
164 to RVV as parallel Vector operations, and that further discussion of
165 the matter is pointless.
166
167 A "Proprietary GPU effort" (even one that was intended for publication
168 of its API through, for example, a public libre-licensed Vulkan SPIR-V
169 Compiler) would conclude, reasonably and rationally, that, likewise, the
170 opcodes were best suited to be added to RVV, and, further, that their
171 requirements conflict with the HPC world, due to the reduced accuracy.
172 This on the basis that the silicon die area required for IEEE754 is far
173 greater than that needed for reduced-accuracy, and thus their product
174 would be completely unacceptable in the market if it had to meet IEEE754,
175 unnecessarily.
176
177 An "Embedded 3D" GPU has radically different performance, power
178 and die-area requirements (and may even target SoftCores in FPGA).
179 Sharing of the silicon to cover multi-function uses (CORDIC for example)
180 is absolutely essential in order to keep cost and power down, and high
181 performance simply is not. Multi-cycle FSMs instead of pipelines may
182 be considered acceptable, and so on. Subsets of functionality are
183 also essential.
184
185 An "Embedded Numerical" platform has requirements that are separate and
186 distinct from all of the above!
187
188 Mobile Computing needs (tablets, smartphones) again pull in a different
189 direction: high performance, reasonable accuracy, but efficiency is
190 critical. Screen sizes are not at the 4K range: they are within the
191 800x600 range at the low end (320x240 at the extreme budget end), and
192 only the high-performance smartphones and tablets provide 1080p (1920x1080).
193 With lower resolution, accuracy compromises are possible which the Desktop
194 market (4k and soon to be above) would find unacceptable.
195
196 Meeting these disparate markets may be achieved, again, through
197 [[zfpacc_proposal]], by subdividing into four platforms, yet, in addition
198 to that, subdividing the extension into subsets that best suit the different
199 market areas.
200
201 **Software requirements**:
202
203 A "custom" extension is developed in near-complete isolation from the
204 rest of the RISC-V Community. Cost savings to the Corporation are
205 large, with no direct beneficial feedback to (or impact on) the rest
206 of the RISC-V ecosystem.
207
208 However given that 3D revolves around Standards - DirectX, Vulkan, OpenGL,
209 OpenCL - users have much more influence than first appears. Compliance
210 with these standards is critical as the userbase (Games writers,
211 scientific applications) expects not to have to rewrite extremely large
212 and costly codebases to conform with *non-standards-compliant* hardware.
213
214 Therefore, compliance with public APIs (Vulkan, OpenCL, OpenGL, DirectX)
215 is paramount, and compliance with Trademarked Standards is critical.
Any deviation from Trademarked Standards means that an implementation
may not be sold whilst claiming to be, for example, "Vulkan
compatible".
219
For 3D, this in turn reinforces, and makes a hard requirement, the need
for public compliance with such standards, over and above what would
otherwise be set by a RISC-V Standards Development Process, covering both
software compliance and the knock-on implications for hardware.
224
For libraries such as libm and numpy, accuracy is paramount for software interoperability across multiple platforms: some algorithms critically rely on correct IEEE754 behaviour, for example.
226 The conflicting accuracy requirements can be met through the zfpacc extension.
227
228 **Collaboration**:
229
230 The case for collaboration on any Extension is already well-known.
231 In this particular case, the precedent for inclusion of Transcendentals
232 in other ISAs, both from Graphics and High-performance Computing, has
233 these primitives well-established in high-profile software libraries and
234 compilers in both GPU and HPC Computer Science divisions. Collaboration
235 and shared public compliance with those standards brooks no argument.
236
The combined requirements of collaboration and multiple accuracy levels
mean that *overall this proposal is categorically and wholly unsuited
to relegation to "custom" status*.
240
241 # Quantitative Analysis <a name="analysis"></a>
242
243 This is extremely challenging. Normally, an Extension would require full,
244 comprehensive and detailed analysis of every single instruction, for every
245 single possible use-case, in every single market. The amount of silicon
246 area required would be balanced against the benefits of introducing extra
247 opcodes, as well as a full market analysis performed to see which divisions
248 of Computer Science benefit from the introduction of the instruction,
249 in each and every case.
250
With 34 instructions, four possible Platforms, and sub-categories of
implementations even within each Platform, over 136 separate and distinct
analyses are clearly not a practical proposition.
254
255 A little more intelligence has to be applied to the problem space,
256 to reduce it down to manageable levels.
257
258 Fortunately, the subdivision by Platform, in combination with the
259 identification of only two primary markets (Numerical Computation and
260 3D), means that the logical reasoning applies *uniformly* and broadly
261 across *groups* of instructions rather than individually, making it a primarily
262 hardware-centric and accuracy-centric decision-making process.
263
264 In addition, hardware algorithms such as CORDIC can cover such a wide
265 range of operations (simply by changing the input parameters) that the
266 normal argument of compromising and excluding certain opcodes because they
267 would significantly increase the silicon area is knocked down.
268
269 However, CORDIC, whilst space-efficient, and thus well-suited to
270 Embedded, is an old iterative algorithm not well-suited to High-Performance
271 Computing or Mid to High-end GPUs, where commercially-competitive
272 FP32 pipeline lengths are only around 5 stages.
273
274 Not only that, but some operations such as LOG1P, which would normally
275 be excluded from one market (due to there being an alternative macro-op
276 fused sequence replacing it) are required for other markets due to
277 the higher accuracy obtainable at the lower range of input values when
278 compared to LOG(1+P).
279
280 (Thus we start to see why "proprietary" markets are excluded from this
281 proposal, because "proprietary" markets would make *hardware*-driven
282 optimisation decisions that would be completely inappropriate for a
283 common standard).
284
285 ATAN and ATAN2 is another example area in which one market's needs
286 conflict directly with another: the only viable solution, without compromising
287 one market to the detriment of the other, is to provide both opcodes
288 and let implementors make the call as to which (or both) to optimise,
289 at the *hardware* level.
290
291 Likewise it is well-known that loops involving "0 to 2 times pi", often
292 done in subdivisions of powers of two, are costly to do because they
293 involve floating-point multiplication by PI in each and every loop.
294 3D GPUs solved this by providing SINPI variants which range from 0 to 1
295 and perform the multiply *inside* the hardware itself. In the case of
296 CORDIC, it turns out that the multiply by PI is not even needed (is a
297 loop invariant magic constant).
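
The loop shape described above can be sketched in Python. `sinpi` here is
only a stand-in model (Python has no such primitive, so it still multiplies
by pi internally); the point is the *interface*: with a hardware FSINPI the
per-iteration multiply by pi disappears from the program, and with a
power-of-two subdivision count the loop index scaled by 1/n stays exact:

```python
import math

def sinpi(x):
    # Stand-in for a hardware FSINPI: sin(pi * x) as a single operation,
    # with the multiply by pi performed *inside* the unit.
    return math.sin(math.pi * x)

def sin_table(n):
    # n a power of two: i / n is exact in binary floating point, so no
    # rounding error accumulates before the sinpi call, and no multiply
    # by pi appears in the loop body at all.
    return [sinpi(i / n) for i in range(2 * n)]   # covers 0 .. 2*pi
```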
298
299 However, some markets may not wish to *use* CORDIC, for reasons mentioned
300 above, and, again, one market would be penalised if SINPI was prioritised
301 over SIN, or vice-versa.
302
303 In essence, then, even when only the two primary markets (3D and
304 Numerical Computation) have been identified, this still leaves two
305 (three) diametrically-opposed *accuracy* sub-markets as the prime
306 conflict drivers:
307
308 * Embedded Ultra Low Power
309 * IEEE754 compliance
310 * Khronos Vulkan compliance
311
312 Thus the best that can be done is to use Quantitative Analysis to work
313 out which "subsets" - sub-Extensions - to include, provide an additional
314 "accuracy" extension, be as "inclusive" as possible, and thus allow
315 implementors to decide what to add to their implementation, and how best
316 to optimise them.
317
318 This approach *only* works due to the uniformity of the function space,
319 and is **not** an appropriate methodology for use in other Extensions
320 with huge (non-uniform) market diversity even with similarly large
321 numbers of potential opcodes. BitManip is the perfect counter-example.
322
323 # Proposed Opcodes vs Khronos OpenCL vs IEEE754-2019<a name="khronos_equiv"></a>
324
325 This list shows the (direct) equivalence between proposed opcodes,
326 their Khronos OpenCL equivalents, and their IEEE754-2019 equivalents.
327 98% of the opcodes in this proposal that are in the IEEE754-2019 standard
328 are present in the Khronos Extended Instruction Set.
329
330 For RISCV opcode encodings see
331 [[rv_major_opcode_1010011]]
332
333 See
334 <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
335 and <https://ieeexplore.ieee.org/document/8766229>
336
337 * Special FP16 opcodes are *not* being proposed, except by indirect / inherent
338 use of the "fmt" field that is already present in the RISC-V Specification.
339 * "Native" opcodes are *not* being proposed: implementors will be expected
340 to use the (equivalent) proposed opcode covering the same function.
341 * "Fast" opcodes are *not* being proposed, because the Khronos Specification
342 fast\_length, fast\_normalise and fast\_distance OpenCL opcodes require
343 vectors (or can be done as scalar operations using other RISC-V instructions).
344
345 The OpenCL FP32 opcodes are **direct** equivalents to the proposed opcodes.
346 Deviation from conformance with the Khronos Specification - including the
347 Khronos Specification accuracy requirements - is not an option, as it
348 results in non-compliance, and the vendor may not use the Trademarked words
349 "Vulkan" etc. in conjunction with their product.
350
351 IEEE754-2019 Table 9.1 lists "additional mathematical operations".
Interestingly, the only functions in it missing from OpenCL are
compound, exp2m1, exp10m1, log2p1 and log10p1.
354
355 [[!table data="""
356 opcode | OpenCL FP32 | OpenCL FP16 | OpenCL native | OpenCL fast | IEEE754 |
357 FSIN | sin | half\_sin | native\_sin | NONE | sin |
358 FCOS | cos | half\_cos | native\_cos | NONE | cos |
359 FTAN | tan | half\_tan | native\_tan | NONE | tan |
360 NONE (1) | sincos | NONE | NONE | NONE | NONE |
361 FASIN | asin | NONE | NONE | NONE | asin |
362 FACOS | acos | NONE | NONE | NONE | acos |
363 FATAN | atan | NONE | NONE | NONE | atan |
364 FSINPI | sinpi | NONE | NONE | NONE | sinPi |
365 FCOSPI | cospi | NONE | NONE | NONE | cosPi |
366 FTANPI | tanpi | NONE | NONE | NONE | tanPi |
367 FASINPI | asinpi | NONE | NONE | NONE | asinPi |
368 FACOSPI | acospi | NONE | NONE | NONE | acosPi |
369 FATANPI | atanpi | NONE | NONE | NONE | atanPi |
370 FSINH | sinh | NONE | NONE | NONE | sinh |
371 FCOSH | cosh | NONE | NONE | NONE | cosh |
372 FTANH | tanh | NONE | NONE | NONE | tanh |
373 FASINH | asinh | NONE | NONE | NONE | asinh |
374 FACOSH | acosh | NONE | NONE | NONE | acosh |
375 FATANH | atanh | NONE | NONE | NONE | atanh |
376 FATAN2 | atan2 | NONE | NONE | NONE | atan2 |
377 FATAN2PI | atan2pi | NONE | NONE | NONE | atan2pi |
378 FRSQRT | rsqrt | half\_rsqrt | native\_rsqrt | NONE | rSqrt |
379 FCBRT | cbrt | NONE | NONE | NONE | NONE (2) |
380 FEXP2 | exp2 | half\_exp2 | native\_exp2 | NONE | exp2 |
381 FLOG2 | log2 | half\_log2 | native\_log2 | NONE | log2 |
382 FEXPM1 | expm1 | NONE | NONE | NONE | expm1 |
383 FLOG1P | log1p | NONE | NONE | NONE | logp1 |
384 FEXP | exp | half\_exp | native\_exp | NONE | exp |
385 FLOG | log | half\_log | native\_log | NONE | log |
386 FEXP10 | exp10 | half\_exp10 | native\_exp10 | NONE | exp10 |
387 FLOG10 | log10 | half\_log10 | native\_log10 | NONE | log10 |
388 FPOW | pow | NONE | NONE | NONE | pow |
389 FPOWN | pown | NONE | NONE | NONE | pown |
390 FPOWR | powr | half\_powr | native\_powr | NONE | powr |
391 FROOTN | rootn | NONE | NONE | NONE | rootn |
392 FHYPOT | hypot | NONE | NONE | NONE | hypot |
393 FRECIP | NONE | half\_recip | native\_recip | NONE | NONE (3) |
394 NONE | NONE | NONE | NONE | NONE | compound |
395 NONE | NONE | NONE | NONE | NONE | exp2m1 |
396 NONE | NONE | NONE | NONE | NONE | exp10m1 |
397 NONE | NONE | NONE | NONE | NONE | log2p1 |
398 NONE | NONE | NONE | NONE | NONE | log10p1 |
399 """]]
400
401 Note (1) FSINCOS is macro-op fused (see below).
402
Note (2) synthesised in IEEE754-2019 as "rootn(x, 3)"
404
405 Note (3) synthesised in IEEE754-2019 using "1.0 / x"
406
407 ## List of 2-arg opcodes
408
409 [[!table data="""
410 opcode | Description | pseudocode | Extension |
411 FATAN2 | atan2 arc tangent | rd = atan2(rs2, rs1) | Zarctrignpi |
412 FATAN2PI | atan2 arc tangent / pi | rd = atan2(rs2, rs1) / pi | Zarctrigpi |
413 FPOW | x power of y | rd = pow(rs1, rs2) | ZftransAdv |
414 FPOWN | x power of n (n int) | rd = pow(rs1, rs2) | ZftransAdv |
FPOWR | x power of y (x +ve) | rd = exp(rs2 * log(rs1)) | ZftransAdv |
416 FROOTN | x power 1/n (n integer)| rd = pow(rs1, 1/rs2) | ZftransAdv |
417 FHYPOT | hypotenuse | rd = sqrt(rs1^2 + rs2^2) | ZftransAdv |
418 """]]
419
420 ## List of 1-arg transcendental opcodes
421
422 [[!table data="""
423 opcode | Description | pseudocode | Extension |
FRSQRT | Reciprocal Square-root | rd = 1.0 / sqrt(rs1) | Zfrsqrt |
425 FCBRT | Cube Root | rd = pow(rs1, 1.0 / 3) | ZftransAdv |
426 FRECIP | Reciprocal | rd = 1.0 / rs1 | Zftrans |
427 FEXP2 | power-of-2 | rd = pow(2, rs1) | Zftrans |
FLOG2 | log2 | rd = log(2, rs1) | Zftrans |
429 FEXPM1 | exponential minus 1 | rd = pow(e, rs1) - 1.0 | ZftransExt |
430 FLOG1P | log plus 1 | rd = log(e, 1 + rs1) | ZftransExt |
431 FEXP | exponential | rd = pow(e, rs1) | ZftransExt |
432 FLOG | natural log (base e) | rd = log(e, rs1) | ZftransExt |
433 FEXP10 | power-of-10 | rd = pow(10, rs1) | ZftransExt |
434 FLOG10 | log base 10 | rd = log(10, rs1) | ZftransExt |
435 """]]
436
437 ## List of 1-arg trigonometric opcodes
438
439 [[!table data="""
440 opcode | Description | pseudo-code | Extension |
441 FSIN | sin (radians) | rd = sin(rs1) | Ztrignpi |
442 FCOS | cos (radians) | rd = cos(rs1) | Ztrignpi |
443 FTAN | tan (radians) | rd = tan(rs1) | Ztrignpi |
444 FASIN | arcsin (radians) | rd = asin(rs1) | Zarctrignpi |
445 FACOS | arccos (radians) | rd = acos(rs1) | Zarctrignpi |
446 FATAN | arctan (radians) | rd = atan(rs1) | Zarctrignpi |
447 FSINPI | sin times pi | rd = sin(pi * rs1) | Ztrigpi |
448 FCOSPI | cos times pi | rd = cos(pi * rs1) | Ztrigpi |
449 FTANPI | tan times pi | rd = tan(pi * rs1) | Ztrigpi |
450 FASINPI | arcsin / pi | rd = asin(rs1) / pi | Zarctrigpi |
451 FACOSPI | arccos / pi | rd = acos(rs1) / pi | Zarctrigpi |
452 FATANPI | arctan / pi | rd = atan(rs1) / pi | Zarctrigpi |
453 FSINH | hyperbolic sin (radians) | rd = sinh(rs1) | Zfhyp |
454 FCOSH | hyperbolic cos (radians) | rd = cosh(rs1) | Zfhyp |
455 FTANH | hyperbolic tan (radians) | rd = tanh(rs1) | Zfhyp |
456 FASINH | inverse hyperbolic sin | rd = asinh(rs1) | Zfhyp |
457 FACOSH | inverse hyperbolic cos | rd = acosh(rs1) | Zfhyp |
458 FATANH | inverse hyperbolic tan | rd = atanh(rs1) | Zfhyp |
459 """]]
460
461 # Subsets
462
The full set is based on the Khronos OpenCL opcodes. If implemented
in its entirety it would be too much both for Embedded and for 3D.
465
The subsets are organised by hardware complexity and need (3D, HPC);
however, because synthesis produces inaccurate results at the range limits,
the less common subsets are still required for IEEE754 HPC.
469
MALI Midgard, an embedded / mobile 3D GPU, for example has only the
following opcodes:
472
473 E8 - fatan_pt2
474 F0 - frcp (reciprocal)
475 F2 - frsqrt (inverse square root, 1/sqrt(x))
476 F3 - fsqrt (square root)
477 F4 - fexp2 (2^x)
478 F5 - flog2
479 F6 - fsin1pi
480 F7 - fcos1pi
481 F9 - fatan_pt1
482
These in FP32 and FP16 only: no FP64 hardware, at all.
484
485 Vivante Embedded/Mobile 3D (etnaviv <https://github.com/laanwj/etna_viv/blob/master/rnndb/isa.xml>) only has the following:
486
487 sin, cos2pi
488 cos, sin2pi
489 log2, exp
490 sqrt and rsqrt
491 recip.
492
493 It also has fast variants of some of these, as a CSR Mode.
494
495 AMD's R600 GPU (R600\_Instruction\_Set\_Architecture.pdf) and the
496 RDNA ISA (RDNA\_Shader\_ISA\_5August2019.pdf, Table 22, Section 6.3) have:
497
498 COS2PI (appx)
499 EXP2
500 LOG (IEEE754)
501 RECIP
502 RSQRT
503 SQRT
504 SIN2PI (appx)
505
506 AMD RDNA has F16 and F32 variants of all the above, and also has F64
507 variants of SQRT, RSQRT and RECIP. It is interesting that even the
508 modern high-end AMD GPU does not have TAN or ATAN, where MALI Midgard
509 does.
510
Also a general point: customised hardware optimised for reduced-accuracy
FP32 3D can be used neither for IEEE754 nor for FP64 (except as a
starting point for hardware- or software-driven Newton-Raphson or other
iterative methods).
515
516 Also in cost/area sensitive applications even the extra ROM lookup tables
517 for certain algorithms may be too costly.
518
519 These wildly differing and incompatible driving factors lead to the
520 subset subdivisions, below.
521
522 ## Transcendental Subsets
523
524 ### Zftrans
525
526 LOG2 EXP2 RECIP RSQRT
527
Zftrans contains the minimum standard transcendentals best suited to
3D. They are also the minimum subset for synthesising log10, exp10,
expm1, log1p, the hyperbolic trigonometric functions sinh and so on.
531
532 They are therefore considered "base" (essential) transcendentals.
533
534 ### ZftransExt
535
LOG, EXP, EXP10, LOG10, LOG1P, EXPM1
537
538 These are extra transcendental functions that are useful, not generally
539 needed for 3D, however for Numerical Computation they may be useful.
540
Although they can be synthesised using Zftrans (LOG2 multiplied
by a constant), there is both a performance penalty and an
accuracy penalty towards the limits, which for IEEE754 compliance is
unacceptable. In particular, a hardware LOG1P(rs1) may give much better
accuracy at the lower end (very small rs1) than a synthesised LOG(1 + rs1).
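
The accuracy gap is easy to demonstrate numerically. Here `math.log1p`
stands in for a hardware FLOG1P, and the synthesised form computes
`LOG(1 + rs1)` explicitly; for tiny inputs the addition `1.0 + x` discards
most of x's significand before the logarithm ever sees it:

```python
import math

def log1p_synthesised(x):
    # LOG(1 + rs1): the add 1.0 + x rounds away most of x's
    # significand once x is much smaller than 1.0.
    return math.log(1.0 + x)

x = 1e-10
naive = log1p_synthesised(x)
accurate = math.log1p(x)        # what a hardware FLOG1P would provide
series = x - x * x / 2.0        # log1p(x) for small x, to O(x**3)
```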
546
547 Their forced inclusion would be inappropriate as it would penalise
548 embedded systems with tight power and area budgets. However if they
549 were completely excluded the HPC applications would be penalised on
550 performance and accuracy.
551
552 Therefore they are their own subset extension.
553
554 ### Zfhyp
555
556 SINH, COSH, TANH, ASINH, ACOSH, ATANH
557
558 These are the hyperbolic/inverse-hyperbolic functions. Their use in 3D is limited.
559
560 They can all be synthesised using LOG, SQRT and so on, so depend
561 on Zftrans. However, once again, at the limits of the range, IEEE754
562 compliance becomes impossible, and thus a hardware implementation may
563 be required.
564
565 HPC and high-end GPUs are likely markets for these.
566
567 ### ZftransAdv
568
569 CBRT, POW, POWN, POWR, ROOTN
570
571 These are simply much more complex to implement in hardware, and typically
572 will only be put into HPC applications.
573
### Zfrsqrt

RSQRT - Reciprocal square-root, in its own subset as it is desirable
on its own by other implementors.
575
576 ## Trigonometric subsets
577
578 ### Ztrigpi vs Ztrignpi
579
580 * **Ztrigpi**: SINPI COSPI TANPI
581 * **Ztrignpi**: SIN COS TAN
582
583 Ztrignpi are the basic trigonometric functions through which all others
584 could be synthesised, and they are typically the base trigonometrics
585 provided by GPUs for 3D, warranting their own subset.
586
In the case of the Ztrigpi subset, these are commonly used in for-loops
with a power-of-two number of subdivisions, where the cost of multiplying
by PI inside each loop (or of cumulative addition, with its accumulating
errors) is not acceptable.
591
592 In for example CORDIC the multiplication by PI may be moved outside of
593 the hardware algorithm as a loop invariant, with no power or area penalty.
594
Again, therefore, if SINPI (etc.) were excluded, programmers would be penalised by being forced to divide by PI in some circumstances. Likewise if SIN were excluded, programmers would be penalised by being forced to *multiply* by PI in some circumstances.
596
597 Thus again, a slightly different application of the same general argument applies to give Ztrignpi and
598 Ztrigpi as subsets. 3D GPUs will almost certainly provide both.
599
600 ### Zarctrigpi and Zarctrignpi
601
602 * **Zarctrigpi**: ATAN2PI ASINPI ACOSPI
603 * **Zarctrignpi**: ATAN2 ACOS ASIN
604
These are extra trigonometric functions that are useful in some
applications, but even for 3D GPUs, particularly embedded and mobile-class
GPUs, they are not so common and so are typically synthesised there.
608
609 Although they can be synthesised using Ztrigpi and Ztrignpi, there is,
610 once again, both a performance penalty as well as an accuracy penalty
611 towards the limits, which for IEEE754 compliance is unacceptable, yet
612 is acceptable for 3D.
613
614 Therefore they are their own subset extensions.
615
616 # Synthesis, Pseudo-code ops and macro-ops
617
618 The pseudo-ops are best left up to the compiler rather than being actual
619 pseudo-ops, by allocating one scalar FP register for use as a constant
620 (loop invariant) set to "1.0" at the beginning of a function or other
621 suitable code block.
622
623 * FSINCOS - fused macro-op between FSIN and FCOS (issued in that order).
624 * FSINCOSPI - fused macro-op between FSINPI and FCOSPI (issued in that order).
625
626 FATANPI example pseudo-code:
627
628 lui t0, 0x3F800 // upper bits of f32 1.0
    fmv.w.x ft0, t0
630 fatan2pi.s rd, rs1, ft0
631
Hyperbolic function example (obviates the need for Zfhyp except for
high-performance or correctly-rounded implementations):
634
635 ASINH( x ) = ln( x + SQRT(x**2+1))
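
A direct transcription of that identity shows both why the synthesis works
mid-range and why it fails at the limits (this is a sketch, not a proposed
implementation): for large |x| the square overflows, and for large
*negative* x the sum `x + sqrt(x*x + 1)` cancels, which is exactly the
"high-performance or correctly-rounded" case reserved for Zfhyp hardware:

```python
import math

def asinh_synth(x):
    # Direct transcription of ASINH(x) = ln(x + sqrt(x**2 + 1)).
    return math.log(x + math.sqrt(x * x + 1.0))

# Mid-range the synthesis matches math.asinh to near machine precision.
# At the limits it fails: x*x overflows for large |x|, and for large
# negative x the sum x + sqrt(x*x + 1) cancels towards zero.
```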
636
637 # Evaluation and commentary
638
639 This section will move later to discussion.
640
641 ## Reciprocal
642
643 Used to be an alias. Some implementors may wish to implement divide as
644 y times recip(x).
645
646 Others may have shared hardware for recip and divide, others may not.
647
648 To avoid penalising one implementor over another, recip stays.
649
650 ## To evaluate: should LOG be replaced with LOG1P (and EXP with EXPM1)?
651
RISC principle says "exclude LOG because it's covered by LOG1P plus an ADD".
Research is needed to ensure that implementors are not compromised by such
a decision:
655 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002358.html>
656
657 > > correctly-rounded LOG will return different results than LOGP1 and ADD.
658 > > Likewise for EXP and EXPM1
659
660 > ok, they stay in as real opcodes, then.
661
662 ## ATAN / ATAN2 commentary
663
664 Discussion starts here:
665 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002470.html>
666
667 from Mitch Alsup:
668
669 would like to point out that the general implementations of ATAN2 do a
670 bunch of special case checks and then simply call ATAN.
671
672 double ATAN2( double y, double x )
673 { // IEEE 754-2008 quality ATAN2
674
675 // deal with NANs
676 if( ISNAN( x ) ) return x;
677 if( ISNAN( y ) ) return y;
678
679 // deal with infinities
680 if( x == +∞ && |y|== +∞ ) return copysign( π/4, y );
681 if( x == +∞ ) return copysign( 0.0, y );
682 if( x == -∞ && |y|== +∞ ) return copysign( 3π/4, y );
683 if( x == -∞ ) return copysign( π, y );
684 if( |y|== +∞ ) return copysign( π/2, y );
685
686 // deal with signed zeros
687 if( x == 0.0 && y != 0.0 ) return copysign( π/2, y );
688 if( x >=+0.0 && y == 0.0 ) return copysign( 0.0, y );
689 if( x <=-0.0 && y == 0.0 ) return copysign( π, y );
690
691 // calculate ATAN2 textbook style
692 if( x > 0.0 ) return ATAN( |y / x| );
693 if( x < 0.0 ) return π - ATAN( |y / x| );
694 }
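
The quoted routine can be rendered in Python and checked against a
known-good atan2. This is a sketch of the same special-case structure, not
the proposal's reference implementation; the sign handling (copysign
around the final ATAN) is made explicit here, where the fragment above
leaves it implicit:

```python
import math

def atan2_via_atan(y, x):
    # Same shape as the quoted routine: special cases first,
    # then defer to ATAN for the general case.
    if math.isnan(x): return x
    if math.isnan(y): return y
    # infinities
    if math.isinf(x) and x > 0 and math.isinf(y):
        return math.copysign(math.pi / 4, y)
    if math.isinf(x) and x > 0:
        return math.copysign(0.0, y)
    if math.isinf(x) and math.isinf(y):
        return math.copysign(3 * math.pi / 4, y)
    if math.isinf(x):
        return math.copysign(math.pi, y)
    if math.isinf(y):
        return math.copysign(math.pi / 2, y)
    # signed zeros
    if x == 0.0 and y != 0.0:
        return math.copysign(math.pi / 2, y)
    if y == 0.0:
        if math.copysign(1.0, x) > 0.0:
            return math.copysign(0.0, y)
        return math.copysign(math.pi, y)
    # textbook reduction to ATAN, with the sign made explicit
    if x > 0.0:
        return math.copysign(math.atan(abs(y / x)), y)
    return math.copysign(math.pi - math.atan(abs(y / x)), y)
```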
695
696
697 Yet the proposed encoding makes ATAN2 the primitive and has ATAN invent
698 a constant and then call/use ATAN2.
699
700 When one considers an implementation of ATAN, one must consider several
701 ranges of evaluation::
702
703 x [ -∞, -1.0]:: ATAN( x ) = -π/2 + ATAN( 1/x );
704 x (-1.0, +1.0]:: ATAN( x ) = + ATAN( x );
705 x [ 1.0, +∞]:: ATAN( x ) = +π/2 - ATAN( 1/x );
706
707 I should point out that the add/sub of π/2 can not lose significance
708 since the result of ATAN(1/x) is bounded 0..π/2
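
Those range-reduction identities can be checked directly. This sketch uses
the host `math.atan` as the bounded core approximation (a hardware unit
would substitute its own polynomial over the narrow range), folding the
three listed ranges into one copysign form:

```python
import math

def atan_reduced(x):
    # The identities above: fold |x| > 1 into an ATAN over a bounded
    # range, so the core approximation only ever sees |input| <= 1,
    # and the pi/2 add/sub cannot lose significance.
    if abs(x) > 1.0:
        return math.copysign(math.pi / 2, x) - math.atan(1.0 / x)
    return math.atan(x)
```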
709
The bottom line is that I think you are choosing to make too many of
these into OpCodes, making the hardware function/calculation unit (and
sequencer) more complicated than necessary.
713
714 --------------------------------------------------------
715
I think we therefore have a case for bringing back ATAN and including ATAN2.
717
718 The reason is that whilst a microcode-like GPU-centric platform would do ATAN2 in terms of ATAN, a UNIX-centric platform would do it the other way round.
719
720 (that is the hypothesis, to be evaluated for correctness. feedback requested).
721
This is because we cannot compromise or prioritise one platform's
speed/accuracy over another: it is not reasonable or desirable to
penalise one implementor over another.
725
Thus, to keep interoperability, all implementors must have both
opcodes, and may choose, at the architectural and routing level, which
one to implement in terms of the other.
729
730 Allowing implementors to choose to add either opcode and let traps sort it
731 out leaves an uncertainty in the software developer's mind: they cannot
732 trust the hardware, available from many vendors, to be performant right
733 across the board.
734
735 Standards are a pig.
736
737 ---
738
739 I might suggest that if there were a way for a calculation to be performed
740 and the result of that calculation chained to a subsequent calculation
741 such that the precision of the result-becomes-operand is wider than
742 what will fit in a register, then you can dramatically reduce the count
of instructions in this category while retaining acceptable accuracy:
746
747 z = x / y
748
749 can be calculated as::
750
751 z = x * (1/y)
752
Where 1/y has about 26-to-32 bits of fraction. No, it's not IEEE 754-2008
accurate, but GPUs want speed, and 1/y is fully pipelined (F32) while x/y
cannot be (at reasonable area). It is also not "that inaccurate",
displaying 0.625-to-0.52 ULP.
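
The reciprocal-multiply trade-off can be sketched by emulating FP32
rounding with `struct`. Note the reciprocal here is *correctly rounded* to
FP32 rather than the 26-to-32-bit-fraction datapath described, so the
quoted ULP figures will not reproduce exactly; the sketch only shows the
mechanism and that the gap is a ULP or two, not catastrophic:

```python
import struct

def f32(x):
    # round a Python double to the nearest IEEE754 binary32 value
    return struct.unpack('f', struct.pack('f', x))[0]

def f32_bits(x):
    return struct.unpack('I', struct.pack('f', x))[0]

def ulp_distance(a, b):
    # valid here because both values are positive and finite
    return abs(f32_bits(a) - f32_bits(b))

x, y = 355.0, 113.0
divided = f32(x / y)               # correctly-rounded FP32 divide
recip_mul = f32(x * f32(1.0 / y))  # multiply by an FP32 reciprocal
```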
758
759 Given that one has the ability to carry (and process) more fraction bits,
760 one can then do high precision multiplies of π or other transcendental
761 radixes.
762
763 And GPUs have been doing this almost since the dawn of 3D.
764
765 // calculate ATAN2 high performance style
766 // Note: at this point x != y
767 //
768 if( x > 0.0 )
769 {
770 if( y < 0.0 && |y| < |x| ) return - π/2 - ATAN( x / y );
771 if( y < 0.0 && |y| > |x| ) return + ATAN( y / x );
772 if( y > 0.0 && |y| < |x| ) return + ATAN( y / x );
773 if( y > 0.0 && |y| > |x| ) return + π/2 - ATAN( x / y );
774 }
775 if( x < 0.0 )
776 {
777 if( y < 0.0 && |y| < |x| ) return + π/2 + ATAN( x / y );
778 if( y < 0.0 && |y| > |x| ) return + π - ATAN( y / x );
779 if( y > 0.0 && |y| < |x| ) return + π - ATAN( y / x );
780 if( y > 0.0 && |y| > |x| ) return +3π/2 + ATAN( x / y );
781 }
782
This way the adds and subtracts of the constant are not in a
precision-precarious position.