# Zftrans - transcendental operations

Summary:

*This proposal extends RISC-V scalar floating point operations to add IEEE754 transcendental functions (pow, log etc.) and trigonometric functions (sin, cos etc.). These functions are also 98% shared with the Khronos Group OpenCL Extended Instruction Set.*

With thanks to:

* Jacob Lifshay
* Dan Petroski
* Mitch Alsup
* Allen Baum
* Andrew Waterman
* Luis Vitorio Cargnini

[[!toc levels=2]]

See:

* <http://bugs.libre-riscv.org/show_bug.cgi?id=127>
* <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
* Discussion: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002342.html>
* [[rv_major_opcode_1010011]] for opcode listing.
* [[zfpacc_proposal]] for accuracy settings proposal

Extension subsets:

* **Zftrans**: standard transcendentals (best suited to 3D)
* **ZftransExt**: extra functions (useful, but not generally needed for 3D;
  can be synthesised using Zftrans)
* **Ztrigpi**: trig. xxx-pi variants: sinpi, cospi, tanpi
* **Ztrignpi**: trig. non-xxx-pi variants: sin, cos, tan
* **Zarctrigpi**: arc-trig. a-xxx-pi variants: atan2pi, asinpi, acospi
* **Zarctrignpi**: arc-trig. non-a-xxx-pi variants: atan2, asin, acos
* **Zfhyp**: hyperbolic/inverse-hyperbolic: sinh, cosh, tanh, asinh,
  acosh, atanh (can be synthesised - see below)
* **ZftransAdv**: much more complex to implement in hardware
* **Zfrsqrt**: Reciprocal square-root.

Minimum recommended requirements for 3D: Zftrans, Ztrignpi,
Zarctrignpi, with Ztrigpi and Zarctrigpi as augmentations.

Minimum recommended requirements for Mobile-Embedded 3D: Ztrignpi,
Zftrans, with Ztrigpi as an augmentation.

# TODO:

* Decision on accuracy, moved to [[zfpacc_proposal]]
  <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002355.html>
* Errors **MUST** be repeatable.
* How about four Platform Specifications? 3DUNIX, UNIX, 3DEmbedded and Embedded?
  <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002361.html>
  Accuracy requirements for dual (triple) purpose implementations must
  meet the higher standard.
* Reciprocal Square-root is in its own separate extension (Zfrsqrt) as
  it is desirable on its own to other implementors. This is to be evaluated.

# Requirements <a name="requirements"></a>

This proposal is designed to meet a wide range of extremely diverse needs,
allowing implementors from all of them to benefit from the tools and hardware
cost reductions associated with common standards adoption in RISC-V
(primarily IEEE754 and Vulkan).

**There are *four* different, disparate platforms' needs (two new)**:

* 3D Embedded Platform (new)
* Embedded Platform
* 3D UNIX Platform (new)
* UNIX Platform

**The use-cases are**:

* 3D GPUs
* Numerical Computation
* (Potentially) A.I. / Machine-learning (1)

(1) Although approximations suffice in this field, making it more likely
to use a custom extension; high-end ML would in any case be excluded.

**The power and die-area requirements vary from**:

* Ultra-low-power (smartwatches where GPU power budgets are in milliwatts)
* Mobile-Embedded (good performance with high efficiency for battery life)
* Desktop Computing
* Server / HPC (2)

(2) Supercomputing is left out of the requirements as it is traditionally
covered by Supercomputer Vectorisation Standards (such as RVV).

**The software requirements are**:

* Full public integration into GNU math libraries (libm)
* Full public integration into well-known Numerical Computation systems (numpy)
* Full public integration into upstream GNU and LLVM Compiler toolchains
* Full public integration into Khronos OpenCL SPIR-V compatible Compilers
  seeking public Certification and Endorsement from the Khronos Group
  under their Trademarked Certification Programme.

**The "contra"-requirements are**:

* NOT for use with RVV (RISC-V Vector Extension). These are *scalar* opcodes.
  Ultra-low-power Embedded platforms (smart watches) are sufficiently
  resource-constrained that Vectorisation (of any kind) is likely to be
  unnecessary and inappropriate.
* The requirements are **not** for the purposes of developing a full custom
  proprietary GPU with proprietary firmware, driven by *hardware*-centric
  optimised design decisions as a priority over collaboration.
* A full custom proprietary GPU ASIC Manufacturer *may* benefit from
  this proposal; however, the fact that they typically develop proprietary
  software that is not shared with the rest of the community likely to
  use this proposal means that they have completely different needs.
* This proposal is for *sharing* of effort in reducing development costs.

# Requirements Analysis <a name="requirements_analysis"></a>

**Platforms**:

3D Embedded will require significantly less accuracy and will need to make
power budget and die area compromises that other platforms (including Embedded)
will not need to make.

3D UNIX Platform has to be performance-price-competitive: subtly-reduced
accuracy in FP32 is acceptable where, conversely, in the UNIX Platform,
IEEE754 compliance is a hard requirement that would compromise power
and efficiency on a 3D UNIX Platform.

Even in the Embedded platform, IEEE754 interoperability is beneficial,
where if it were a hard requirement the 3D Embedded platform would be severely
compromised in its ability to meet the demanding power budgets of that market.

Thus, learning from the lessons of
[SIMD considered harmful](https://www.sigarch.org/simd-instructions-considered-harmful/)
this proposal works in conjunction with the [[zfpacc_proposal]], so as
not to overburden the OP32 ISA space with extra "reduced-accuracy" opcodes.

**Use-cases**:

There really is little else in the way of suitable markets. 3D GPUs
have extremely competitive power-efficiency and power-budget requirements
that are completely at odds with the market at the other end of
the spectrum: Numerical Computation.

Interoperability in Numerical Computation is absolutely critical: it
implies (correlates directly with) IEEE754 compliance. However, full
IEEE754 compliance automatically and inherently penalises a GPU on
performance and die area, where full accuracy is simply not necessary.

To meet the needs of both markets, the two new platforms have to be created,
and [[zfpacc_proposal]] is a critical dependency. Runtime selection of
FP accuracy allows an implementation to be "Hybrid" - covering UNIX IEEE754
compliance *and* 3D performance in a single ASIC.

**Power and die-area requirements**:

This is where the conflicts really start to hit home.

A "Numerical High-performance only" proposal (suitable for Server / HPC
only) would customise and target the Extension based on a quantitative
analysis of the value of certain opcodes *for HPC only*. It would
conclude, reasonably and rationally, that it is worthwhile adding opcodes
to RVV as parallel Vector operations, and that further discussion of
the matter is pointless.

A "Proprietary GPU effort" (even one that was intended for publication
of its API through, for example, a public libre-licensed Vulkan SPIR-V
Compiler) would conclude, reasonably and rationally, that, likewise, the
opcodes were best suited to be added to RVV, and, further, that their
requirements conflict with the HPC world, due to the reduced accuracy.
This on the basis that the silicon die area required for IEEE754 is far
greater than that needed for reduced accuracy, and thus their product
would be completely unacceptable in the market if it had to meet IEEE754,
unnecessarily.

An "Embedded 3D" GPU has radically different performance, power
and die-area requirements (and may even target SoftCores in FPGA).
Sharing of the silicon to cover multi-function uses (CORDIC for example)
is absolutely essential in order to keep cost and power down, whereas high
performance simply is not. Multi-cycle FSMs instead of pipelines may
be considered acceptable, and so on. Subsets of functionality are
also essential.

An "Embedded Numerical" platform has requirements that are separate and
distinct from all of the above!

Mobile Computing needs (tablets, smartphones) again pull in a different
direction: high performance, reasonable accuracy, but efficiency is
critical. Screen sizes are not at the 4K range: they are within the
800x600 range at the low end (320x240 at the extreme budget end), and
only the high-performance smartphones and tablets provide 1080p (1920x1080).
With lower resolution, accuracy compromises are possible which the Desktop
market (4K and soon to be above) would find unacceptable.

Meeting these disparate markets may be achieved, again, through
[[zfpacc_proposal]], by subdividing into four platforms, and, in addition,
by subdividing the extension into subsets that best suit the different
market areas.

**Software requirements**:

A "custom" extension is developed in near-complete isolation from the
rest of the RISC-V Community. Cost savings to the Corporation are
large, with no direct beneficial feedback to (or impact on) the rest
of the RISC-V ecosystem.

However, given that 3D revolves around Standards - DirectX, Vulkan, OpenGL,
OpenCL - users have much more influence than first appears. Compliance
with these standards is critical, as the userbase (games writers,
scientific applications) expects not to have to rewrite extremely large
and costly codebases to conform with *non-standards-compliant* hardware.

Therefore, compliance with public APIs (Vulkan, OpenCL, OpenGL, DirectX)
is paramount, and compliance with Trademarked Standards is critical.
Any deviation from Trademarked Standards means that an implementation
may not be sold whilst claiming to be, for example, "Vulkan
compatible".

For 3D, this in turn reinforces and makes a hard requirement a need for public
compliance with such standards, over and above what would otherwise be
set by a RISC-V Standards Development Process, covering both the
software compliance and the knock-on implications that has for hardware.

For libraries such as libm and numpy, accuracy is paramount, for software
interoperability across multiple platforms. Some algorithms critically
rely on correct IEEE754 behaviour, for example.
The conflicting accuracy requirements can be met through the zfpacc extension.

**Collaboration**:

The case for collaboration on any Extension is already well-known.
In this particular case, the precedent for inclusion of Transcendentals
in other ISAs, from both Graphics and High-performance Computing, has
these primitives well-established in high-profile software libraries and
compilers in both GPU and HPC Computer Science divisions. Collaboration
and shared public compliance with those standards brooks no argument.

The combined requirements of collaboration and multiple accuracy requirements
mean that *overall this proposal is categorically and wholly unsuited
to relegation to "custom" status*.

# Quantitative Analysis <a name="analysis"></a>

This is extremely challenging. Normally, an Extension would require full,
comprehensive and detailed analysis of every single instruction, for every
single possible use-case, in every single market. The amount of silicon
area required would be balanced against the benefits of introducing extra
opcodes, as well as a full market analysis performed to see which divisions
of Computer Science benefit from the introduction of the instruction,
in each and every case.

With 34 instructions, four possible Platforms, and sub-categories of
implementations even within each Platform, performing over 136 separate
and distinct analyses is not a practical proposition.
A little more intelligence has to be applied to the problem space,
to reduce it down to manageable levels.

Fortunately, the subdivision by Platform, in combination with the
identification of only two primary markets (Numerical Computation and
3D), means that the logical reasoning applies *uniformly* and broadly
across *groups* of instructions rather than individually, making it a primarily
hardware-centric and accuracy-centric decision-making process.

In addition, hardware algorithms such as CORDIC can cover such a wide
range of operations (simply by changing the input parameters) that the
normal argument of compromising and excluding certain opcodes because they
would significantly increase the silicon area is knocked down.

However, CORDIC, whilst space-efficient, and thus well-suited to
Embedded, is an old iterative algorithm not well-suited to High-Performance
Computing or Mid to High-end GPUs, where commercially-competitive
FP32 pipeline lengths are only around 5 stages.

Not only that, but some operations such as LOG1P, which would normally
be excluded from one market (due to there being an alternative macro-op
fused sequence replacing it) are required for other markets due to
the higher accuracy obtainable at the lower range of input values when
compared to LOG(1+P).
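The LOG1P accuracy point is visible even with ordinary double-precision software (a sketch using libm as a stand-in for the proposed opcodes, not the hardware itself):

```python
import math

# For tiny x, computing 1.0 + x rounds away all of x before LOG ever
# sees it, whereas a dedicated LOG1P keeps the small argument intact.
x = 1e-18
naive = math.log(1.0 + x)   # 1.0 + 1e-18 rounds to exactly 1.0, giving 0.0
exact = math.log1p(x)       # ~1e-18, correct to full precision

print(naive, exact)
```

The same loss of significance, scaled to FP32, is why a macro-op fused LOG(1+x) cannot substitute for a true LOG1P at the bottom of the input range.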

(Thus we start to see why "proprietary" markets are excluded from this
proposal, because "proprietary" markets would make *hardware*-driven
optimisation decisions that would be completely inappropriate for a
common standard).

ATAN and ATAN2 are another example area in which one market's needs
conflict directly with another: the only viable solution, without compromising
one market to the detriment of the other, is to provide both opcodes
and let implementors make the call as to which (or both) to optimise,
at the *hardware* level.

Likewise it is well-known that loops involving "0 to 2 times pi", often
done in subdivisions of powers of two, are costly to do because they
involve floating-point multiplication by PI in each and every loop.
3D GPUs solved this by providing SINPI variants which range from 0 to 1
and perform the multiply *inside* the hardware itself. In the case of
CORDIC, it turns out that the multiply by PI is not even needed (it is a
loop-invariant magic constant).
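A toy rotation-mode CORDIC sketch (double precision, purely illustrative, not the proposed hardware) shows how one iterative unit covers both SIN and COS, and why constants such as the gain - and, for SINPI, the PI scaling of the angle table - are loop-invariant:

```python
import math

ITER = 32
# arctan(2^-i) angle table; in hardware this is a small ROM
ANGLES = [math.atan(2.0 ** -i) for i in range(ITER)]
K = 1.0
for a in ANGLES:
    K *= math.cos(a)          # CORDIC gain: a loop-invariant constant

def cordic_sincos(theta):
    """Rotation-mode CORDIC: returns (sin, cos) for theta in [-pi/2, pi/2]."""
    x, y, z = K, 0.0, theta   # start pre-scaled by the gain
    for i, a in enumerate(ANGLES):
        d = 1.0 if z >= 0.0 else -1.0
        # shift-and-add micro-rotation (the 2^-i factors are shifts in HW)
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return y, x

s, c = cordic_sincos(0.5)
print(s, c)  # close to math.sin(0.5), math.cos(0.5)
```

For a SINPI variant, the angle table (or the initial `z`) is simply pre-scaled by PI once, which is the "loop invariant magic constant" point above.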

However, some markets may not wish to *use* CORDIC, for reasons mentioned
above, and, again, one market would be penalised if SINPI were prioritised
over SIN, or vice-versa.

In essence, then, even when only the two primary markets (3D and
Numerical Computation) have been identified, this still leaves two
(three) diametrically-opposed *accuracy* sub-markets as the prime
conflict drivers:

* Embedded Ultra Low Power
* IEEE754 compliance
* Khronos Vulkan compliance

Thus the best that can be done is to use Quantitative Analysis to work
out which "subsets" - sub-Extensions - to include, provide an additional
"accuracy" extension, be as "inclusive" as possible, and thus allow
implementors to decide what to add to their implementation, and how best
to optimise them.

This approach *only* works due to the uniformity of the function space,
and is **not** an appropriate methodology for use in other Extensions
with huge (non-uniform) market diversity even with similarly large
numbers of potential opcodes. BitManip is the perfect counter-example.

# Proposed Opcodes vs Khronos OpenCL vs IEEE754-2019 <a name="khronos_equiv"></a>

This list shows the (direct) equivalence between proposed opcodes,
their Khronos OpenCL equivalents, and their IEEE754-2019 equivalents.
98% of the opcodes in this proposal that are in the IEEE754-2019 standard
are present in the Khronos Extended Instruction Set.

For RISC-V opcode encodings see
[[rv_major_opcode_1010011]].

See
<https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
and <https://ieeexplore.ieee.org/document/8766229>.

* Special FP16 opcodes are *not* being proposed, except by indirect / inherent
  use of the "fmt" field that is already present in the RISC-V Specification.
* "Native" opcodes are *not* being proposed: implementors will be expected
  to use the (equivalent) proposed opcode covering the same function.
* "Fast" opcodes are *not* being proposed, because the Khronos Specification
  fast\_length, fast\_normalise and fast\_distance OpenCL opcodes require
  vectors (or can be done as scalar operations using other RISC-V instructions).

The OpenCL FP32 opcodes are **direct** equivalents to the proposed opcodes.
Deviation from conformance with the Khronos Specification - including the
Khronos Specification accuracy requirements - is not an option, as it
results in non-compliance, and the vendor may not use the Trademarked words
"Vulkan" etc. in conjunction with their product.

IEEE754-2019 Table 9.1 lists "additional mathematical operations".
Interestingly, the only functions missing when compared to OpenCL are
compound, exp2m1, exp10m1, log2p1 and log10p1.

[[!table data="""
opcode | OpenCL FP32 | OpenCL FP16 | OpenCL native | OpenCL fast | IEEE754 |
FSIN | sin | half\_sin | native\_sin | NONE | sin |
FCOS | cos | half\_cos | native\_cos | NONE | cos |
FTAN | tan | half\_tan | native\_tan | NONE | tan |
NONE (1) | sincos | NONE | NONE | NONE | NONE |
FASIN | asin | NONE | NONE | NONE | asin |
FACOS | acos | NONE | NONE | NONE | acos |
FATAN | atan | NONE | NONE | NONE | atan |
FSINPI | sinpi | NONE | NONE | NONE | sinPi |
FCOSPI | cospi | NONE | NONE | NONE | cosPi |
FTANPI | tanpi | NONE | NONE | NONE | tanPi |
FASINPI | asinpi | NONE | NONE | NONE | asinPi |
FACOSPI | acospi | NONE | NONE | NONE | acosPi |
FATANPI | atanpi | NONE | NONE | NONE | atanPi |
FSINH | sinh | NONE | NONE | NONE | sinh |
FCOSH | cosh | NONE | NONE | NONE | cosh |
FTANH | tanh | NONE | NONE | NONE | tanh |
FASINH | asinh | NONE | NONE | NONE | asinh |
FACOSH | acosh | NONE | NONE | NONE | acosh |
FATANH | atanh | NONE | NONE | NONE | atanh |
FATAN2 | atan2 | NONE | NONE | NONE | atan2 |
FATAN2PI | atan2pi | NONE | NONE | NONE | atan2pi |
FRSQRT | rsqrt | half\_rsqrt | native\_rsqrt | NONE | rSqrt |
FCBRT | cbrt | NONE | NONE | NONE | NONE (2) |
FEXP2 | exp2 | half\_exp2 | native\_exp2 | NONE | exp2 |
FLOG2 | log2 | half\_log2 | native\_log2 | NONE | log2 |
FEXPM1 | expm1 | NONE | NONE | NONE | expm1 |
FLOG1P | log1p | NONE | NONE | NONE | logp1 |
FEXP | exp | half\_exp | native\_exp | NONE | exp |
FLOG | log | half\_log | native\_log | NONE | log |
FEXP10 | exp10 | half\_exp10 | native\_exp10 | NONE | exp10 |
FLOG10 | log10 | half\_log10 | native\_log10 | NONE | log10 |
FPOW | pow | NONE | NONE | NONE | pow |
FPOWN | pown | NONE | NONE | NONE | pown |
FPOWR | powr | NONE | NONE | NONE | powr |
FROOTN | rootn | NONE | NONE | NONE | rootn |
FHYPOT | hypot | NONE | NONE | NONE | hypot |
FRECIP | NONE | half\_recip | native\_recip | NONE | NONE (3) |
NONE | NONE | NONE | NONE | NONE | compound |
NONE | NONE | NONE | NONE | NONE | exp2m1 |
NONE | NONE | NONE | NONE | NONE | exp10m1 |
NONE | NONE | NONE | NONE | NONE | log2p1 |
NONE | NONE | NONE | NONE | NONE | log10p1 |
"""]]

Note (1) FSINCOS is macro-op fused (see below).

Note (2) synthesised in IEEE754-2019 as "rootn(x, 3)"

Note (3) synthesised in IEEE754-2019 using "1.0 / x"

## List of 2-arg opcodes

[[!table data="""
opcode | Description | pseudocode | Extension |
FATAN2 | atan2 arc tangent | rd = atan2(rs2, rs1) | Zarctrignpi |
FATAN2PI | atan2 arc tangent / pi | rd = atan2(rs2, rs1) / pi | Zarctrigpi |
FPOW | x power of y | rd = pow(rs1, rs2) | ZftransAdv |
FPOWN | x power of n (n int) | rd = pow(rs1, rs2) | ZftransAdv |
FPOWR | x power of y (x +ve) | rd = exp(rs2 * log(rs1)) | ZftransAdv |
FROOTN | x power 1/n (n integer)| rd = pow(rs1, 1/rs2) | ZftransAdv |
FHYPOT | hypotenuse | rd = sqrt(rs1^2 + rs2^2) | ZftransAdv |
"""]]
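Note that the FHYPOT pseudocode is mathematical shorthand: a literal sqrt(rs1^2 + rs2^2) overflows on intermediate values that a true hypot (hardware or libm, used here purely as illustration) handles by internal rescaling:

```python
import math

big = 1e200
naive = math.sqrt(big * big + big * big)  # big*big overflows to inf
safe = math.hypot(big, big)               # rescales internally: ~1.414e200

print(naive, safe)
```

A conforming FHYPOT implementation would be expected to avoid this spurious intermediate overflow rather than follow the shorthand literally.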

## List of 1-arg transcendental opcodes

[[!table data="""
opcode | Description | pseudocode | Extension |
FRSQRT | Reciprocal Square-root | rd = 1.0 / sqrt(rs1) | Zfrsqrt |
FCBRT | Cube Root | rd = pow(rs1, 1.0 / 3) | ZftransAdv |
FRECIP | Reciprocal | rd = 1.0 / rs1 | Zftrans |
FEXP2 | power-of-2 | rd = pow(2, rs1) | Zftrans |
FLOG2 | log2 | rd = log(2, rs1) | Zftrans |
FEXPM1 | exponential minus 1 | rd = pow(e, rs1) - 1.0 | ZftransExt |
FLOG1P | log plus 1 | rd = log(e, 1 + rs1) | ZftransExt |
FEXP | exponential | rd = pow(e, rs1) | ZftransExt |
FLOG | natural log (base e) | rd = log(e, rs1) | ZftransExt |
FEXP10 | power-of-10 | rd = pow(10, rs1) | ZftransExt |
FLOG10 | log base 10 | rd = log(10, rs1) | ZftransExt |
"""]]

## List of 1-arg trigonometric opcodes

[[!table data="""
opcode | Description | pseudo-code | Extension |
FSIN | sin (radians) | rd = sin(rs1) | Ztrignpi |
FCOS | cos (radians) | rd = cos(rs1) | Ztrignpi |
FTAN | tan (radians) | rd = tan(rs1) | Ztrignpi |
FASIN | arcsin (radians) | rd = asin(rs1) | Zarctrignpi |
FACOS | arccos (radians) | rd = acos(rs1) | Zarctrignpi |
FATAN | arctan (radians) | rd = atan(rs1) | Zarctrignpi |
FSINPI | sin times pi | rd = sin(pi * rs1) | Ztrigpi |
FCOSPI | cos times pi | rd = cos(pi * rs1) | Ztrigpi |
FTANPI | tan times pi | rd = tan(pi * rs1) | Ztrigpi |
FASINPI | arcsin / pi | rd = asin(rs1) / pi | Zarctrigpi |
FACOSPI | arccos / pi | rd = acos(rs1) / pi | Zarctrigpi |
FATANPI | arctan / pi | rd = atan(rs1) / pi | Zarctrigpi |
FSINH | hyperbolic sin (radians) | rd = sinh(rs1) | Zfhyp |
FCOSH | hyperbolic cos (radians) | rd = cosh(rs1) | Zfhyp |
FTANH | hyperbolic tan (radians) | rd = tanh(rs1) | Zfhyp |
FASINH | inverse hyperbolic sin | rd = asinh(rs1) | Zfhyp |
FACOSH | inverse hyperbolic cos | rd = acosh(rs1) | Zfhyp |
FATANH | inverse hyperbolic tan | rd = atanh(rs1) | Zfhyp |
"""]]
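The practical difference between FSINPI and FSIN is visible even in software: because pi must be rounded before a radians-based SIN sees it, sin(pi * x) cannot return exact zeros at integer x, whereas a sinpi-style operation reduces the argument *before* the pi scaling. A hypothetical reference model (a sketch for moderate x, not the proposed hardware algorithm):

```python
import math

def sinpi(x):
    # Reduce x exactly first (x and 2*round(x/2) are both representable),
    # *then* scale by pi: integer inputs hit exact zeros.
    r = x - 2.0 * round(x / 2.0)          # r in [-1.0, 1.0]
    if r == 0.0 or abs(r) == 1.0:
        return 0.0
    return math.sin(math.pi * r)

print(math.sin(math.pi * 1.0))  # ~1.2e-16: rounded pi, not zero
print(sinpi(1.0))               # 0.0 exactly
```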

# Subsets

The full set is based on the Khronos OpenCL opcodes. If implemented
entirely it would be too much for both Embedded and also 3D.

The subsets are organised by hardware complexity and need (3D, HPC); however,
because synthesis produces inaccurate results at the range limits,
the less common subsets are still required for IEEE754 HPC.

MALI Midgard, an embedded / mobile 3D GPU, for example only has the
following opcodes:

    E8 - fatan_pt2
    F0 - frcp (reciprocal)
    F2 - frsqrt (inverse square root, 1/sqrt(x))
    F3 - fsqrt (square root)
    F4 - fexp2 (2^x)
    F5 - flog2
    F6 - fsin1pi
    F7 - fcos1pi
    F9 - fatan_pt1

These are in FP32 and FP16 only: there is no FP64 hardware at all.
Vivante Embedded/Mobile 3D (etnaviv
<https://github.com/laanwj/etna_viv/blob/master/rnndb/isa.xml>) only has
the following:

    sin, cos2pi
    cos, sin2pi
    log2, exp
    sqrt and rsqrt
    recip

It also has fast variants of some of these, as a CSR Mode.

AMD's R600 GPU (R600\_Instruction\_Set\_Architecture.pdf) and the
RDNA ISA (RDNA\_Shader\_ISA\_5August2019.pdf, Table 22, Section 6.3) have:

    COS2PI (appx)
    EXP2
    LOG (IEEE754)
    RECIP
    RSQRT
    SQRT
    SIN2PI (appx)

AMD RDNA has F16 and F32 variants of all the above, and also has F64
variants of SQRT, RSQRT and RECIP. It is interesting that even the
modern high-end AMD GPU does not have TAN or ATAN, where MALI Midgard
does.

As a general point, customised hardware optimised to target
FP32 3D with less accuracy can be used neither for IEEE754 nor
for FP64 (except as a starting point for hardware- or software-driven
Newton-Raphson or other iterative methods).

Also, in cost/area-sensitive applications even the extra ROM lookup tables
for certain algorithms may be too costly.

These wildly differing and incompatible driving factors lead to the
subset subdivisions, below.

## Transcendental Subsets

### Zftrans

LOG2 EXP2 RECIP RSQRT

Zftrans contains the minimum standard transcendentals best suited to
3D. They are also the minimum subset for synthesising log10, exp10,
expm1, log1p, the hyperbolic trigonometric functions sinh and so on.

They are therefore considered "base" (essential) transcendentals.

### ZftransExt

LOG, EXP, EXP10, LOG10, LOG1P, EXPM1

These are extra transcendental functions that are useful, and not generally
needed for 3D; however, for Numerical Computation they may be useful.

Although they can be synthesised using Zftrans (LOG2 multiplied
by a constant), there is both a performance penalty as well as an
accuracy penalty towards the limits, which for IEEE754 compliance is
unacceptable. In particular, LOG(1+rs1) in hardware may give much better
accuracy at the lower end (very small rs1) than LOG(rs1).

Their forced inclusion would be inappropriate as it would penalise
embedded systems with tight power and area budgets. However, if they
were completely excluded the HPC applications would be penalised on
performance and accuracy.

Therefore they are their own subset extension.
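A sketch of the synthesis (in double precision, with libm standing in for the proposed opcodes): LOG is one multiply away from LOG2, but the result is then rounded twice, so it cannot in general be the correctly-rounded LOG that IEEE754 HPC expects.

```python
import math

LN2 = math.log(2.0)          # loop-invariant constant, itself rounded

def log_via_log2(x):
    # Two rounded operations (log2, then a multiply) instead of one
    # correctly-rounded log: up to a couple of ULPs of extra error.
    return math.log2(x) * LN2

for x in (0.5, 3.0, 1e10):
    print(log_via_log2(x), math.log(x))
```

The values agree to within a few ULPs, which is precisely the margin that correctly-rounded IEEE754 operation does not permit.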

### Zfhyp

SINH, COSH, TANH, ASINH, ACOSH, ATANH

These are the hyperbolic/inverse-hyperbolic functions. Their use in 3D
is limited.

They can all be synthesised using LOG, SQRT and so on, so depend
on Zftrans. However, once again, at the limits of the range, IEEE754
compliance becomes impossible, and thus a hardware implementation may
be required.

HPC and high-end GPUs are likely markets for these.

### ZftransAdv

CBRT, POW, POWN, POWR, ROOTN

These are simply much more complex to implement in hardware, and typically
will only be put into HPC applications.

### Zfrsqrt

FRSQRT

Reciprocal square-root, kept in its own subset as it is desirable on its
own to other implementors (see TODO, above).

## Trigonometric subsets

### Ztrigpi vs Ztrignpi

* **Ztrigpi**: SINPI COSPI TANPI
* **Ztrignpi**: SIN COS TAN

Ztrignpi are the basic trigonometric functions through which all others
could be synthesised, and they are typically the base trigonometrics
provided by GPUs for 3D, warranting their own subset.

In the case of the Ztrigpi subset, these are commonly used in for-loops
with a power-of-two number of subdivisions, and the cost of multiplying
by PI inside each loop (or cumulative addition, resulting in cumulative
errors) is not acceptable.

In, for example, CORDIC the multiplication by PI may be moved outside of
the hardware algorithm as a loop invariant, with no power or area penalty.

Again, therefore, if SINPI (etc.) were excluded, programmers would be
penalised by being forced to divide by PI in some circumstances. Likewise,
if SIN were excluded, programmers would be penalised by being forced to
*multiply* by PI in some circumstances.

Thus again, a slightly different application of the same general argument
applies to give Ztrignpi and Ztrigpi as subsets. 3D GPUs will almost
certainly provide both.

### Zarctrigpi and Zarctrignpi

* **Zarctrigpi**: ATAN2PI ASINPI ACOSPI
* **Zarctrignpi**: ATAN2 ACOS ASIN

These are extra trigonometric functions that are useful in some
applications, but even for 3D GPUs - particularly embedded and mobile class
GPUs - they are not so common, and so are typically synthesised there.

Although they can be synthesised using Ztrigpi and Ztrignpi, there is,
once again, both a performance penalty as well as an accuracy penalty
towards the limits, which for IEEE754 compliance is unacceptable, yet
is acceptable for 3D.

Therefore they are their own subset extensions.

# Synthesis, Pseudo-code ops and macro-ops

The pseudo-ops are best left up to the compiler rather than being actual
opcodes, by allocating one scalar FP register for use as a constant
(loop invariant) set to "1.0" at the beginning of a function or other
suitable code block.

* FSINCOS - fused macro-op between FSIN and FCOS (issued in that order).
* FSINCOSPI - fused macro-op between FSINPI and FCOSPI (issued in that order).

FATANPI example pseudo-code:

    lui t0, 0x3F800 // upper bits of f32 1.0
    fmv.w.x ft0, t0
    fatan2pi.s rd, rs1, ft0
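The identity being exploited here is atan(x) = atan2(x, 1.0) (and likewise for the pi variants); a quick double-precision check, with libm standing in for the proposed opcodes:

```python
import math

# FATAN / FATANPI need no opcode of their own: pin a register to 1.0
# and use the two-argument form instead.
for x in (-100.0, -0.5, 0.0, 0.7, 3.0):
    diff = abs(math.atan2(x, 1.0) - math.atan(x))
    assert diff < 1e-15, (x, diff)
print("atan(x) matches atan2(x, 1.0) to within rounding")
```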

Hyperbolic function example (obviates need for Zfhyp except for
high-performance or correctly-rounding implementations):

    ASINH( x ) = ln( x + SQRT(x**2+1))
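The "at the limits of the range" caveat from the Zfhyp section is easy to demonstrate in double precision (libm as a stand-in): the synthesised form matches a native ASINH for moderate inputs but overflows where the true function is still comfortably finite.

```python
import math

def asinh_synth(x):
    # ASINH(x) = ln(x + sqrt(x**2 + 1)): fine for moderate x, but
    # x*x overflows long before asinh itself does (and for large
    # negative x the addition cancels catastrophically).
    return math.log(x + math.sqrt(x * x + 1.0))

print(asinh_synth(1.0), math.asinh(1.0))  # agree to ~1 ULP
print(asinh_synth(1e200))                 # inf: x*x overflowed
print(math.asinh(1e200))                  # ~461.2: still finite
```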

# Evaluation and commentary

This section will move later to discussion.

## Reciprocal

FRECIP used to be an alias. Some implementors may wish to implement divide as
y times recip(x).

Others may have shared hardware for recip and divide; others may not.

To avoid penalising one implementor over another, recip stays.

## To evaluate: should LOG be replaced with LOG1P (and EXP with EXPM1)?

The RISC principle says "exclude LOG because it's covered by LOG1P plus an ADD".
Research is needed to ensure that implementors are not compromised by such
a decision:
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002358.html>

> > correctly-rounded LOG will return different results than LOG1P and ADD.
> > Likewise for EXP and EXPM1

> ok, they stay in as real opcodes, then.

## ATAN / ATAN2 commentary

Discussion starts here:
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002470.html>

From Mitch Alsup:

would like to point out that the general implementations of ATAN2 do a
bunch of special case checks and then simply call ATAN.

    double ATAN2( double y, double x )
    {   // IEEE 754-2008 quality ATAN2

        // deal with NANs
        if( ISNAN( x ) ) return x;
        if( ISNAN( y ) ) return y;

        // deal with infinities
        if( x == +∞ && |y| == +∞ ) return copysign( π/4, y );
        if( x == +∞ ) return copysign( 0.0, y );
        if( x == -∞ && |y| == +∞ ) return copysign( 3π/4, y );
        if( x == -∞ ) return copysign( π, y );
        if( |y| == +∞ ) return copysign( π/2, y );

        // deal with signed zeros
        if( x == 0.0 && y != 0.0 ) return copysign( π/2, y );
        if( x >= +0.0 && y == 0.0 ) return copysign( 0.0, y );
        if( x <= -0.0 && y == 0.0 ) return copysign( π, y );

        // calculate ATAN2 textbook style
        if( x > 0.0 ) return ATAN( |y / x| );
        if( x < 0.0 ) return π - ATAN( |y / x| );
    }


Yet the proposed encoding makes ATAN2 the primitive and has ATAN invent
a constant and then call/use ATAN2.

When one considers an implementation of ATAN, one must consider several
ranges of evaluation::

    x [ -∞, -1.0]:: ATAN( x ) = -π/2 + ATAN( 1/x );
    x (-1.0, +1.0]:: ATAN( x ) = + ATAN( x );
    x [ 1.0, +∞]:: ATAN( x ) = +π/2 - ATAN( 1/x );

I should point out that the add/sub of π/2 can not lose significance
since the result of ATAN(1/x) is bounded 0..π/2
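The range-reduction identities can be checked numerically. This sketch uses the signed form atan(x) = ±π/2 − atan(1/x) (equivalent to the magnitude-based statement above, in which ATAN(1/x) is bounded 0..π/2):

```python
import math

def atan_reduced(x):
    # Fold |x| >= 1.0 into the bounded interval via atan(1/x); the
    # pi/2 add/sub cannot lose significance, as noted above.
    if x >= 1.0:
        return math.pi / 2.0 - math.atan(1.0 / x)
    if x <= -1.0:
        return -math.pi / 2.0 - math.atan(1.0 / x)
    return math.atan(x)

for x in (-50.0, -1.5, -0.3, 0.7, 3.0, 1e6):
    assert abs(atan_reduced(x) - math.atan(x)) < 1e-15
print("range-reduced ATAN agrees with libm atan")
```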

The bottom line is that I think you are choosing to make too many of
these into OpCodes, making the hardware function/calculation unit (and
sequencer) more complicated than necessary.

--------------------------------------------------------

We therefore, I think, have a case for bringing back ATAN and including ATAN2.

The reason is that whilst a microcode-like GPU-centric platform would do
ATAN2 in terms of ATAN, a UNIX-centric platform would do it the other
way round.

(That is the hypothesis, to be evaluated for correctness. Feedback requested.)

This is because we cannot compromise or prioritise one platform's
speed/accuracy over another. That is not reasonable or desirable, to
penalise one implementor over another.

Thus, to keep interoperability, all implementors must have both
opcodes, and may choose, at the architectural and routing level, which
one to implement in terms of the other.

Allowing implementors to choose to add either opcode and let traps sort it
out leaves an uncertainty in the software developer's mind: they cannot
trust the hardware, available from many vendors, to be performant right
across the board.

Standards are a pig.

---

I might suggest that if there were a way for a calculation to be performed
and the result of that calculation chained to a subsequent calculation
such that the precision of the result-becomes-operand is wider than
what will fit in a register, then you can dramatically reduce the count
of instructions in this category while retaining

acceptable accuracy:

    z = x / y

can be calculated as::

    z = x * (1/y)

Where 1/y has about 26-to-32 bits of fraction. No, it's not IEEE 754-2008
accurate, but GPUs want speed and

1/y is fully pipelined (F32) while x/y cannot be (at reasonable area). It
is also not "that inaccurate" displaying 0.625-to-0.52 ULP.

Given that one has the ability to carry (and process) more fraction bits,
one can then do high precision multiplies of π or other transcendental
radixes.

And GPUs have been doing this almost since the dawn of 3D.

    // calculate ATAN2 high performance style
    // Note: at this point x != y
    //
    if( x > 0.0 )
    {
        if( y < 0.0 && |y| < |x| ) return - π/2 - ATAN( x / y );
        if( y < 0.0 && |y| > |x| ) return + ATAN( y / x );
        if( y > 0.0 && |y| < |x| ) return + ATAN( y / x );
        if( y > 0.0 && |y| > |x| ) return + π/2 - ATAN( x / y );
    }
    if( x < 0.0 )
    {
        if( y < 0.0 && |y| < |x| ) return + π/2 + ATAN( x / y );
        if( y < 0.0 && |y| > |x| ) return + π - ATAN( y / x );
        if( y > 0.0 && |y| < |x| ) return + π - ATAN( y / x );
        if( y > 0.0 && |y| > |x| ) return +3π/2 + ATAN( x / y );
    }

This way the adds and subtracts from the constant are not in a
precision-precarious position.