# Zftrans - transcendental operations

With thanks to:

* Jacob Lifshay
* Dan Petroski
* Mitch Alsup
* Allen Baum
* Andrew Waterman
* Luis Vitorio Cargnini

[[!toc levels=2]]

See:

* <http://bugs.libre-riscv.org/show_bug.cgi?id=127>
* <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
* Discussion: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002342.html>
* [[rv_major_opcode_1010011]] for opcode listing.
* [[zfpacc_proposal]] for accuracy settings proposal

Extension subsets:

* **Zftrans**: standard transcendentals (best suited to 3D)
* **ZftransExt**: extra functions (useful, not generally needed for 3D,
  can be synthesised using Zftrans)
* **Ztrigpi**: trig. xxx-pi: sinpi cospi tanpi
* **Ztrignpi**: trig. non-xxx-pi: sin cos tan
* **Zarctrigpi**: arc-trig. a-xxx-pi: atan2pi asinpi acospi
* **Zarctrignpi**: arc-trig. non-a-xxx-pi: atan2, asin, acos
* **Zfhyp**: hyperbolic/inverse-hyperbolic: sinh, cosh, tanh, asinh,
  acosh, atanh (can be synthesised - see below)
* **ZftransAdv**: much more complex to implement in hardware
* **Zfrsqrt**: Reciprocal square-root.

Minimum recommended requirements for 3D: Zftrans, Ztrigpi, Ztrignpi,
Zarctrigpi, Zarctrignpi

Minimum recommended requirements for Mobile-Embedded 3D: Ztrigpi, Zftrans,
Ztrignpi

# TODO:

* Decision on accuracy, moved to [[zfpacc_proposal]]
  <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002355.html>
* Errors **MUST** be repeatable.
* How about four Platform Specifications? 3DUNIX, UNIX, 3DEmbedded and Embedded?
  <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002361.html>
  Accuracy requirements for dual (or triple) purpose implementations must
  meet the higher standard.
* Reciprocal Square-root is in its own separate extension (Zfrsqrt) as
  it is desirable on its own by other implementors. This is to be evaluated.

# Requirements <a name="requirements"></a>

This proposal is designed to meet a wide range of extremely diverse needs,
allowing implementors from all of them to benefit from the tools and hardware
cost reductions associated with common standards adoption.

**There are *four* different, disparate platforms' needs (two new)**:

* 3D Embedded Platform (new)
* Embedded Platform
* 3D UNIX Platform (new)
* UNIX Platform

**The use-cases are**:

* 3D GPUs
* Numerical Computation
* (Potentially) A.I. / Machine-learning (1)

(1) Although approximations suffice in this field, a custom extension is
the more likely route; high-end ML would almost certainly be excluded.

**The power and die-area requirements vary from**:

* Ultra-low-power (smartwatches where GPU power budgets are in milliwatts)
* Mobile-Embedded (good performance with high efficiency for battery life)
* Desktop Computing
* Server / HPC (2)

(2) Supercomputing is left out of the requirements as it is traditionally
covered by Supercomputer Vectorisation Standards (such as RVV).

**The software requirements are**:

* Full public integration into GNU math libraries (libm)
* Full public integration into well-known Numerical Computation systems (numpy)
* Full public integration into upstream GNU and LLVM Compiler toolchains
* Full public integration into Khronos OpenCL SPIR-V compatible Compilers
  seeking public Certification and Endorsement from the Khronos Group
  under their Trademarked Certification Programme.

**The "contra"-requirements are**:

* The requirements are **not** for the purposes of developing a full custom
  proprietary GPU with proprietary firmware, driven by *hardware*-centric
  optimised design decisions as a priority over collaboration.
* A full custom proprietary GPU ASIC Manufacturer *may* benefit from this
  proposal; however, the fact that they typically develop proprietary
  software that is not shared with the rest of the community likely to
  use this proposal means that they have completely different needs.
* This proposal is for *sharing* of effort in reducing development costs.

# Requirements Analysis <a name="requirements_analysis"></a>

**Platforms**:

3D Embedded will require significantly less accuracy and will need to make
power budget and die area compromises that other platforms (including Embedded)
will not need to make.

The 3D UNIX Platform has to be performance-price-competitive: subtly-reduced
accuracy in FP32 is acceptable there. Conversely, in the UNIX Platform,
IEEE754 compliance is a hard requirement, and imposing it would compromise
the power and efficiency of a 3D UNIX Platform.

Even in the Embedded platform, IEEE754 interoperability is beneficial;
however, if it were a hard requirement, the 3D Embedded platform would be
severely compromised in its ability to meet the demanding power budgets of
that market.

Thus, learning from the lessons of
[SIMD considered harmful](https://www.sigarch.org/simd-instructions-considered-harmful/)
this proposal works in conjunction with the [[zfpacc_proposal]], so as
not to overburden the OP32 ISA space with extra "reduced-accuracy" opcodes.

**Use-cases**:

There really is little else in the way of suitable markets. 3D GPUs
have extremely competitive power-efficiency and power-budget requirements
that are completely at odds with the market at the other end of
the spectrum: Numerical Computation.

Interoperability in Numerical Computation is absolutely critical: it
correlates directly with IEEE754 compliance. However, full IEEE754
compliance automatically and inherently penalises a GPU on performance
and die area, where that accuracy is simply not necessary.

To meet the needs of both markets, the two new platforms have to be created,
and [[zfpacc_proposal]] is a critical dependency. Runtime selection of
FP accuracy allows an implementation to be "Hybrid": covering UNIX IEEE754
compliance *and* 3D performance in a single ASIC.

**Power and die-area requirements**:

This is where the conflicts really start to hit home.

A "Numerical High Performance only" proposal (suitable for Server / HPC
only) would customise and target the Extension based on a quantitative
analysis of the value of certain opcodes *for HPC only*. It would
conclude, reasonably and rationally, that it is worthwhile adding opcodes
to RVV as parallel Vector operations, and that further discussion of
the matter is pointless.

A "Proprietary GPU effort" (even one that was intended for publication
of its API through, for example, a public libre-licensed Vulkan SPIR-V
Compiler) would conclude, reasonably and rationally, that, likewise, the
opcodes were best suited to be added to RVV, and, further, that their
requirements conflict with the HPC world, due to the reduced accuracy.
This on the basis that the silicon die area required for IEEE754 is far
greater than that needed for reduced accuracy, and thus their product
would be completely unacceptable in the market if it unnecessarily had
to meet IEEE754.

An "Embedded 3D" GPU has radically different performance, power
and die-area requirements (and may even target SoftCores in FPGA).
Sharing of the silicon to cover multi-function uses (CORDIC for example)
is absolutely essential in order to keep cost and power down, and high
performance simply is not. Multi-cycle FSMs instead of pipelines may
be considered acceptable, and so on. Subsets of functionality are
also essential.

An "Embedded Numerical" platform has requirements that are separate and
distinct from all of the above!

Mobile Computing needs (tablets, smartphones) again pull in a different
direction: high performance and reasonable accuracy, but efficiency is
critical. Screen sizes are not at the 4K range: they are within the
800x600 range at the low end (320x240 at the extreme budget end), and
only the high-performance smartphones and tablets provide 1080p (1920x1080).
With lower resolution, accuracy compromises are possible which the Desktop
market (4K and soon to be above) would find unacceptable.

Meeting these disparate markets may be achieved, again, through
[[zfpacc_proposal]], by subdividing into four platforms and, in addition,
subdividing the extension into subsets that best suit the different
market areas.

**Software requirements**:

A "custom" extension is developed in near-complete isolation from the
rest of the RISC-V Community. Cost savings to the Corporation are
large, with no direct beneficial feedback to (or impact on) the rest
of the RISC-V ecosystem.

However, given that 3D revolves around Standards - DirectX, Vulkan, OpenGL,
OpenCL - users have much more influence than first appears. Compliance
with these standards is critical, as the userbase (games writers, scientific
applications) expects not to have to rewrite extremely large and costly
codebases to conform with *non-standards-compliant* hardware.

Therefore, compliance with public APIs (Vulkan, OpenCL, OpenGL, DirectX)
is paramount, and compliance with Trademarked Standards is critical. Any
deviation from Trademarked Standards means that an implementation may not
be sold whilst making a claim of being, for example, "Vulkan compatible".

This in turn makes public compliance with such standards a hard
requirement, over and above what would otherwise be set by a RISC-V
Standards Development Process, covering both the software compliance and
the knock-on implications that has for hardware.

**Collaboration**:

The case for collaboration on any Extension is already well-known.
In this particular case, the precedent for inclusion of Transcendentals
in other ISAs, both from Graphics and High-Performance Computing, has
these primitives well-established in high-profile software libraries and
compilers in both GPU and HPC Computer Science divisions. Collaboration
and shared public compliance with those standards brooks no argument.

The combined requirements of collaboration and multiple accuracy levels
mean that *overall this proposal is categorically and wholly unsuited
to relegation to "custom" status*.

# Quantitative Analysis <a name="analysis"></a>

This is extremely challenging. Normally, an Extension would require full,
comprehensive and detailed analysis of every single instruction, for every
single possible use-case, in every single market. The amount of silicon
area required would be balanced against the benefits of introducing extra
opcodes, as well as a full market analysis performed to see which divisions
of Computer Science benefit from the introduction of the instruction,
in each and every case.

With 34 instructions, four possible Platforms, and sub-categories of
implementations even within each Platform, over 136 separate and distinct
analyses would be required. That is not a practical proposition.

A little more intelligence has to be applied to the problem space,
to reduce it down to manageable levels.

Fortunately, the subdivision by Platform, in combination with the
identification of only two primary markets (Numerical Computation and
3D), means that the logical reasoning applies *uniformly* and broadly
across *groups* of instructions rather than individually, making it a
primarily hardware-centric and accuracy-centric decision-making process.

In addition, hardware algorithms such as CORDIC can cover such a wide
range of operations (simply by changing the input parameters) that the
usual argument for compromising and excluding certain opcodes (that they
would significantly increase the silicon area) is knocked down.

However CORDIC, whilst space-efficient, and thus well-suited to
Embedded, is an old iterative algorithm not well-suited to High-Performance
Computing or Mid to High-end GPUs, where commercially-competitive
FP32 pipeline lengths are only around 5 stages.

Not only that, but some operations such as LOG1P, which would normally
be excluded from one market (due to there being an alternative macro-op
fused sequence replacing it), are required for other markets due to
the higher accuracy obtainable at the lower range of input values when
compared to synthesising it as LOG(1+P).
(Thus we start to see why "proprietary" markets are excluded from this
proposal: "proprietary" markets would make *hardware*-driven
optimisation decisions that would be completely inappropriate for a
common standard.)

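The LOG1P accuracy argument above can be demonstrated in a few lines of
Python (a software illustration, not part of the proposal):

```python
import math

# For very small x, computing 1.0 + x loses almost all of x's
# significance, so log(1.0 + x) is inaccurate, whereas a dedicated
# log1p primitive stays accurate.
x = 1e-10

naive = math.log(1.0 + x)   # rounding error introduced by the addition
accurate = math.log1p(x)    # dedicated LOG1P primitive

# log1p(x) ~= x - x*x/2 for small x, so the accurate result sits within
# about 1e-20 of x, while the naive route picks up an absolute error of
# roughly 8e-18: a relative error near 1e-7, enormous by FP64 standards.
print(naive, accurate)
```

The same effect is what makes a hardware LOG1P worthwhile for the
Numerical Computation market even though 3D can live without it.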
ATAN and ATAN2 are another example area in which one market's needs
conflict directly with another's: the only viable solution, without
compromising one market to the detriment of the other, is to provide both
opcodes and let implementors make the call as to which (or both) to
optimise, at the *hardware* level.

Likewise, it is well-known that loops involving "0 to 2 times pi", often
done in subdivisions of powers of two, are costly to do because they
involve floating-point multiplication by PI in each and every loop.
3D GPUs solved this by providing SINPI variants which range from 0 to 1
and perform the multiply *inside* the hardware itself. In the case of
CORDIC, it turns out that the multiply by PI is not even needed (it is a
loop-invariant magic constant).
However, some markets may not wish to *use* CORDIC, for reasons mentioned
above, and, again, one market would be penalised if SINPI was prioritised
over SIN, or vice-versa.

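The accuracy side of the SINPI argument can be seen in software
(illustration only; `sinpi` here is a hypothetical model of the hardware
primitive, not a proposed API):

```python
import math

# math.pi is a rounded approximation of pi, so multiplying by it first
# leaves sin(pi * 1.0) with a residual of about 1.2e-16 instead of 0.
residual = math.sin(math.pi * 1.0)

# A SINPI-style primitive receives the un-multiplied argument, so it
# can reduce the range exactly. A trivial software model:
def sinpi(x):
    if x == int(x):          # sin(pi * n) is exactly zero for integer n
        return 0.0
    return math.sin(math.pi * x)

print(residual, sinpi(1.0), sinpi(0.5))
```

The hardware version performs the multiply by pi (or folds it into the
range reduction) internally, which is why loops over power-of-two
subdivisions of the circle prefer it.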
In essence, then, even when only the two primary markets (3D and Numerical
Computation) have been identified, this still leaves two (three)
diametrically-opposed *accuracy* sub-markets as the prime conflict drivers:

* Embedded Ultra Low Power
* IEEE754 compliance
* Khronos Vulkan compliance

Thus the best that can be done is to use Quantitative Analysis to work
out which "subsets" - sub-Extensions - to include, provide an additional
"accuracy" extension, be as "inclusive" as possible, and thus allow
implementors to decide what to add to their implementation, and how best
to optimise them.

This approach *only* works due to the uniformity of the function space,
and is **not** an appropriate methodology for use in other Extensions
with huge (non-uniform) market diversity, even with similarly large
numbers of potential opcodes. BitManip is the perfect counter-example.

# Proposed Opcodes vs Khronos OpenCL Opcodes <a name="khronos_equiv"></a>

This list shows the (direct) equivalence between proposed opcodes and
their Khronos OpenCL equivalents.

See
<https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>

* Special FP16 opcodes are *not* being proposed, except by indirect / inherent
  use of the "fmt" field that is already present in the RISC-V Specification.
* "Native" opcodes are *not* being proposed: implementors will be expected
  to use the (equivalent) proposed opcode covering the same function.
* "Fast" opcodes are *not* being proposed, because the Khronos Specification
  fast\_length, fast\_normalise and fast\_distance OpenCL opcodes require
  vectors (or can be done as scalar operations using other RISC-V
  instructions).

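As an example of that last point: fast\_normalise decomposes into scalar
operations, shown here as a Python sketch (`rsqrt` is a software stand-in
for the proposed FRSQRT, for illustration only):

```python
import math

def rsqrt(x):
    # software stand-in for the proposed FRSQRT opcode
    return 1.0 / math.sqrt(x)

# fast_normalise(v) as scalar ops: multiply-adds for the dot product,
# one reciprocal-square-root, then three scaling multiplies.
def normalise3(x, y, z):
    s = rsqrt(x * x + y * y + z * z)
    return (x * s, y * s, z * s)

nx, ny, nz = normalise3(3.0, 4.0, 0.0)
```

No vector opcode is required: each step maps to an existing or proposed
scalar RISC-V FP instruction.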
The OpenCL FP32 opcodes are **direct** equivalents to the proposed opcodes.
Deviation from conformance with the Khronos Specification - including the
Khronos Specification accuracy requirements - is not an option, as it
results in non-compliance, and the vendor may not use the Trademarked words
"Vulkan" etc. in conjunction with their product.

[[!table data="""
Proposed opcode | OpenCL FP32 | OpenCL FP16 | OpenCL native | OpenCL fast |
FSIN | sin | half\_sin | native\_sin | NONE |
FCOS | cos | half\_cos | native\_cos | NONE |
FTAN | tan | half\_tan | native\_tan | NONE |
NONE (1) | sincos | NONE | NONE | NONE |
FASIN | asin | NONE | NONE | NONE |
FACOS | acos | NONE | NONE | NONE |
FATAN | atan | NONE | NONE | NONE |
FSINPI | sinpi | NONE | NONE | NONE |
FCOSPI | cospi | NONE | NONE | NONE |
FTANPI | tanpi | NONE | NONE | NONE |
FASINPI | asinpi | NONE | NONE | NONE |
FACOSPI | acospi | NONE | NONE | NONE |
FATANPI | atanpi | NONE | NONE | NONE |
FSINH | sinh | NONE | NONE | NONE |
FCOSH | cosh | NONE | NONE | NONE |
FTANH | tanh | NONE | NONE | NONE |
FASINH | asinh | NONE | NONE | NONE |
FACOSH | acosh | NONE | NONE | NONE |
FATANH | atanh | NONE | NONE | NONE |
FRSQRT | rsqrt | half\_rsqrt | native\_rsqrt | NONE |
FCBRT | cbrt | NONE | NONE | NONE |
FEXP2 | exp2 | half\_exp2 | native\_exp2 | NONE |
FLOG2 | log2 | half\_log2 | native\_log2 | NONE |
FEXPM1 | expm1 | NONE | NONE | NONE |
FLOG1P | log1p | NONE | NONE | NONE |
FEXP | exp | half\_exp | native\_exp | NONE |
FLOG | log | half\_log | native\_log | NONE |
FEXP10 | exp10 | half\_exp10 | native\_exp10 | NONE |
FLOG10 | log10 | half\_log10 | native\_log10 | NONE |
FATAN2 | atan2 | NONE | NONE | NONE |
FATAN2PI | atan2pi | NONE | NONE | NONE |
FPOW | pow | NONE | NONE | NONE |
FROOT | rootn | NONE | NONE | NONE |
FHYPOT | hypot | NONE | NONE | NONE |
FRECIP | NONE | half\_recip | native\_recip | NONE |
"""]]

Note (1): FSINCOS is macro-op fused (see below).

# List of 2-arg opcodes

[[!table data="""
opcode | Description | pseudocode | Extension |
FATAN2 | atan2 arc tangent | rd = atan2(rs2, rs1) | Zarctrignpi |
FATAN2PI | atan2 arc tangent / pi | rd = atan2(rs2, rs1) / pi | Zarctrigpi |
FPOW | x power of y | rd = pow(rs1, rs2) | ZftransAdv |
FROOT | x power 1/y | rd = pow(rs1, 1/rs2) | ZftransAdv |
FHYPOT | hypotenuse | rd = sqrt(rs1^2 + rs2^2) | ZftransAdv |
"""]]

# List of 1-arg transcendental opcodes

[[!table data="""
opcode | Description | pseudocode | Extension |
FRSQRT | Reciprocal Square-root | rd = 1.0 / sqrt(rs1) | Zfrsqrt |
FCBRT | Cube Root | rd = pow(rs1, 1.0 / 3) | ZftransAdv |
FRECIP | Reciprocal | rd = 1.0 / rs1 | Zftrans |
FEXP2 | power-of-2 | rd = pow(2, rs1) | Zftrans |
FLOG2 | log2 | rd = log(2, rs1) | Zftrans |
FEXPM1 | exponential minus 1 | rd = pow(e, rs1) - 1.0 | ZftransExt |
FLOG1P | log plus 1 | rd = log(e, 1 + rs1) | ZftransExt |
FEXP | exponential | rd = pow(e, rs1) | ZftransExt |
FLOG | natural log (base e) | rd = log(e, rs1) | ZftransExt |
FEXP10 | power-of-10 | rd = pow(10, rs1) | ZftransExt |
FLOG10 | log base 10 | rd = log(10, rs1) | ZftransExt |
"""]]

# List of 1-arg trigonometric opcodes

[[!table data="""
opcode | Description | pseudocode | Extension |
FSIN | sin (radians) | rd = sin(rs1) | Ztrignpi |
FCOS | cos (radians) | rd = cos(rs1) | Ztrignpi |
FTAN | tan (radians) | rd = tan(rs1) | Ztrignpi |
FASIN | arcsin (radians) | rd = asin(rs1) | Zarctrignpi |
FACOS | arccos (radians) | rd = acos(rs1) | Zarctrignpi |
FATAN (1) | arctan (radians) | rd = atan(rs1) | Zarctrignpi |
FSINPI | sin times pi | rd = sin(pi * rs1) | Ztrigpi |
FCOSPI | cos times pi | rd = cos(pi * rs1) | Ztrigpi |
FTANPI | tan times pi | rd = tan(pi * rs1) | Ztrigpi |
FASINPI | arcsin / pi | rd = asin(rs1) / pi | Zarctrigpi |
FACOSPI | arccos / pi | rd = acos(rs1) / pi | Zarctrigpi |
FATANPI (1) | arctan / pi | rd = atan(rs1) / pi | Zarctrigpi |
FSINH | hyperbolic sin (radians) | rd = sinh(rs1) | Zfhyp |
FCOSH | hyperbolic cos (radians) | rd = cosh(rs1) | Zfhyp |
FTANH | hyperbolic tan (radians) | rd = tanh(rs1) | Zfhyp |
FASINH | inverse hyperbolic sin | rd = asinh(rs1) | Zfhyp |
FACOSH | inverse hyperbolic cos | rd = acosh(rs1) | Zfhyp |
FATANH | inverse hyperbolic tan | rd = atanh(rs1) | Zfhyp |
"""]]

Note (1): FATAN/FATANPI is a pseudo-op expanding to FATAN2/FATAN2PI
(needs deciding).


# Subsets

The full set is based on the Khronos OpenCL opcodes. If implemented
entirely, it would be too much for both Embedded and 3D.

The subsets are organised by hardware complexity and need (3D, HPC).
However, because synthesis produces inaccurate results at the range
limits, the less common subsets are still required for IEEE754 HPC.

MALI Midgard, an embedded / mobile 3D GPU, for example, only has the
following opcodes:

    E8 - fatan_pt2
    F0 - frcp (reciprocal)
    F2 - frsqrt (inverse square root, 1/sqrt(x))
    F3 - fsqrt (square root)
    F4 - fexp2 (2^x)
    F5 - flog2
    F6 - fsin
    F7 - fcos
    F9 - fatan_pt1

These are in FP32 and FP16 only: no FP64 hardware, at all.

Vivante Embedded/Mobile 3D (etnaviv
<https://github.com/laanwj/etna_viv/blob/master/rnndb/isa.xml>) only has
the following:

    sin, cos2pi
    cos, sin2pi
    log2, exp
    sqrt and rsqrt
    recip.

It also has fast variants of some of these, as a CSR Mode.

As a general point: customised, optimised hardware targeting FP32 3D with
reduced accuracy simply can be used neither for IEEE754 nor for FP64
(except as a starting point for hardware- or software-driven
Newton-Raphson or another iterative method).

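That Newton-Raphson refinement can be sketched in software (illustration
only; the function name and seed are arbitrary): each iteration roughly
doubles the number of correct bits, so a low-accuracy 3D estimate needs
several extra iterations, and wider arithmetic, to approach FP64 accuracy.

```python
import math

def rsqrt_refine(x, y):
    """One Newton-Raphson step for y ~= 1/sqrt(x)."""
    return y * (1.5 - 0.5 * x * y * y)

x = 2.0
y = 0.5                      # deliberately crude initial estimate
for _ in range(5):
    y = rsqrt_refine(x, y)   # quadratic convergence per step

# after refinement, y is close to 1/sqrt(2) ~= 0.7071067811865475
error = abs(y - 1.0 / math.sqrt(x))
```

This is why a reduced-accuracy FRSQRT is still useful to an IEEE754
implementation: it is a seed, not a final answer.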
Also, in cost/area-sensitive applications, even the extra ROM lookup
tables for certain algorithms may be too costly.

These wildly differing and incompatible driving factors lead to the
subset subdivisions, below.

## Zftrans

Zftrans contains the minimum standard transcendentals best suited to 3D:
log2, exp2, recip, rsqrt. They are also the minimum subset for
synthesising log10, exp10, expm1, log1p, the hyperbolic trigonometric
functions (sinh and so on).

## ZftransExt

LOG, EXP, EXP10, LOG10, LOG1P, EXPM1

These are extra transcendental functions that are useful, not generally
needed for 3D; however, for Numerical Computation they may be useful.

Although they can be synthesised using Zftrans (LOG2 multiplied by a
constant), there is both a performance penalty as well as an accuracy
penalty towards the limits, which for IEEE754 compliance is unacceptable.
In particular, a dedicated LOG1P in hardware may give much better accuracy
at the lower end (very small rs1) than synthesising it as LOG(1 + rs1).

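The synthesis route and its cost can be sketched in Python (illustration
only; `math.log2` stands in for a hardware FLOG2):

```python
import math

# Synthesising log10 from log2 costs an extra multiply by a rounded
# constant, so the result can drift by an extra ulp or so compared
# with a dedicated, correctly-rounded log10.
LOG10_2 = math.log10(2.0)            # rounded constant

def log10_synth(x):
    return math.log2(x) * LOG10_2    # FLOG2 plus one multiply

x = 7.3
err = abs(log10_synth(x) - math.log10(x))
```

The performance penalty (an extra multiply per call) and the extra
rounding step are tolerable for 3D but not for IEEE754 libm-quality
results, which is the argument for keeping these as real opcodes in
their own subset.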
Their forced inclusion would be inappropriate as it would penalise
embedded systems with tight power and area budgets. However, if they
were completely excluded, HPC applications would be penalised on
performance and accuracy.

Therefore they are their own subset extension.

## Ztrigpi vs Ztrignpi

* **Ztrigpi**: trig. xxx-pi: sinpi cospi tanpi
* **Ztrignpi**: trig. non-xxx-pi: sin cos tan

Ztrignpi contains the basic trigonometric functions through which all
others could be synthesised, and they are typically the base
trigonometrics provided by GPUs for 3D, warranting their own subset.

However, as can be correspondingly seen from other sections, there is an
accuracy penalty for doing so which will not be acceptable for IEEE754
compliance.

The Ztrigpi subset covers functions commonly used in for-loops with a
power-of-two number of subdivisions, where the cost of multiplying by PI
inside each loop (or cumulative addition, resulting in cumulative errors)
is not acceptable.

In, for example, CORDIC, the multiplication by PI may be moved outside
of the hardware algorithm as a loop invariant, with no power or area
penalty.

Thus again, the same general argument applies to give Ztrignpi and
Ztrigpi as subsets.

## Zarctrigpi and Zarctrignpi

* **Zarctrigpi**: arc-trig. a-xxx-pi: atan2pi asinpi acospi
* **Zarctrignpi**: arc-trig. non-a-xxx-pi: atan2, asin, acos

These are extra trigonometric functions that are useful in some
applications, but even for 3D GPUs, particularly embedded and mobile
class GPUs, they are not so common and so are synthesised there.

Although they can be synthesised using Ztrigpi and Ztrignpi, there is,
once again, both a performance penalty as well as an accuracy penalty
towards the limits, which for IEEE754 compliance is unacceptable, yet
is acceptable for 3D.

Therefore they are their own subset extension.

## Zfhyp

These are the hyperbolic/inverse-hyperbolic functions: sinh, cosh, tanh,
asinh, acosh, atanh. Their use in 3D is limited.

They can all be synthesised using LOG, SQRT and so on, so depend on
Zftrans. However, once again, at the limits of the range, IEEE754
compliance becomes impossible, and thus a hardware implementation may
be required.

HPC and high-end GPUs are likely markets for these.

## ZftransAdv

Cube-root, Power, Root: these are simply much more complex to implement
in hardware, and typically will only be put into HPC applications.

Root is included as well as Power because at the extreme ranges one is
more accurate than the other.

## Zfrsqrt

Reciprocal square-root is its own separate extension, as it is desirable
on its own by other implementors.

# Synthesis, Pseudo-code ops and macro-ops

Pseudo-ops such as FATAN are best left to the compiler rather than being
actual pseudo-ops: the compiler allocates one scalar FP register for use
as a constant (loop invariant), set to "1.0" at the beginning of a
function or other suitable code block.

* FSINCOS - fused macro-op between FSIN and FCOS (issued in that order).
* FSINCOSPI - fused macro-op between FSINPI and FCOSPI (issued in that
  order).

FATANPI example pseudo-code:

    lui t0, 0x3F800    // upper bits of FP32 1.0 (0x3F800000)
    fmv.w.x ft0, t0    // move into FP register
    fatan2pi.s rd, rs1, ft0

Hyperbolic function example (obviates need for Zfhyp except for
high-performance or correctly-rounding):

    ASINH( x ) = ln( x + SQRT(x**2+1))

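Evaluating that identity naively in floating-point shows exactly why a
hardware implementation may still be required at the limits of the range:
for large-magnitude negative x the addition cancels catastrophically
(Python illustration, not part of the proposal):

```python
import math

# ASINH(x) = ln(x + sqrt(x**2 + 1)): fine for moderate x, but for
# large negative x, sqrt(x*x + 1) ~= -x and the addition cancels,
# destroying accuracy (and eventually producing log(0)).
def asinh_synth(x):
    return math.log(x + math.sqrt(x * x + 1.0))

ok  = abs(asinh_synth(1.0) - math.asinh(1.0))    # tiny: synthesis works here
bad = abs(asinh_synth(-1e7) - math.asinh(-1e7))  # large: cancellation error
```

A correctly-rounding implementation must rearrange the formula (or use
dedicated hardware), which is the case for Zfhyp in HPC markets.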
# Reciprocal

FRECIP used to be an alias. Some implementors may wish to implement
divide as y times recip(x); others may have shared hardware for recip
and divide; others may have neither.

To avoid penalising one implementor over another, recip stays.

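The divide-versus-reciprocal-multiply trade-off is visible directly in
software (illustration only): the reciprocal route performs two rounded
operations instead of one, so the two are close but not always
bit-identical, and that last-ulp choice is exactly what is left to
implementors.

```python
# x / y versus x * (1/y): count the pairs of small integers where the
# two routes round differently (a classic case is 49 * (1.0/49.0),
# which is not exactly 1.0).
mismatches = 0
for xi in range(1, 100):
    for yi in range(1, 100):
        x, y = float(xi), float(yi)
        if x / y != x * (1.0 / y):
            mismatches += 1
```

Keeping FRECIP as a real opcode lets an implementation expose the fast
route without pretending it is a correctly-rounded divide.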
# To evaluate: should LOG be replaced with LOG1P (and EXP with EXPM1)?

The RISC principle says "exclude LOG because it's covered by LOG1P plus
an ADD". Research is needed to ensure that implementors are not
compromised by such a decision:
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002358.html>

> > correctly-rounded LOG will return different results than LOG1P and ADD.
> > Likewise for EXP and EXPM1

> ok, they stay in as real opcodes, then.

# ATAN / ATAN2 commentary

Discussion starts here:
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002470.html>

From Mitch Alsup:

I would like to point out that the general implementations of ATAN2 do a
bunch of special case checks and then simply call ATAN.

    double ATAN2( double y, double x )
    { // IEEE 754-2008 quality ATAN2

        // deal with NANs
        if( ISNAN( x ) ) return x;
        if( ISNAN( y ) ) return y;

        // deal with infinities
        if( x == +∞ && |y|== +∞ ) return copysign( π/4, y );
        if( x == +∞ ) return copysign( 0.0, y );
        if( x == -∞ && |y|== +∞ ) return copysign( 3π/4, y );
        if( x == -∞ ) return copysign( π, y );
        if( |y|== +∞ ) return copysign( π/2, y );

        // deal with signed zeros
        if( x == 0.0 && y != 0.0 ) return copysign( π/2, y );
        if( x >=+0.0 && y == 0.0 ) return copysign( 0.0, y );
        if( x <=-0.0 && y == 0.0 ) return copysign( π, y );

        // calculate ATAN2 textbook style
        if( x > 0.0 ) return ATAN( |y / x| );
        if( x < 0.0 ) return π - ATAN( |y / x| );
    }

Yet the proposed encoding makes ATAN2 the primitive and has ATAN invent
a constant and then call/use ATAN2.

When one considers an implementation of ATAN, one must consider several
ranges of evaluation:

    x [ -∞, -1.0]:: ATAN( x ) = -π/2 - ATAN( 1/x );
    x (-1.0, +1.0]:: ATAN( x ) = + ATAN( x );
    x [ 1.0, +∞]:: ATAN( x ) = +π/2 - ATAN( 1/x );

I should point out that the add/sub of π/2 cannot lose significance,
since the result of ATAN(1/x) is bounded 0..π/2.

The bottom line is that I think you are choosing to make too many of
these into OpCodes, making the hardware function/calculation unit (and
sequencer) more complicated than necessary.

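The range reduction above can be checked numerically: for |x| >= 1 the
two outer ranges collapse (watching the signs) to
atan(x) = copysign(pi/2, x) - atan(1/x), so a hardware ATAN core only
ever needs arguments in (-1, 1]. A Python check (illustration only):

```python
import math

# For |x| >= 1, reduce to a bounded-argument ATAN:
#   atan(x) = copysign(pi/2, x) - atan(1/x)
def atan_reduced(x):
    if abs(x) >= 1.0:
        return math.copysign(math.pi / 2.0, x) - math.atan(1.0 / x)
    return math.atan(x)

for v in (-1e6, -2.0, -1.0, 0.5, 3.0, 1e9):
    assert abs(atan_reduced(v) - math.atan(v)) < 1e-15
```

As noted, the pi/2 addition or subtraction cannot lose significance,
because atan(1/x) is bounded by pi/2.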
--------------------------------------------------------

We therefore, I think, have a case for bringing back ATAN and including
ATAN2.

The reason is that whilst a microcode-like GPU-centric platform would do
ATAN2 in terms of ATAN, a UNIX-centric platform would do it the other
way round.

(That is the hypothesis, to be evaluated for correctness. Feedback
requested.)

This is because we cannot compromise or prioritise one platform's
speed/accuracy over another. That is not reasonable or desirable: it
would penalise one implementor over another.

Thus, to keep interoperability, all implementors must have both opcodes,
and may choose, at the architectural and routing level, which one to
implement in terms of the other.

Allowing implementors to choose to add either opcode and let traps sort
it out leaves an uncertainty in the software developer's mind: they
cannot trust the hardware, available from many vendors, to be performant
right across the board.

Standards are a pig.

---

I might suggest that if there were a way for a calculation to be performed
and the result of that calculation chained to a subsequent calculation
such that the precision of the result-becomes-operand is wider than
what will fit in a register, then you can dramatically reduce the count
of instructions in this category while retaining acceptable accuracy:

    z = x / y

can be calculated as:

    z = x * (1/y)

Where 1/y has about 26-to-32 bits of fraction. No, it's not IEEE 754-2008
accurate, but GPUs want speed, and 1/y is fully pipelined (F32) while x/y
cannot be (at reasonable area). It is also not "that inaccurate",
displaying 0.625-to-0.52 ULP.

Given that one has the ability to carry (and process) more fraction bits,
one can then do high-precision multiplies of π or other transcendental
radixes.

And GPUs have been doing this almost since the dawn of 3D.

    // calculate ATAN2 high performance style
    // Note: at this point x != y
    //
    if( x > 0.0 )
    {
        if( y < 0.0 && |y| < |x| ) return - π/2 - ATAN( x / y );
        if( y < 0.0 && |y| > |x| ) return + ATAN( y / x );
        if( y > 0.0 && |y| < |x| ) return + ATAN( y / x );
        if( y > 0.0 && |y| > |x| ) return + π/2 - ATAN( x / y );
    }
    if( x < 0.0 )
    {
        if( y < 0.0 && |y| < |x| ) return + π/2 + ATAN( x / y );
        if( y < 0.0 && |y| > |x| ) return + π - ATAN( y / x );
        if( y > 0.0 && |y| < |x| ) return + π - ATAN( y / x );
        if( y > 0.0 && |y| > |x| ) return +3π/2 + ATAN( x / y );
    }

This way the adds and subtracts from the constant are not in a
precision-precarious position.