# Zftrans - transcendental operations

See:

* <http://bugs.libre-riscv.org/show_bug.cgi?id=127>
* <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
* Discussion: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002342.html>
* [[rv_major_opcode_1010011]] for opcode listing.
* [[zfpacc_proposal]] for accuracy settings proposal

Extension subsets:

* **Zftrans**: standard transcendentals (best suited to 3D)
* **ZftransExt**: extra functions (useful, but not generally needed for 3D;
  can be synthesised using Zftrans)
* **Ztrigpi**: trig. xxx-pi: sinpi, cospi, tanpi
* **Ztrignpi**: trig. non-xxx-pi: sin, cos, tan
* **Zarctrigpi**: arc-trig. a-xxx-pi: atan2pi, asinpi, acospi
* **Zarctrignpi**: arc-trig. non-a-xxx-pi: atan2, asin, acos
* **Zfhyp**: hyperbolic/inverse-hyperbolic: sinh, cosh, tanh, asinh,
  acosh, atanh (can be synthesised - see below)
* **ZftransAdv**: much more complex to implement in hardware
* **Zfrsqrt**: reciprocal square-root.

Minimum recommended requirements for 3D: Zftrans, Ztrigpi, Zarctrigpi,
Zarctrignpi

[[!toc levels=2]]

# TODO:

* Decision on accuracy, moved to [[zfpacc_proposal]]
  <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002355.html>
* Errors **MUST** be repeatable.
* How about four Platform Specifications? 3DUNIX, UNIX, 3DEmbedded and Embedded?
  <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002361.html>
  Accuracy requirements for dual (triple) purpose implementations must
  meet the higher standard.
* Reciprocal Square-root is in its own separate extension (Zfrsqrt) as
  it is desirable on its own by other implementors. This is to be evaluated.

# Requirements <a name="requirements"></a>

This proposal is designed to meet a wide range of extremely diverse needs,
allowing implementors from all of these fields to benefit from the tools
and hardware cost reductions associated with common standards adoption.

**There are *four* different, disparate platforms' needs (two new)**:

* 3D Embedded Platform
* Embedded Platform
* 3D UNIX Platform
* UNIX Platform

**The use-cases are**:

* 3D GPUs
* Numerical Computation
* (Potentially) A.I. / Machine-learning (1)

(1) Although approximations suffice in this field, making it more likely
to use a custom extension, high-end ML would inherently be excluded.

**The power and die-area requirements vary from**:

* Ultra-low-power (smartwatches where GPU power budgets are in milliwatts)
* Mobile-Embedded (good performance with high efficiency for battery life)
* Desktop Computing
* Server / HPC (2)

(2) Supercomputing is left out of the requirements as it is traditionally
covered by Supercomputer Vectorisation Standards (such as RVV).

**The software requirements are**:

* Full public integration into GNU math libraries (libm)
* Full public integration into well-known Numerical Computation systems (numpy)
* Full public integration into upstream GNU and LLVM Compiler toolchains
* Full public integration into Khronos OpenCL SPIR-V compatible Compilers
  seeking public Certification and Endorsement from the Khronos Group
  under their Trademarked Certification Programme.

**The "contra"-requirements are**:

* The requirements are **not** for the purposes of developing a full custom
  proprietary GPU with proprietary firmware.
* A full custom proprietary GPU ASIC Manufacturer *may* benefit from
  this proposal; however, the fact that they typically develop proprietary
  software that is not shared with the rest of the community likely to
  use this proposal means that they have completely different needs.
* This proposal is for *sharing* of effort in reducing development costs.

# Requirements Analysis <a name="requirements_analysis"></a>

**Platforms**:

3D Embedded will require significantly less accuracy and will need to make
power budget and die area compromises that other platforms (including Embedded)
will not need to make.

The 3D UNIX Platform has to be performance-price-competitive: subtly-reduced
accuracy in FP32 is acceptable there, whereas in the UNIX Platform
IEEE754 compliance is a hard requirement - one that would compromise the
power and efficiency of a 3D UNIX Platform.

Even in the Embedded platform, IEEE754 interoperability is beneficial,
whereas if it were a hard requirement the 3D Embedded platform would be
severely compromised in its ability to meet the demanding power budgets
of that market.

Thus, learning from the lessons of
[SIMD considered harmful](https://www.sigarch.org/simd-instructions-considered-harmful/)
this proposal works in conjunction with the [[zfpacc_proposal]], so as
not to overburden the OP32 ISA space with extra "reduced-accuracy" opcodes.

**Use-cases**:

There really is little else in the way of suitable markets. 3D GPUs
have extremely competitive power-efficiency and power-budget requirements
that are completely at odds with the market at the other end of
the spectrum: Numerical Computation.

Interoperability in Numerical Computation is absolutely critical: it implies
IEEE754 compliance. However full IEEE754 compliance automatically and
inherently penalises a GPU, where that degree of accuracy is simply not
necessary.

To meet the needs of both markets, the two new platforms have to be created,
and [[zfpacc_proposal]] is a critical dependency. Runtime selection of
FP accuracy allows an implementation to be "Hybrid" - to cover UNIX IEEE754
compliance *and* 3D performance in a single ASIC.

**Power and die-area requirements**:

This is where the conflicts really start to hit home.

A "Numerical High performance only" proposal (suitable for Server / HPC
only) would customise and target the Extension based on a quantitative
analysis of the value of certain opcodes *for HPC only*. It would
conclude, reasonably and rationally, that it is worthwhile adding opcodes
to RVV as parallel Vector operations, and that further discussion of
the matter is pointless.

A "Proprietary GPU effort" (even one that was intended for publication
of its API through, for example, a public libre-licensed Vulkan SPIR-V
Compiler) would conclude, reasonably and rationally, that, likewise, the
opcodes were best suited to be added to RVV, and, further, that their
requirements conflict with the HPC world, due to the reduced accuracy.
This on the basis that the silicon die area required for IEEE754 is far
greater than that needed for reduced-accuracy, and thus their product would
be completely unacceptable in the market.

An "Embedded 3D" GPU has radically different performance, power
and die-area requirements (and may even target SoftCores in FPGA).
Sharing of the silicon to cover multi-function uses (CORDIC for example)
is absolutely essential in order to keep cost and power down; high
performance simply is not. Multi-cycle FSMs instead of pipelines may
be considered acceptable, and so on. Subsets of functionality are
also essential.

An "Embedded Numerical" platform has requirements that are separate and
distinct from all of the above!

Mobile Computing needs (tablets, smartphones) again pull in a different
direction: high performance, reasonable accuracy, but efficiency is
critical. Screen sizes are not at the 4K range: they are within the
800x600 range at the low end (320x240 at the extreme budget end), and
only the high-performance smartphones and tablets provide 1080p (1920x1080).
With lower resolution, accuracy compromises are possible which the Desktop
market (4K and soon to be above) would find unacceptable.

Meeting these disparate markets may be achieved, again, through
[[zfpacc_proposal]], by subdividing into four platforms and, in addition
to that, by subdividing the extension into subsets that best suit the
different market areas.

**Software requirements**:

A "custom" extension is developed in near-complete isolation from the
rest of the RISC-V Community. Cost savings to the Corporation are
large, with no direct beneficial feedback to (or impact on) the rest
of the RISC-V ecosystem.

However, given that 3D revolves around Standards - DirectX, Vulkan, OpenGL,
OpenCL - users have much more influence than first appears. Compliance
with these standards is critical, as the userbase (games writers, scientific
applications) expects not to have to rewrite large codebases to conform
with non-standards-compliant hardware.

Therefore, compliance with public APIs is paramount, and compliance with
Trademarked Standards is critical. Any deviation from Trademarked Standards
means that an implementation may not be sold whilst claiming to be,
for example, "Vulkan compatible".

This in turn makes public compliance with such standards a hard requirement,
over and above what would otherwise be set by a RISC-V Standards
Development Process, covering both software compliance and the knock-on
implications that has for hardware.

**Collaboration**:

The case for collaboration on any Extension is already well-known.
In this particular case, the precedent for inclusion of Transcendentals
in other ISAs, from both Graphics and High-performance Computing, has
these primitives well-established in high-profile software libraries and
compilers in both GPU and HPC Computer Science divisions. Collaboration
and shared public compliance with those standards brooks no argument.

*Overall this proposal is categorically and wholly unsuited to
relegation to "custom" status*.

# Quantitative Analysis <a name="analysis"></a>

This is extremely challenging. Normally, an Extension would require full,
comprehensive and detailed analysis of every single instruction, for every
single possible use-case, in every single market. The amount of silicon
area required would be balanced against the benefits of introducing extra
opcodes, and a full market analysis performed to see which divisions
of Computer Science benefit from the introduction of the instruction,
in each and every case.

With 34 instructions, four possible Platforms, and sub-categories of
implementations even within each Platform, performing over 136 separate
and distinct analyses is not a practical proposition.

A little more intelligence has to be applied to the problem space,
to reduce it down to manageable levels.

Fortunately, the subdivision by Platform, in combination with the
identification of only two primary markets (Numerical Computation and
3D), means that the logical reasoning applies *uniformly* and broadly
across *groups* of instructions rather than individually.

In addition, hardware algorithms such as CORDIC can cover such a wide
range of operations (simply by changing the input parameters) that the
normal argument of compromising and excluding certain opcodes because they
would significantly increase the silicon area is knocked down.

However, CORDIC, whilst space-efficient, and thus well-suited to
Embedded, is an old iterative algorithm not well-suited to High-Performance
Computing or Mid to High-end GPUs, where commercially-competitive
FP32 pipeline lengths are only around 5 stages.

Not only that, but some operations such as LOG1P, which would normally
be excluded from one market (due to there being an alternative macro-op
fused sequence replacing it) are required for other markets due to
the higher accuracy obtainable at the lower range of input values when
compared to LOG(1+P).
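The accuracy difference is straightforward to demonstrate in double
precision (a quick sketch using Python's `math` library; the same effect
applies, rescaled, to FP32):

```python
import math

p = 1e-12  # small input, typical of incremental/compounding calculations

naive = math.log(1.0 + p)  # forming 1.0 + p discards most of p's low bits
fused = math.log1p(p)      # evaluates log(1 + p) without forming 1 + p

# for small p, log(1 + p) is approximately p, so the fused result is
# essentially exact, while the naive result carries the rounding error
# of the addition 1.0 + p
print(naive, fused)
```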

ATAN and ATAN2 are another example area in which one market's needs
conflict directly with another: the only viable solution, without compromising
one market to the detriment of the other, is to provide both opcodes
and let implementors make the call as to which (or both) to optimise.

Likewise it is well-known that loops involving "0 to 2 times pi", often
done in subdivisions of powers of two, are costly to do because they
involve floating-point multiplication by PI in each and every loop.
3D GPUs solved this by providing SINPI variants which range from 0 to 1
and perform the multiply *inside* the hardware itself. In the case of
CORDIC, it turns out that the multiply by PI is not even needed (it is a
loop-invariant magic constant).
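The transformation can be sketched in software (Python, with a
hypothetical `sinpi` helper standing in for the proposed FSINPI opcode):

```python
import math

def sinpi(x):
    # software stand-in for an FSINPI-style opcode: computes sin(pi * x).
    # hardware folds the multiply by pi into its internal constants
    # instead of performing a separate rounded multiply per call
    return math.sin(math.pi * x)

# a "0 to 2*pi" loop in power-of-two subdivisions: the loop body only
# scales the index by 2/N and never multiplies by pi itself
N = 8
samples = [sinpi(2.0 * i / N) for i in range(N)]
```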

However, some markets may not be able to *use* CORDIC, for reasons
mentioned above, and, again, one market would be penalised if SINPI
was prioritised over SIN, or vice-versa.

Thus the best that can be done is to use Quantitative Analysis to work
out which "subsets" - sub-Extensions - to include, and be as "inclusive"
as possible, and thus allow implementors to decide what to add to their
implementation, and how best to optimise them.

# Proposed Opcodes vs Khronos OpenCL Opcodes <a name="khronos_equiv"></a>

This list shows the (direct) equivalence between proposed opcodes and
their Khronos OpenCL equivalents.

See
<https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>

Special FP16 opcodes are *not* being proposed, except by indirect / inherent
use of the "fmt" field that is already present in the RISC-V Specification.

"Native" opcodes are *not* being proposed: implementors will be expected
to use the (equivalent) proposed opcode covering the same function.

"Fast" opcodes are *not* being proposed, because the Khronos Specification
fast\_length, fast\_normalise and fast\_distance OpenCL opcodes require
vectors (or can be done as scalar operations using other RISC-V instructions).

The OpenCL FP32 opcodes are **direct** equivalents to the proposed opcodes.
Deviation from conformance with the Khronos Specification - including the
Khronos Specification accuracy requirements - is not an option.

[[!table data="""
Proposed opcode | OpenCL FP32 | OpenCL FP16 | OpenCL native | OpenCL fast |
FSIN | sin | half\_sin | native\_sin | NONE |
FCOS | cos | half\_cos | native\_cos | NONE |
FTAN | tan | half\_tan | native\_tan | NONE |
NONE (1) | sincos | NONE | NONE | NONE |
FASIN | asin | NONE | NONE | NONE |
FACOS | acos | NONE | NONE | NONE |
FATAN | atan | NONE | NONE | NONE |
FSINPI | sinpi | NONE | NONE | NONE |
FCOSPI | cospi | NONE | NONE | NONE |
FTANPI | tanpi | NONE | NONE | NONE |
FASINPI | asinpi | NONE | NONE | NONE |
FACOSPI | acospi | NONE | NONE | NONE |
FATANPI | atanpi | NONE | NONE | NONE |
FSINH | sinh | NONE | NONE | NONE |
FCOSH | cosh | NONE | NONE | NONE |
FTANH | tanh | NONE | NONE | NONE |
FASINH | asinh | NONE | NONE | NONE |
FACOSH | acosh | NONE | NONE | NONE |
FATANH | atanh | NONE | NONE | NONE |
FRSQRT | rsqrt | half\_rsqrt | native\_rsqrt | NONE |
FCBRT | cbrt | NONE | NONE | NONE |
FEXP2 | exp2 | half\_exp2 | native\_exp2 | NONE |
FLOG2 | log2 | half\_log2 | native\_log2 | NONE |
FEXPM1 | expm1 | NONE | NONE | NONE |
FLOG1P | log1p | NONE | NONE | NONE |
FEXP | exp | half\_exp | native\_exp | NONE |
FLOG | log | half\_log | native\_log | NONE |
FEXP10 | exp10 | half\_exp10 | native\_exp10 | NONE |
FLOG10 | log10 | half\_log10 | native\_log10 | NONE |
FATAN2 | atan2 | NONE | NONE | NONE |
FATAN2PI | atan2pi | NONE | NONE | NONE |
FPOW | pow | NONE | NONE | NONE |
FROOT | rootn | NONE | NONE | NONE |
FHYPOT | hypot | NONE | NONE | NONE |
FRECIP | NONE | half\_recip | native\_recip | NONE |
"""]]

Note (1): FSINCOS is macro-op fused (see below).

# List of 2-arg opcodes

[[!table data="""
opcode | Description | pseudo-code | Extension |
FATAN2 | atan2 arc tangent | rd = atan2(rs2, rs1) | Zarctrignpi |
FATAN2PI | atan2 arc tangent / pi | rd = atan2(rs2, rs1) / pi | Zarctrigpi |
FPOW | x power of y | rd = pow(rs1, rs2) | ZftransAdv |
FROOT | x power 1/y | rd = pow(rs1, 1/rs2) | ZftransAdv |
FHYPOT | hypotenuse | rd = sqrt(rs1^2 + rs2^2) | Zftrans |
"""]]
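As an aside on why FHYPOT merits a dedicated opcode rather than a fused
multiply/sqrt sequence: the textbook formula overflows on large inputs
even when the true result is representable. A quick Python illustration
(with a hypothetical `naive_hypot` helper for the textbook form):

```python
import math

def naive_hypot(x, y):
    # the textbook formula from the table above: the intermediate x*x
    # overflows even though the true result is representable
    return math.sqrt(x * x + y * y)

big = 1e200
print(naive_hypot(big, big))  # inf: x*x overflowed
print(math.hypot(big, big))   # correct: roughly 1.414e200
```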

# List of 1-arg transcendental opcodes

[[!table data="""
opcode | Description | pseudo-code | Extension |
FRSQRT | Reciprocal Square-root | rd = 1.0 / sqrt(rs1) | Zfrsqrt |
FCBRT | Cube Root | rd = pow(rs1, 1.0 / 3) | Zftrans |
FRECIP | Reciprocal | rd = 1.0 / rs1 | Zftrans |
FEXP2 | power-of-2 | rd = pow(2, rs1) | Zftrans |
FLOG2 | log base 2 | rd = log(2, rs1) | Zftrans |
FEXPM1 | exponential minus 1 | rd = pow(e, rs1) - 1.0 | Zftrans |
FLOG1P | log plus 1 | rd = log(e, 1 + rs1) | Zftrans |
FEXP | exponential | rd = pow(e, rs1) | ZftransExt |
FLOG | natural log (base e) | rd = log(e, rs1) | ZftransExt |
FEXP10 | power-of-10 | rd = pow(10, rs1) | ZftransExt |
FLOG10 | log base 10 | rd = log(10, rs1) | ZftransExt |
"""]]

# List of 1-arg trigonometric opcodes

[[!table data="""
opcode | Description | pseudo-code | Extension |
FSIN | sin (radians) | rd = sin(rs1) | Ztrignpi |
FCOS | cos (radians) | rd = cos(rs1) | Ztrignpi |
FTAN | tan (radians) | rd = tan(rs1) | Ztrignpi |
FASIN | arcsin (radians) | rd = asin(rs1) | Zarctrignpi |
FACOS | arccos (radians) | rd = acos(rs1) | Zarctrignpi |
FATAN (1) | arctan (radians) | rd = atan(rs1) | Zarctrignpi |
FSINPI | sin of pi times x | rd = sin(pi * rs1) | Ztrigpi |
FCOSPI | cos of pi times x | rd = cos(pi * rs1) | Ztrigpi |
FTANPI | tan of pi times x | rd = tan(pi * rs1) | Ztrigpi |
FASINPI | arcsin divided by pi | rd = asin(rs1) / pi | Zarctrigpi |
FACOSPI | arccos divided by pi | rd = acos(rs1) / pi | Zarctrigpi |
FATANPI (1) | arctan divided by pi | rd = atan(rs1) / pi | Zarctrigpi |
FSINH | hyperbolic sin | rd = sinh(rs1) | Zfhyp |
FCOSH | hyperbolic cos | rd = cosh(rs1) | Zfhyp |
FTANH | hyperbolic tan | rd = tanh(rs1) | Zfhyp |
FASINH | inverse hyperbolic sin | rd = asinh(rs1) | Zfhyp |
FACOSH | inverse hyperbolic cos | rd = acosh(rs1) | Zfhyp |
FATANH | inverse hyperbolic tan | rd = atanh(rs1) | Zfhyp |
"""]]

Note (1): FATAN/FATANPI is a pseudo-op expanding to FATAN2/FATAN2PI (needs deciding)

# Synthesis, Pseudo-code ops and macro-ops

The pseudo-ops are best left up to the compiler rather than being actual
pseudo-ops, by allocating one scalar FP register for use as a constant
(loop invariant) set to "1.0" at the beginning of a function or other
suitable code block.

* FSINCOS - fused macro-op between FSIN and FCOS (issued in that order).
* FSINCOSPI - fused macro-op between FSINPI and FCOSPI (issued in that order).

FATANPI example pseudo-code:

    lui t0, 0x3F800     // upper bits of f32 1.0
    fmv.w.x ft0, t0     // move the integer bit-pattern into ft0
    fatan2pi.s rd, rs1, ft0
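A numerical model of this expansion (Python sketch; `fatan2pi` and
`fatanpi` are hypothetical software stand-ins for the proposed opcodes):

```python
import math

def fatan2pi(y, x):
    # software model of the proposed FATAN2PI opcode
    return math.atan2(y, x) / math.pi

def fatanpi(x):
    # the expansion above: FATANPI becomes FATAN2PI with a pre-loaded
    # constant 1.0 supplied as the second source operand
    return fatan2pi(x, 1.0)

# matches a direct atan(x)/pi to within rounding: atan(1) = pi/4
assert abs(fatanpi(1.0) - 0.25) < 1e-15
```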

Hyperbolic function example (obviates need for Zfhyp except for
high-performance or correctly-rounded implementations):

    ASINH( x ) = ln( x + SQRT(x**2+1))
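A sketch of this synthesis in Python, checked against the library
function (the accuracy caveats in the comment are why a dedicated Zfhyp
opcode can still be worthwhile):

```python
import math

def asinh_synth(x):
    # the identity above: ASINH(x) = ln(x + SQRT(x**2 + 1)).
    # loses accuracy for large x (x*x overflows) and for negative x
    # (cancellation in x + sqrt(...)), hence the exception noted in
    # the text for high-performance or correctly-rounded use
    return math.log(x + math.sqrt(x * x + 1.0))

for v in (0.25, 1.0, 10.0):
    assert abs(asinh_synth(v) - math.asinh(v)) < 1e-12
```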

# Reciprocal

FRECIP used to be an alias. Some implementors may wish to implement
divide as y times recip(x).

# To evaluate: should LOG be replaced with LOG1P (and EXP with EXPM1)?

The RISC principle says "exclude LOG because it's covered by LOG1P plus
an ADD". Research is needed to ensure that implementors are not
compromised by such a decision:
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002358.html>

> > correctly-rounded LOG will return different results than LOG1P and ADD.
> > Likewise for EXP and EXPM1

> ok, they stay in as real opcodes, then.

# ATAN / ATAN2 commentary

Discussion starts here:
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002470.html>

from Mitch Alsup:

would like to point out that the general implementations of ATAN2 do a
bunch of special case checks and then simply call ATAN.

    double ATAN2( double y, double x )
    {   // IEEE 754-2008 quality ATAN2

        // deal with NANs
        if( ISNAN( x ) ) return x;
        if( ISNAN( y ) ) return y;

        // deal with infinities
        if( x == +∞ && |y|== +∞ ) return copysign( π/4, y );
        if( x == +∞ ) return copysign( 0.0, y );
        if( x == -∞ && |y|== +∞ ) return copysign( 3π/4, y );
        if( x == -∞ ) return copysign( π, y );
        if( |y|== +∞ ) return copysign( π/2, y );

        // deal with signed zeros
        if( x == 0.0 && y != 0.0 ) return copysign( π/2, y );
        if( x >=+0.0 && y == 0.0 ) return copysign( 0.0, y );
        if( x <=-0.0 && y == 0.0 ) return copysign( π, y );

        // calculate ATAN2 textbook style
        if( x > 0.0 ) return ATAN( |y / x| );
        if( x < 0.0 ) return π - ATAN( |y / x| );
    }

Yet the proposed encoding makes ATAN2 the primitive and has ATAN invent
a constant and then call/use ATAN2.

When one considers an implementation of ATAN, one must consider several
ranges of evaluation::

    x [ -∞, -1.0]:: ATAN( x ) = -π/2 - ATAN( 1/x );
    x (-1.0, +1.0]:: ATAN( x ) = + ATAN( x );
    x [ 1.0, +∞]:: ATAN( x ) = +π/2 - ATAN( 1/x );

I should point out that the add/sub of π/2 can not lose significance
since the magnitude of ATAN(1/x) is bounded by 0..π/2

The bottom line is that I think you are choosing to make too many of
these into OpCodes, making the hardware function/calculation unit (and
sequencer) more complicated than necessary.
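The range-reduction identities quoted above can be spot-checked
numerically (Python sketch):

```python
import math

# spot-check of the ATAN range-reduction identities: the argument of
# the inner ATAN stays within (-1.0, 1.0], so the add/sub of pi/2
# cannot lose significance
for x in (-10.0, -2.0, 2.0, 10.0):
    if x >= 1.0:
        reduced = math.pi / 2 - math.atan(1.0 / x)
    else:  # x <= -1.0
        reduced = -math.pi / 2 - math.atan(1.0 / x)
    assert abs(reduced - math.atan(x)) < 1e-15
```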

--------------------------------------------------------

We therefore, I think, have a case for bringing back ATAN and including ATAN2.

The reason is that whilst a microcode-like GPU-centric platform would do
ATAN2 in terms of ATAN, a UNIX-centric platform would do it the other
way round.

(That is the hypothesis, to be evaluated for correctness. Feedback requested.)

This is because we cannot compromise or prioritise one platform's
speed/accuracy over another. It is not reasonable or desirable to penalise
one implementor over another.

Thus, all implementors, to keep interoperability, must have both opcodes,
and may choose, at the architectural and routing level, which one to
implement in terms of the other.

Allowing implementors to choose to add either opcode and let traps sort
it out leaves an uncertainty in the software developer's mind: they cannot
trust the hardware, available from many vendors, to be performant right
across the board.

Standards are a pig.

---

I might suggest that if there were a way for a calculation to be performed
and the result of that calculation chained to a subsequent calculation
such that the precision of the result-becomes-operand is wider than
what will fit in a register, then you can dramatically reduce the count
of instructions in this category while retaining acceptable accuracy:

    z = x / y

can be calculated as::

    z = x * (1/y)

where 1/y has about 26-to-32 bits of fraction. No, it's not IEEE 754-2008
accurate, but GPUs want speed, and 1/y is fully pipelined (F32) while x/y
cannot be (at reasonable area). It is also not "that inaccurate",
displaying 0.625-to-0.52 ULP.

Given that one has the ability to carry (and process) more fraction bits,
one can then do high precision multiplies of π or other transcendental
radixes.

And GPUs have been doing this almost since the dawn of 3D.
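The speed/accuracy trade can be modelled in Python by rounding through
binary32 with the `struct` module (a sketch; `f32` is a helper invented
here for illustration, not part of any proposal):

```python
import struct

def f32(x):
    # round a Python double to the nearest IEEE 754 binary32 value
    return struct.unpack('f', struct.pack('f', x))[0]

x, y = 355.0, 113.0
divided = f32(x / y)              # correctly-rounded FP32 divide
via_rcp = f32(x * f32(1.0 / y))   # fully-pipelined multiply by reciprocal

# the reciprocal path lands within roughly a ULP of the true quotient,
# which is the trade-off described above
assert abs(via_rcp - divided) < 1e-6
```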

    // calculate ATAN2 high performance style
    // Note: at this point x != y
    //
    if( x > 0.0 )
    {
        if( y < 0.0 && |y| < |x| ) return - π/2 - ATAN( x / y );
        if( y < 0.0 && |y| > |x| ) return + ATAN( y / x );
        if( y > 0.0 && |y| < |x| ) return + ATAN( y / x );
        if( y > 0.0 && |y| > |x| ) return + π/2 - ATAN( x / y );
    }
    if( x < 0.0 )
    {
        if( y < 0.0 && |y| < |x| ) return + π/2 + ATAN( x / y );
        if( y < 0.0 && |y| > |x| ) return + π - ATAN( y / x );
        if( y > 0.0 && |y| < |x| ) return + π - ATAN( y / x );
        if( y > 0.0 && |y| > |x| ) return +3π/2 + ATAN( x / y );
    }

This way the adds and subtracts from the constant are not in a
precision-precarious position.