# Zftrans - transcendental operations

With thanks to:

* Jacob Lifshay
* Dan Petroski
* Mitch Alsup
* Allen Baum
* Andrew Waterman
* Luis Vitorio Cargnini

See:

* <http://bugs.libre-riscv.org/show_bug.cgi?id=127>
* <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
* Discussion: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002342.html>
* [[rv_major_opcode_1010011]] for opcode listing.
* [[zfpacc_proposal]] for accuracy settings proposal

Extension subsets:

* **Zftrans**: standard transcendentals (best suited to 3D)
* **ZftransExt**: extra functions (useful, but not generally needed for 3D;
  can be synthesised using Zftrans)
* **Ztrigpi**: trigonometric "xxx-pi" ops: sinpi, cospi, tanpi
* **Ztrignpi**: trigonometric non-"xxx-pi" ops: sin, cos, tan
* **Zarctrigpi**: arc-trigonometric "a-xxx-pi" ops: atan2pi, asinpi, acospi
* **Zarctrignpi**: arc-trigonometric non-"a-xxx-pi" ops: atan2, asin, acos
* **Zfhyp**: hyperbolic/inverse-hyperbolic: sinh, cosh, tanh, asinh,
  acosh, atanh (can be synthesised - see below)
* **ZftransAdv**: much more complex to implement in hardware
* **Zfrsqrt**: reciprocal square-root

Minimum recommended requirements for 3D: Zftrans, Ztrigpi, Zarctrigpi,
Zarctrignpi

[[!toc levels=2]]

# TODO:

* Decision on accuracy, moved to [[zfpacc_proposal]]
  <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002355.html>
* Errors **MUST** be repeatable.
* How about four Platform Specifications? 3DUNIX, UNIX, 3DEmbedded and Embedded?
  <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002361.html>
  Accuracy requirements for dual (triple) purpose implementations must
  meet the higher standard.
* Reciprocal Square-root is in its own separate extension (Zfrsqrt) as
  it is desirable on its own by other implementors. This is to be evaluated.

# Requirements <a name="requirements"></a>

This proposal is designed to meet a wide range of extremely diverse needs,
allowing implementors from all of them to benefit from the tools and hardware
cost reductions associated with common standards adoption.

**There are *four* different, disparate platforms' needs (two of them new)**:

* 3D Embedded Platform
* Embedded Platform
* 3D UNIX Platform
* UNIX Platform

**The use-cases are**:

* 3D GPUs
* Numerical Computation
* (Potentially) A.I. / Machine-learning (1)

(1) Although approximations suffice in this field, that makes it more likely
to use a custom extension; high-end ML would inherently be excluded.

**The power and die-area requirements vary from**:

* Ultra-low-power (smartwatches where GPU power budgets are in milliwatts)
* Mobile-Embedded (good performance with high efficiency for battery life)
* Desktop Computing
* Server / HPC (2)

(2) Supercomputing is left out of the requirements as it is traditionally
covered by Supercomputer Vectorisation Standards (such as RVV).

**The software requirements are**:

* Full public integration into GNU math libraries (libm)
* Full public integration into well-known Numerical Computation systems (numpy)
* Full public integration into upstream GNU and LLVM Compiler toolchains
* Full public integration into Khronos OpenCL SPIR-V compatible Compilers
  seeking public Certification and Endorsement from the Khronos Group
  under their Trademarked Certification Programme.

**The "contra"-requirements are**:

* The requirements are **not** for the purposes of developing a full custom
  proprietary GPU with proprietary firmware.
* A full custom proprietary GPU ASIC Manufacturer *may* benefit from this
  proposal; however, such manufacturers typically develop proprietary
  software that is not shared with the rest of the community likely to
  use this proposal, and therefore have completely different needs.
* This proposal is for *sharing* of effort in reducing development costs.

# Requirements Analysis <a name="requirements_analysis"></a>

**Platforms**:

3D Embedded will require significantly less accuracy and will need to make
power budget and die area compromises that other platforms (including Embedded)
will not need to make.

The 3D UNIX Platform has to be performance-price-competitive: subtly-reduced
accuracy in FP32 is acceptable there, whereas on the UNIX Platform,
IEEE754 compliance is a hard requirement; imposing that requirement would
compromise the power and efficiency of a 3D UNIX Platform.

Even on the Embedded platform, IEEE754 interoperability is beneficial;
were it a hard requirement, however, the 3D Embedded platform would be severely
compromised in its ability to meet the demanding power budgets of that market.

Thus, learning from the lessons of
[SIMD considered harmful](https://www.sigarch.org/simd-instructions-considered-harmful/)
this proposal works in conjunction with the [[zfpacc_proposal]], so as
not to overburden the OP32 ISA space with extra "reduced-accuracy" opcodes.

**Use-cases**:

There really is little else in the way of suitable markets. 3D GPUs
have extremely competitive power-efficiency and power-budget requirements
that are completely at odds with the market at the other end of
the spectrum: Numerical Computation.

Interoperability in Numerical Computation is absolutely critical: it implies
IEEE754 compliance. However, full IEEE754 compliance automatically and
inherently penalises a GPU, where that degree of accuracy is simply
not necessary.

To meet the needs of both markets, the two new platforms have to be created,
and [[zfpacc_proposal]] is a critical dependency. Runtime selection of
FP accuracy allows an implementation to be "Hybrid" - to cover UNIX IEEE754
compliance *and* 3D performance in a single ASIC.

**Power and die-area requirements**:

This is where the conflicts really start to hit home.

A "Numerical High-performance only" proposal (suitable for Server / HPC
only) would customise and target the Extension based on a quantitative
analysis of the value of certain opcodes *for HPC only*. It would
conclude, reasonably and rationally, that it is worthwhile adding opcodes
to RVV as parallel Vector operations, and that further discussion of
the matter is pointless.

A "Proprietary GPU effort" (even one that was intended for publication
of its API through, for example, a public libre-licensed Vulkan SPIR-V
Compiler) would conclude, reasonably and rationally, that, likewise, the
opcodes were best suited to be added to RVV, and, further, that their
requirements conflict with the HPC world due to the reduced accuracy.
This on the basis that the silicon die area required for IEEE754 is far
greater than that needed for reduced accuracy, and thus their product would
be completely unacceptable in the market.

An "Embedded 3D" GPU has radically different performance, power
and die-area requirements (and may even target SoftCores in FPGA).
Sharing of silicon to cover multi-function uses (CORDIC, for example)
is absolutely essential in order to keep cost and power down; high
performance simply is not. Multi-cycle FSMs instead of pipelines may
be considered acceptable, and so on. Subsets of functionality are
also essential.

An "Embedded Numerical" platform has requirements that are separate and
distinct from all of the above!

Mobile Computing needs (tablets, smartphones) again pull in a different
direction: high performance and reasonable accuracy, but efficiency is
critical. Screen sizes are not in the 4K range: they are within the
800x600 range at the low end (320x240 at the extreme budget end), and
only the high-performance smartphones and tablets provide 1080p (1920x1080).
With lower resolution, accuracy compromises are possible which the Desktop
market (4K and soon to be above) would find unacceptable.

Meeting these disparate markets may be achieved, again, through
[[zfpacc_proposal]], by subdividing into four platforms, and, in addition,
by subdividing the extension into subsets that best suit the different
market areas.

**Software requirements**:

A "custom" extension is developed in near-complete isolation from the
rest of the RISC-V Community. Cost savings to the Corporation are
large, with no direct beneficial feedback to (or impact on) the rest
of the RISC-V ecosystem.

However, given that 3D revolves around Standards - DirectX, Vulkan, OpenGL,
OpenCL - users have much more influence than first appears. Compliance
with these standards is critical, as the userbase (games writers, scientific
applications) expects not to have to rewrite large codebases to conform
to non-standards-compliant hardware.

Therefore, compliance with public APIs is paramount, and compliance with
Trademarked Standards is critical. An implementation that deviates from a
Trademarked Standard may not be sold while claiming to be,
for example, "Vulkan compatible".

This in turn reinforces, and makes a hard requirement, the need for public
compliance with such standards, over and above what would otherwise be
set by a RISC-V Standards Development Process, covering both the
software compliance and the knock-on implications that has for hardware.

**Collaboration**:

The case for collaboration on any Extension is already well-known.
In this particular case, the precedent for inclusion of Transcendentals
in other ISAs, from both Graphics and High-performance Computing, has
these primitives well-established in high-profile software libraries and
compilers in both GPU and HPC Computer Science divisions. Collaboration
and shared public compliance with those standards brooks no argument.

*Overall this proposal is categorically and wholly unsuited to
relegation to "custom" status*.

# Quantitative Analysis <a name="analysis"></a>

This is extremely challenging. Normally, an Extension would require full,
comprehensive and detailed analysis of every single instruction, for every
single possible use-case, in every single market. The amount of silicon
area required would be balanced against the benefits of introducing extra
opcodes, and a full market analysis would be performed to see which divisions
of Computer Science benefit from the introduction of the instruction,
in each and every case.

With 34 instructions, four possible Platforms, and sub-categories of
implementations even within each Platform, over 136 separate and distinct
analyses would be required: not a practical proposition.

A little more intelligence has to be applied to the problem space,
to reduce it down to manageable levels.

Fortunately, the subdivision by Platform, in combination with the
identification of only two primary markets (Numerical Computation and
3D), means that the logical reasoning applies *uniformly* and broadly
across *groups* of instructions rather than individually.

In addition, hardware algorithms such as CORDIC can cover such a wide
range of operations (simply by changing the input parameters) that the
normal argument - compromising and excluding certain opcodes because they
would significantly increase the silicon area - is knocked down.

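By way of illustration, a minimal CORDIC sketch in C (circular rotation
mode; double precision and libm's `atan` stand in for the small hardware
lookup table - this is purely illustrative, not part of the proposal).
The identical shift-add datapath, re-parameterised, also covers atan
(vectoring mode) and sinh/cosh (hyperbolic mode):

    #include <math.h>

    #define ITERS 32

    // Circular rotation-mode CORDIC: one shift-add loop yields both
    // sin(theta) and cos(theta) simultaneously.
    static void cordic_sincos(double theta, double *s, double *c)
    {
        // K = product of 1/sqrt(1 + 2^-2i): precomputed scale constant
        double K = 0.6072529350088812561694;
        double x = K, y = 0.0, z = theta;  // |theta| <= pi/2 after reduction
        double p2i = 1.0;                  // 2^-i
        for (int i = 0; i < ITERS; i++) {
            double d = (z >= 0.0) ? 1.0 : -1.0;
            double nx = x - d * y * p2i;   // in hardware: x - d*(y >> i)
            double ny = y + d * x * p2i;   // in hardware: y + d*(x >> i)
            z -= d * atan(p2i);            // atan(2^-i): small lookup table
            x = nx; y = ny;
            p2i *= 0.5;
        }
        *c = x;  // cos(theta)
        *s = y;  // sin(theta)
    }
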
However, CORDIC, whilst space-efficient and thus well-suited to
Embedded, is an old iterative algorithm not well-suited to High-Performance
Computing or Mid to High-end GPUs, where commercially-competitive
FP32 pipeline lengths are only around 5 stages.

Not only that, but some operations such as LOG1P, which would normally
be excluded from one market (due to there being an alternative macro-op
fused sequence replacing it), are required for other markets due to
the higher accuracy obtainable at the lower range of input values when
compared to LOG(1+P).

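A one-line demonstration of that accuracy point, in C: for small `p`,
computing `1.0 + p` first rounds away most (or all) of `p` before the
logarithm is even taken, whereas `log1p` retains it:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double p = 1e-18;
        // 1.0 + p rounds to exactly 1.0 in FP64, losing p entirely
        printf("log(1+p) = %.18g\n", log(1.0 + p));  // prints 0
        // computed directly: accurate to full precision (~p here)
        printf("log1p(p) = %.18g\n", log1p(p));      // prints ~1e-18
        return 0;
    }
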
(Thus we start to see why "proprietary" markets are excluded from this
proposal: "proprietary" markets would make *hardware*-driven
optimisation decisions that would be completely inappropriate for a
common standard.)

ATAN and ATAN2 are another example area in which one market's needs
conflict directly with another's: the only viable solution, without
compromising one market to the detriment of the other, is to provide both
opcodes and let implementors make the call as to which (or both) to optimise,
at the *hardware* level.

Likewise, it is well-known that loops over "0 to 2 times pi", often
done in subdivisions of powers of two, are costly because they
involve a floating-point multiplication by PI in each and every iteration.
3D GPUs solved this by providing SINPI variants whose input ranges from 0 to 1
and which perform the multiply *inside* the hardware itself. In the case of
CORDIC, it turns out that the multiply by PI is not even needed (it is a
loop-invariant magic constant).

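As a sketch of the point in C (`fsinpi` is an illustrative stand-in for
the proposed FSINPI opcode, emulated here via libm):

    #include <math.h>

    // fsinpi(x) = sin(pi * x): stands in for the proposed FSINPI opcode.
    static double fsinpi(double x) { return sin(M_PI * x); }

    // Without FSINPI: a floating-point multiply by pi in every iteration.
    void table_sin(double *out, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = sin(2.0 * M_PI * i / n);
    }

    // With FSINPI: the loop index stays in units of half-turns, and the
    // scaling by pi happens inside the hardware operation itself.
    void table_sinpi(double *out, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = fsinpi(2.0 * i / n);
    }
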
However, some markets may not wish to *use* CORDIC, for the reasons mentioned
above, and, again, one market would be penalised if SINPI were prioritised
over SIN, or vice-versa.

Thus the best that can be done is to use Quantitative Analysis to work
out which "subsets" - sub-Extensions - to include, to be as "inclusive"
as possible, and thus allow implementors to decide what to add to their
implementation, and how best to optimise it.

This approach *only* works due to the uniformity of the function space,
and is **not** an appropriate methodology for use in other Extensions
with diverse markets and large numbers of potential opcodes.
BitManip is the perfect counter-example.

# Proposed Opcodes vs Khronos OpenCL Opcodes <a name="khronos_equiv"></a>

This list shows the (direct) equivalence between proposed opcodes and
their Khronos OpenCL equivalents.

See
<https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>

Special FP16 opcodes are *not* being proposed, except by indirect / inherent
use of the "fmt" field that is already present in the RISC-V Specification.

"Native" opcodes are *not* being proposed: implementors will be expected
to use the (equivalent) proposed opcode covering the same function.

"Fast" opcodes are *not* being proposed, because the Khronos Specification
fast\_length, fast\_normalize and fast\_distance OpenCL opcodes require
vectors (or can be done as scalar operations using other RISC-V instructions).

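For example, OpenCL `fast_length` on a 3-vector decomposes into scalar
operations that map directly onto existing RISC-V FP instructions (a
sketch, not proposed semantics):

    #include <math.h>

    // fast_length(float3 p) as scalar ops: the multiplies and adds map
    // onto FMUL.S / FMADD.S, and the square root onto FSQRT.S.
    static float fast_length3(float x, float y, float z)
    {
        return sqrtf(x * x + y * y + z * z);
    }
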
The OpenCL FP32 opcodes are **direct** equivalents to the proposed opcodes.
Deviation from conformance with the Khronos Specification - including the
Khronos Specification accuracy requirements - is not an option.

[[!table data="""
Proposed opcode | OpenCL FP32 | OpenCL FP16 | OpenCL native | OpenCL fast |
FSIN | sin | half\_sin | native\_sin | NONE |
FCOS | cos | half\_cos | native\_cos | NONE |
FTAN | tan | half\_tan | native\_tan | NONE |
NONE (1) | sincos | NONE | NONE | NONE |
FASIN | asin | NONE | NONE | NONE |
FACOS | acos | NONE | NONE | NONE |
FATAN | atan | NONE | NONE | NONE |
FSINPI | sinpi | NONE | NONE | NONE |
FCOSPI | cospi | NONE | NONE | NONE |
FTANPI | tanpi | NONE | NONE | NONE |
FASINPI | asinpi | NONE | NONE | NONE |
FACOSPI | acospi | NONE | NONE | NONE |
FATANPI | atanpi | NONE | NONE | NONE |
FSINH | sinh | NONE | NONE | NONE |
FCOSH | cosh | NONE | NONE | NONE |
FTANH | tanh | NONE | NONE | NONE |
FASINH | asinh | NONE | NONE | NONE |
FACOSH | acosh | NONE | NONE | NONE |
FATANH | atanh | NONE | NONE | NONE |
FRSQRT | rsqrt | half\_rsqrt | native\_rsqrt | NONE |
FCBRT | cbrt | NONE | NONE | NONE |
FEXP2 | exp2 | half\_exp2 | native\_exp2 | NONE |
FLOG2 | log2 | half\_log2 | native\_log2 | NONE |
FEXPM1 | expm1 | NONE | NONE | NONE |
FLOG1P | log1p | NONE | NONE | NONE |
FEXP | exp | half\_exp | native\_exp | NONE |
FLOG | log | half\_log | native\_log | NONE |
FEXP10 | exp10 | half\_exp10 | native\_exp10 | NONE |
FLOG10 | log10 | half\_log10 | native\_log10 | NONE |
FATAN2 | atan2 | NONE | NONE | NONE |
FATAN2PI | atan2pi | NONE | NONE | NONE |
FPOW | pow | NONE | NONE | NONE |
FROOT | rootn | NONE | NONE | NONE |
FHYPOT | hypot | NONE | NONE | NONE |
FRECIP | NONE | half\_recip | native\_recip | NONE |
"""]]

Note (1): FSINCOS is macro-op fused (see below).

# List of 2-arg opcodes

[[!table data="""
opcode | Description | pseudo-code | Extension |
FATAN2 | atan2 arc tangent | rd = atan2(rs2, rs1) | Zarctrignpi |
FATAN2PI | atan2 arc tangent / pi | rd = atan2(rs2, rs1) / pi | Zarctrigpi |
FPOW | x power of y | rd = pow(rs1, rs2) | ZftransAdv |
FROOT | x power 1/y | rd = pow(rs1, 1/rs2) | ZftransAdv |
FHYPOT | hypotenuse | rd = sqrt(rs1^2 + rs2^2) | Zftrans |
"""]]

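Regarding FHYPOT's pseudo-code: evaluated naively in FP32, the intermediate
squares overflow long before the true result does, which is one reason a
dedicated opcode (or a careful libm routine) is valuable. A sketch of the
issue and of a conventional rescaling fix (illustrative only):

    #include <math.h>

    // Naive FHYPOT semantics from the table: overflows when rs1*rs1
    // exceeds FLT_MAX even though the true hypotenuse is representable.
    static float hypot_naive(float a, float b)
    {
        return sqrtf(a * a + b * b);    // a = 1e30f already overflows here
    }

    // Overflow-safe rescaling, as a libm or hardware implementation might do.
    static float hypot_scaled(float a, float b)
    {
        float m = fmaxf(fabsf(a), fabsf(b));
        if (m == 0.0f) return 0.0f;
        float p = fminf(fabsf(a), fabsf(b)) / m;
        return m * sqrtf(1.0f + p * p);
    }
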
364
365 # List of 1-arg transcendental opcodes
366
367 [[!table data="""
368 opcode | Description | pseudo-code | Extension |
369 FRSQRT | Reciprocal Square-root | rd = sqrt(rs1) | Zfrsqrt |
370 FCBRT | Cube Root | rd = pow(rs1, 1.0 / 3) | Zftrans |
371 FRECIP | Reciprocal | rd = 1.0 / rs1 | Zftrans |
372 FEXP2 | power-of-2 | rd = pow(2, rs1) | Zftrans |
373 FLOG2 | log2 | rd = log(2. rs1) | Zftrans |
374 FEXPM1 | exponential minus 1 | rd = pow(e, rs1) - 1.0 | Zftrans |
375 FLOG1P | log plus 1 | rd = log(e, 1 + rs1) | Zftrans |
376 FEXP | exponential | rd = pow(e, rs1) | ZftransExt |
377 FLOG | natural log (base e) | rd = log(e, rs1) | ZftransExt |
378 FEXP10 | power-of-10 | rd = pow(10, rs1) | ZftransExt |
379 FLOG10 | log base 10 | rd = log(10, rs1) | ZftransExt |
380 """]]
381
# List of 1-arg trigonometric opcodes

[[!table data="""
opcode | Description | pseudo-code | Extension |
FSIN | sin (radians) | rd = sin(rs1) | Ztrignpi |
FCOS | cos (radians) | rd = cos(rs1) | Ztrignpi |
FTAN | tan (radians) | rd = tan(rs1) | Ztrignpi |
FASIN | arcsin (radians) | rd = asin(rs1) | Zarctrignpi |
FACOS | arccos (radians) | rd = acos(rs1) | Zarctrignpi |
FATAN (1) | arctan (radians) | rd = atan(rs1) | Zarctrignpi |
FSINPI | sin of pi times x | rd = sin(pi * rs1) | Ztrigpi |
FCOSPI | cos of pi times x | rd = cos(pi * rs1) | Ztrigpi |
FTANPI | tan of pi times x | rd = tan(pi * rs1) | Ztrigpi |
FASINPI | arcsin / pi | rd = asin(rs1) / pi | Zarctrigpi |
FACOSPI | arccos / pi | rd = acos(rs1) / pi | Zarctrigpi |
FATANPI (1) | arctan / pi | rd = atan(rs1) / pi | Zarctrigpi |
FSINH | hyperbolic sin | rd = sinh(rs1) | Zfhyp |
FCOSH | hyperbolic cos | rd = cosh(rs1) | Zfhyp |
FTANH | hyperbolic tan | rd = tanh(rs1) | Zfhyp |
FASINH | inverse hyperbolic sin | rd = asinh(rs1) | Zfhyp |
FACOSH | inverse hyperbolic cos | rd = acosh(rs1) | Zfhyp |
FATANH | inverse hyperbolic tan | rd = atanh(rs1) | Zfhyp |
"""]]

Note (1): FATAN/FATANPI is a pseudo-op expanding to FATAN2/FATAN2PI (needs deciding).

# Synthesis, Pseudo-code ops and macro-ops

The pseudo-ops are best left up to the compiler rather than being actual
pseudo-ops: the compiler allocates one scalar FP register for use as a
constant (loop-invariant), set to "1.0" at the beginning of a function or
other suitable code block.

* FSINCOS - fused macro-op between FSIN and FCOS (issued in that order;
  see the sketch below).
* FSINCOSPI - fused macro-op between FSINPI and FCOSPI (issued in that order).

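A sketch of the FSINCOS fused sequence (the mnemonics are illustrative
assumptions following the existing RISC-V FP naming convention):

    fsin.s ft0, fa0 // sin(fa0)
    fcos.s ft1, fa0 // cos(fa0): same source operand, issued immediately
                    // afterwards, so hardware may macro-op fuse the pair
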
FATANPI example pseudo-code:

    lui t0, 0x3F800 // upper bits of f32 1.0
    fmv.w.x ft0, t0 // bit-copy into the FP register
    fatan2pi.s rd, rs1, ft0

Hyperbolic function example (this obviates the need for Zfhyp except for
high-performance or correctly-rounding implementations):

    ASINH( x ) = ln( x + SQRT(x**2+1))

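A sketch of that synthesis in C, with libm calls standing in for the
corresponding Zftrans opcodes (the `log1p`-based variant is the one that
benefits from FLOG1P near zero):

    #include <math.h>

    // asinh synthesised from Zftrans-level primitives (log, sqrt):
    static double asinh_synth(double x)
    {
        return log(x + sqrt(x * x + 1.0));
    }

    // Better accuracy for small |x|, using FLOG1P instead of FLOG;
    // uses sqrt(x*x+1) - 1 == x*x / (1 + sqrt(x*x+1)).
    static double asinh_log1p(double x)
    {
        return log1p(x + x * x / (1.0 + sqrt(x * x + 1.0)));
    }
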
# Reciprocal

FRECIP used to be an alias. Some implementors may wish to implement divide
as y times recip(x), as sketched below.

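A sketch of that trade-off in C (`frecip_s` is an illustrative stand-in
for the proposed FRECIP opcode):

    // Division as multiply-by-reciprocal: one fully-pipelined FRECIP
    // plus an FMUL, trading sub-ULP accuracy for throughput.
    static inline float frecip_s(float x) { return 1.0f / x; }

    static float div_via_recip(float y, float x)
    {
        return y * frecip_s(x);   // not IEEE754 correctly-rounded division
    }
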
# To evaluate: should LOG be replaced with LOG1P (and EXP with EXPM1)?

The RISC principle says "exclude LOG because it's covered by LOG1P plus an ADD".
Research is needed to ensure that implementors are not compromised by such
a decision:
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002358.html>

> > correctly-rounded LOG will return different results than LOG1P and ADD.
> > Likewise for EXP and EXPM1

> ok, they stay in as real opcodes, then.

# ATAN / ATAN2 commentary

Discussion starts here:
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002470.html>

From Mitch Alsup:

I would like to point out that the general implementations of ATAN2 do a
bunch of special-case checks and then simply call ATAN.

    double ATAN2( double y, double x )
    { // IEEE 754-2008 quality ATAN2

        // deal with NANs
        if( ISNAN( x ) ) return x;
        if( ISNAN( y ) ) return y;

        // deal with infinities
        if( x == +∞ && |y| == +∞ ) return copysign( π/4, y );
        if( x == +∞ ) return copysign( 0.0, y );
        if( x == -∞ && |y| == +∞ ) return copysign( 3π/4, y );
        if( x == -∞ ) return copysign( π, y );
        if( |y| == +∞ ) return copysign( π/2, y );

        // deal with signed zeros
        if( x == 0.0 && y != 0.0 ) return copysign( π/2, y );
        if( x >= +0.0 && y == 0.0 ) return copysign( 0.0, y );
        if( x <= -0.0 && y == 0.0 ) return copysign( π, y );

        // calculate ATAN2 textbook style
        if( x > 0.0 ) return copysign( ATAN( |y / x| ), y );
        if( x < 0.0 ) return copysign( π - ATAN( |y / x| ), y );
    }

Yet the proposed encoding makes ATAN2 the primitive and has ATAN invent
a constant and then call/use ATAN2.

When one considers an implementation of ATAN, one must consider several
ranges of evaluation::

    x [ -∞, -1.0]:: ATAN( x ) = -π/2 - ATAN( 1/x );
    x (-1.0, +1.0]:: ATAN( x ) = + ATAN( x );
    x [ 1.0, +∞]:: ATAN( x ) = +π/2 - ATAN( 1/x );

I should point out that the add/sub of π/2 cannot lose significance,
since the result of ATAN(1/x) is bounded by 0..π/2.

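A sketch of that range reduction in C, with `atan_core` standing in for the
primitive evaluation over the reduced range (a polynomial or CORDIC stage
in hardware; libm is used here purely for illustration):

    #include <math.h>

    // atan_core: the primitive evaluation over |x| <= 1
    // (stubbed with libm here, purely for illustration).
    static double atan_core(double x) { return atan(x); }

    static double atan_reduced(double x)
    {
        if (x > 1.0)
            return  M_PI_2 - atan_core(1.0 / x);   // x in (1, +inf]
        if (x < -1.0)
            return -M_PI_2 - atan_core(1.0 / x);   // x in [-inf, -1)
        return atan_core(x);                       // x in [-1, +1]
    }
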
The bottom line is that I think you are choosing to make too many of
these into OpCodes, making the hardware function/calculation unit (and
sequencer) more complicated than necessary.

--------------------------------------------------------

We therefore, I think, have a case for bringing back ATAN and including ATAN2.

The reason is that whilst a microcode-like GPU-centric platform would do
ATAN2 in terms of ATAN, a UNIX-centric platform would do it the other way
round.

(That is the hypothesis, to be evaluated for correctness. Feedback requested.)

This is because we cannot compromise or prioritise one platform's
speed/accuracy over another's. That is not reasonable or desirable: it would
penalise one implementor over another.

Thus, to keep interoperability, all implementors must have both opcodes, and
may choose, at the architectural and routing level, which one to implement
in terms of the other.

Allowing implementors to add either opcode and let traps sort it out leaves
an uncertainty in the software developer's mind: they cannot trust the
hardware, available from many vendors, to be performant right across the
board.

Standards are a pig.

---

I might suggest that if there were a way for a calculation to be performed
and the result of that calculation chained to a subsequent calculation
such that the precision of the result-becomes-operand is wider than
what will fit in a register, then you can dramatically reduce the count
of instructions in this category while retaining acceptable accuracy:

    z = x / y

can be calculated as::

    z = x * (1/y)

where 1/y has about 26-to-32 bits of fraction. No, it's not IEEE 754-2008
accurate, but GPUs want speed, and 1/y is fully pipelined (F32) while x/y
cannot be (at reasonable area). It is also not "that inaccurate",
displaying 0.625-to-0.52 ULP.

Given that one has the ability to carry (and process) more fraction bits,
one can then do high-precision multiplies of π or other transcendental
radixes.

And GPUs have been doing this almost since the dawn of 3D.

    // calculate ATAN2 high performance style
    // Note: at this point x != y
    //
    if( x > 0.0 )
    {
        if( y < 0.0 && |y| < |x| ) return - π/2 - ATAN( x / y );
        if( y < 0.0 && |y| > |x| ) return + ATAN( y / x );
        if( y > 0.0 && |y| < |x| ) return + ATAN( y / x );
        if( y > 0.0 && |y| > |x| ) return + π/2 - ATAN( x / y );
    }
    if( x < 0.0 )
    {
        if( y < 0.0 && |y| < |x| ) return + π/2 + ATAN( x / y );
        if( y < 0.0 && |y| > |x| ) return + π - ATAN( y / x );
        if( y > 0.0 && |y| < |x| ) return + π - ATAN( y / x );
        if( y > 0.0 && |y| > |x| ) return +3π/2 + ATAN( x / y );
    }

This way the adds and subtracts from the constant are not in a
precision-precarious position.