[libreriscv.git] / ztrans_proposal.mdwn
1 # Zftrans - transcendental operations
2
3 With thanks to:
4
5 * Jacob Lifshay
6 * Dan Petroski
7 * Mitch Alsup
8 * Allen Baum
9 * Andrew Waterman
10 * Luis Vitorio Cargnini
11
12 [[!toc levels=2]]
13
14 See:
15
16 * <http://bugs.libre-riscv.org/show_bug.cgi?id=127>
17 * <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
18 * Discussion: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002342.html>
19 * [[rv_major_opcode_1010011]] for opcode listing.
20 * [[zfpacc_proposal]] for accuracy settings proposal
21
22 Extension subsets:
23
24 * **Zftrans**: standard transcendentals (best suited to 3D)
* **ZftransExt**: extra functions (useful, not generally needed for 3D,
  can be synthesised using Zftrans)
27 * **Ztrigpi**: trig. xxx-pi sinpi cospi tanpi
28 * **Ztrignpi**: trig non-xxx-pi sin cos tan
29 * **Zarctrigpi**: arc-trig. a-xxx-pi: atan2pi asinpi acospi
30 * **Zarctrignpi**: arc-trig. non-a-xxx-pi: atan2, asin, acos
31 * **Zfhyp**: hyperbolic/inverse-hyperbolic. sinh, cosh, tanh, asinh,
32 acosh, atanh (can be synthesised - see below)
33 * **ZftransAdv**: much more complex to implement in hardware
34 * **Zfrsqrt**: Reciprocal square-root.
35
36 Minimum recommended requirements for 3D: Zftrans, Ztrigpi, Zarctrigpi,
37 Zarctrignpi
38
39 # TODO:
40
41 * Decision on accuracy, moved to [[zfpacc_proposal]]
42 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002355.html>
43 * Errors **MUST** be repeatable.
44 * How about four Platform Specifications? 3DUNIX, UNIX, 3DEmbedded and Embedded?
45 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002361.html>
46 Accuracy requirements for dual (triple) purpose implementations must
47 meet the higher standard.
* Reciprocal Square-root is in its own separate extension (Zfrsqrt) as
  it is desirable on its own by other implementors. This is to be evaluated.
50
51 # Requirements <a name="requirements"></a>
52
53 This proposal is designed to meet a wide range of extremely diverse needs,
54 allowing implementors from all of them to benefit from the tools and hardware
55 cost reductions associated with common standards adoption.
56
**There are *four* different, disparate platforms' needs (two new)**:
58
59 * 3D Embedded Platform
60 * Embedded Platform
61 * 3D UNIX Platform
62 * UNIX Platform
63
64 **The use-cases are**:
65
66 * 3D GPUs
67 * Numerical Computation
68 * (Potentially) A.I. / Machine-learning (1)
69
(1) Although approximations suffice in this field, a custom extension is
more likely to be used there. High-end ML would definitely be excluded.
73
74 **The power and die-area requirements vary from**:
75
76 * Ultra-low-power (smartwatches where GPU power budgets are in milliwatts)
77 * Mobile-Embedded (good performance with high efficiency for battery life)
78 * Desktop Computing
79 * Server / HPC (2)
80
81 (2) Supercomputing is left out of the requirements as it is traditionally
82 covered by Supercomputer Vectorisation Standards (such as RVV).
83
84 **The software requirements are**:
85
86 * Full public integration into GNU math libraries (libm)
87 * Full public integration into well-known Numerical Computation systems (numpy)
88 * Full public integration into upstream GNU and LLVM Compiler toolchains
89 * Full public integration into Khronos OpenCL SPIR-V compatible Compilers
90 seeking public Certification and Endorsement from the Khronos Group
91 under their Trademarked Certification Programme.
92
93 **The "contra"-requirements are**:
94
95 * The requirements are **not** for the purposes of developing a full custom
96 proprietary GPU with proprietary firmware
97 driven by *hardware* centric optimised design decisions as a priority over collaboration.
* A full custom proprietary GPU ASIC Manufacturer *may* benefit from
  this proposal; however, the fact that they typically develop proprietary
  software that is not shared with the rest of the community likely to
  use this proposal means that they have completely different needs.
* This proposal is for *sharing* of effort in reducing development costs.
103
104 # Requirements Analysis <a name="requirements_analysis"></a>
105
106 **Platforms**:
107
108 3D Embedded will require significantly less accuracy and will need to make
109 power budget and die area compromises that other platforms (including Embedded)
110 will not need to make.
111
112 3D UNIX Platform has to be performance-price-competitive: subtly-reduced
113 accuracy in FP32 is acceptable where, conversely, in the UNIX Platform,
114 IEEE754 compliance is a hard requirement that would compromise power
115 and efficiency on a 3D UNIX Platform.
116
117 Even in the Embedded platform, IEEE754 interoperability is beneficial,
118 where if it was a hard requirement the 3D Embedded platform would be severely
119 compromised in its ability to meet the demanding power budgets of that market.
120
121 Thus, learning from the lessons of
122 [SIMD considered harmful](https://www.sigarch.org/simd-instructions-considered-harmful/)
123 this proposal works in conjunction with the [[zfpacc_proposal]], so as
124 not to overburden the OP32 ISA space with extra "reduced-accuracy" opcodes.
125
126 **Use-cases**:
127
128 There really is little else in the way of suitable markets. 3D GPUs
129 have extremely competitive power-efficiency and power-budget requirements
130 that are completely at odds with the other market at the other end of
131 the spectrum: Numerical Computation.
132
133 Interoperability in Numerical Computation is absolutely critical: it implies
134 IEEE754 compliance. However full IEEE754 compliance automatically and
135 inherently penalises a GPU, where accuracy is simply just not necessary.
136
137 To meet the needs of both markets, the two new platforms have to be created,
138 and [[zfpacc_proposal]] is a critical dependency. Runtime selection of
139 FP accuracy allows an implementation to be "Hybrid" - cover UNIX IEEE754
140 compliance *and* 3D performance in a single ASIC.
141
142 **Power and die-area requirements**:
143
144 This is where the conflicts really start to hit home.
145
146 A "Numerical High performance only" proposal (suitable for Server / HPC
147 only) would customise and target the Extension based on a quantitative
148 analysis of the value of certain opcodes *for HPC only*. It would
149 conclude, reasonably and rationally, that it is worthwhile adding opcodes
150 to RVV as parallel Vector operations, and that further discussion of
151 the matter is pointless.
152
153 A "Proprietary GPU effort" (even one that was intended for publication
154 of its API through, for example, a public libre-licensed Vulkan SPIR-V
155 Compiler) would conclude, reasonably and rationally, that, likewise, the
156 opcodes were best suited to be added to RVV, and, further, that their
157 requirements conflict with the HPC world, due to the reduced accuracy.
158 This on the basis that the silicon die area required for IEEE754 is far
159 greater than that needed for reduced-accuracy, and thus their product would
160 be completely unacceptable in the market.
161
162 An "Embedded 3D" GPU has radically different performance, power
163 and die-area requirements (and may even target SoftCores in FPGA).
164 Sharing of the silicon to cover multi-function uses (CORDIC for example)
165 is absolutely essential in order to keep cost and power down, and high
166 performance simply is not. Multi-cycle FSMs instead of pipelines may
167 be considered acceptable, and so on. Subsets of functionality are
168 also essential.
169
170 An "Embedded Numerical" platform has requirements that are separate and
171 distinct from all of the above!
172
173 Mobile Computing needs (tablets, smartphones) again pull in a different
174 direction: high performance, reasonable accuracy, but efficiency is
175 critical. Screen sizes are not at the 4K range: they are within the
176 800x600 range at the low end (320x240 at the extreme budget end), and
177 only the high-performance smartphones and tablets provide 1080p (1920x1080).
178 With lower resolution, accuracy compromises are possible which the Desktop
179 market (4k and soon to be above) would find unacceptable.
180
181 Meeting these disparate markets may be achieved, again, through
182 [[zfpacc_proposal]], by subdividing into four platforms, yet, in addition
183 to that, subdividing the extension into subsets that best suit the different
184 market areas.
185
186 **Software requirements**:
187
188 A "custom" extension is developed in near-complete isolation from the
189 rest of the RISC-V Community. Cost savings to the Corporation are
190 large, with no direct beneficial feedback to (or impact on) the rest
191 of the RISC-V ecosystem.
192
193 However given that 3D revolves around Standards - DirectX, Vulkan, OpenGL,
194 OpenCL - users have much more influence than first appears. Compliance
195 with these standards is critical as the userbase (Games writers, scientific
196 applications) expects not to have to rewrite large codebases to conform
197 with non-standards-compliant hardware.
198
Therefore, compliance with public APIs is paramount, and compliance with
Trademarked Standards is critical. Any deviation from Trademarked Standards
means that an implementation may not be sold as, or make the claim of being,
for example, "Vulkan compatible".
203
204 This in turn reinforces and makes a hard requirement a need for public
205 compliance with such standards, over-and-above what would otherwise be
206 set by a RISC-V Standards Development Process, including both the
207 software compliance and the knock-on implications that has for hardware.
208
209 **Collaboration**:
210
211 The case for collaboration on any Extension is already well-known.
212 In this particular case, the precedent for inclusion of Transcendentals
213 in other ISAs, both from Graphics and High-performance Computing, has
214 these primitives well-established in high-profile software libraries and
215 compilers in both GPU and HPC Computer Science divisions. Collaboration
216 and shared public compliance with those standards brooks no argument.
217
*Overall this proposal is categorically and wholly unsuited to
relegation to "custom" status*.
220
221 # Quantitative Analysis <a name="analysis"></a>
222
223 This is extremely challenging. Normally, an Extension would require full,
224 comprehensive and detailed analysis of every single instruction, for every
225 single possible use-case, in every single market. The amount of silicon
226 area required would be balanced against the benefits of introducing extra
227 opcodes, as well as a full market analysis performed to see which divisions
228 of Computer Science benefit from the introduction of the instruction,
229 in each and every case.
230
With 34 instructions, four possible Platforms, and sub-categories of
implementations even within each Platform, carrying out over 136 separate
and distinct analyses is not a practical proposition.
234
235 A little more intelligence has to be applied to the problem space,
236 to reduce it down to manageable levels.
237
238 Fortunately, the subdivision by Platform, in combination with the
239 identification of only two primary markets (Numerical Computation and
240 3D), means that the logical reasoning applies *uniformly* and broadly
241 across *groups* of instructions rather than individually, making it a primarily
242 hardware-centric and accuracy-centric decision-making process.
243
244 In addition, hardware algorithms such as CORDIC can cover such a wide
245 range of operations (simply by changing the input parameters) that the
246 normal argument of compromising and excluding certain opcodes because they
247 would significantly increase the silicon area is knocked down.
248
249 However, CORDIC, whilst space-efficient, and thus well-suited to
250 Embedded, is an old iterative algorithm not well-suited to High-Performance
251 Computing or Mid to High-end GPUs, where commercially-competitive
252 FP32 pipeline lengths are only around 5 stages.
253
254 Not only that, but some operations such as LOG1P, which would normally
255 be excluded from one market (due to there being an alternative macro-op
256 fused sequence replacing it) are required for other markets due to
257 the higher accuracy obtainable at the lower range of input values when
258 compared to LOG(1+P).
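The accuracy argument for LOG1P can be demonstrated with a short sketch
(using Python's libm bindings purely as a stand-in for the proposed opcodes):

```python
import math

# Why LOG1P is a separate opcode: for small x, the addition 1.0 + x
# rounds away most of x's low bits before the log ever sees them,
# so a synthesised LOG(1 + x) loses accuracy that log1p keeps.
x = 1e-10
naive = math.log(1.0 + x)   # 1.0 + 1e-10 rounds first: low bits lost
exact = math.log1p(x)       # computes log(1 + x) without that rounding

print(naive, exact)         # the two differ in roughly the 8th digit
```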
259
260 (Thus we start to see why "proprietary" markets are excluded from this
261 proposal, because "proprietary" markets would make *hardware*-driven
262 optimisation decisions that would be completely inappropriate for a
263 common standard).
264
ATAN and ATAN2 are another example area in which one market's needs
conflict directly with another's: the only viable solution, without
compromising one market to the detriment of the other, is to provide both
opcodes and let implementors make the call as to which (or both) to optimise,
at the *hardware* level.
270
271 Likewise it is well-known that loops involving "0 to 2 times pi", often
272 done in subdivisions of powers of two, are costly to do because they
273 involve floating-point multiplication by PI in each and every loop.
274 3D GPUs solved this by providing SINPI variants which range from 0 to 1
275 and perform the multiply *inside* the hardware itself. In the case of
276 CORDIC, it turns out that the multiply by PI is not even needed (is a
277 loop invariant magic constant).
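The benefit of moving the multiply inside the hardware can be seen even in
double precision (Python's `math.cos` standing in for a synthesised COSPI):

```python
import math

# What SINPI/COSPI buy: the multiply by pi happens *inside* the unit,
# so cospi(0.5) can return exactly 0.0 (the argument 0.5 is exact).
# Synthesising it as cos(pi * x) multiplies by a *rounded* pi first,
# so the result is a tiny nonzero value instead of the exact zero.
t = 0.5
synthesised = math.cos(math.pi * t)
print(synthesised)   # about 6.1e-17, not 0.0
```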
278
279 However, some markets may not wish to *use* CORDIC, for reasons mentioned
280 above, and, again, one market would be penalised if SINPI was prioritised
281 over SIN, or vice-versa.
282
283 In essence, then, even when only the two primary markets (3D and Numerical Computation) have been identified, this still leaves two (three) diametrically-opposed *accuracy* sub-markets as the prime conflict drivers:
284
285 * Embedded Ultra Low Power
286 * IEEE754 compliance
287 * Khronos Vulkan compliance
288
289 Thus the best that can be done is to use Quantitative Analysis to work
290 out which "subsets" - sub-Extensions - to include, provide an additional "accuracy" extension, be as "inclusive"
291 as possible, and thus allow implementors to decide what to add to their
292 implementation, and how best to optimise them.
293
294 This approach *only* works due to the uniformity of the function space,
295 and is **not** an appropriate methodology for use in other Extensions
296 with huge (non-uniform) market diversity even with similarly large numbers of potential opcodes.
297 BitManip is the perfect counter-example.
298
299 # Proposed Opcodes vs Khronos OpenCL Opcodes <a name="khronos_equiv"></a>
300
301 This list shows the (direct) equivalence between proposed opcodes and
302 their Khronos OpenCL equivalents.
303
304 See
305 <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
306
307 * Special FP16 opcodes are *not* being proposed, except by indirect / inherent
308 use of the "fmt" field that is already present in the RISC-V Specification.
309 * "Native" opcodes are *not* being proposed: implementors will be expected
310 to use the (equivalent) proposed opcode covering the same function.
311 * "Fast" opcodes are *not* being proposed, because the Khronos Specification
312 fast\_length, fast\_normalise and fast\_distance OpenCL opcodes require
313 vectors (or can be done as scalar operations using other RISC-V instructions).
314
315 The OpenCL FP32 opcodes are **direct** equivalents to the proposed opcodes.
316 Deviation from conformance with the Khronos Specification - including the
317 Khronos Specification accuracy requirements - is not an option, as it
318 results in non-compliance, and the vendor may not use the Trademarked words
319 "Vulkan" etc. in conjunction with their product.
320
321 [[!table data="""
322 Proposed opcode | OpenCL FP32 | OpenCL FP16 | OpenCL native | OpenCL fast |
323 FSIN | sin | half\_sin | native\_sin | NONE |
324 FCOS | cos | half\_cos | native\_cos | NONE |
325 FTAN | tan | half\_tan | native\_tan | NONE |
326 NONE (1) | sincos | NONE | NONE | NONE |
327 FASIN | asin | NONE | NONE | NONE |
328 FACOS | acos | NONE | NONE | NONE |
329 FATAN | atan | NONE | NONE | NONE |
330 FSINPI | sinpi | NONE | NONE | NONE |
331 FCOSPI | cospi | NONE | NONE | NONE |
332 FTANPI | tanpi | NONE | NONE | NONE |
333 FASINPI | asinpi | NONE | NONE | NONE |
334 FACOSPI | acospi | NONE | NONE | NONE |
335 FATANPI | atanpi | NONE | NONE | NONE |
336 FSINH | sinh | NONE | NONE | NONE |
337 FCOSH | cosh | NONE | NONE | NONE |
338 FTANH | tanh | NONE | NONE | NONE |
339 FASINH | asinh | NONE | NONE | NONE |
340 FACOSH | acosh | NONE | NONE | NONE |
341 FATANH | atanh | NONE | NONE | NONE |
342 FRSQRT | rsqrt | half\_rsqrt | native\_rsqrt | NONE |
343 FCBRT | cbrt | NONE | NONE | NONE |
344 FEXP2 | exp2 | half\_exp2 | native\_exp2 | NONE |
345 FLOG2 | log2 | half\_log2 | native\_log2 | NONE |
346 FEXPM1 | expm1 | NONE | NONE | NONE |
347 FLOG1P | log1p | NONE | NONE | NONE |
348 FEXP | exp | half\_exp | native\_exp | NONE |
349 FLOG | log | half\_log | native\_log | NONE |
350 FEXP10 | exp10 | half\_exp10 | native\_exp10 | NONE |
351 FLOG10 | log10 | half\_log10 | native\_log10 | NONE |
352 FATAN2 | atan2 | NONE | NONE | NONE |
353 FATAN2PI | atan2pi | NONE | NONE | NONE |
354 FPOW | pow | NONE | NONE | NONE |
355 FROOT | rootn | NONE | NONE | NONE |
356 FHYPOT | hypot | NONE | NONE | NONE |
357 FRECIP | NONE | half\_recip | native\_recip | NONE |
358 """]]
359
Note (1): FSINCOS is macro-op fused (see below).
361
362 # List of 2-arg opcodes
363
364 [[!table data="""
365 opcode | Description | pseudo-code | Extension |
366 FATAN2 | atan2 arc tangent | rd = atan2(rs2, rs1) | Zarctrignpi |
367 FATAN2PI | atan2 arc tangent / pi | rd = atan2(rs2, rs1) / pi | Zarctrigpi |
368 FPOW | x power of y | rd = pow(rs1, rs2) | ZftransAdv |
369 FROOT | x power 1/y | rd = pow(rs1, 1/rs2) | ZftransAdv |
370 FHYPOT | hypotenuse | rd = sqrt(rs1^2 + rs2^2) | ZftransAdv |
371 """]]
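The table's pseudo-code can be expressed as a minimal software reference
model (the Python function names are illustrative only, not proposed
mnemonics; note the rs2/rs1 operand order of FATAN2 from the table):

```python
import math

# Illustrative reference model for the 2-arg opcodes above.
def fatan2(rs1, rs2):   return math.atan2(rs2, rs1)
def fatan2pi(rs1, rs2): return math.atan2(rs2, rs1) / math.pi
def fpow(rs1, rs2):     return math.pow(rs1, rs2)
def froot(rs1, rs2):    return math.pow(rs1, 1.0 / rs2)  # x ** (1/y)
def fhypot(rs1, rs2):   return math.hypot(rs1, rs2)

# atan2(1, 1) = pi/4, so FATAN2PI of the same inputs is ~0.25
print(fatan2pi(1.0, 1.0))
print(froot(27.0, 3.0))   # cube root of 27
```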
372
373 # List of 1-arg transcendental opcodes
374
375 [[!table data="""
376 opcode | Description | pseudo-code | Extension |
FRSQRT | Reciprocal Square-root | rd = 1.0 / sqrt(rs1) | Zfrsqrt |
378 FCBRT | Cube Root | rd = pow(rs1, 1.0 / 3) | ZftransAdv |
379 FRECIP | Reciprocal | rd = 1.0 / rs1 | Zftrans |
380 FEXP2 | power-of-2 | rd = pow(2, rs1) | Zftrans |
FLOG2 | log2 | rd = log(2, rs1) | Zftrans |
382 FEXPM1 | exponential minus 1 | rd = pow(e, rs1) - 1.0 | Zftrans |
FLOG1P | log of 1 plus x | rd = log(e, 1 + rs1) | Zftrans |
384 FEXP | exponential | rd = pow(e, rs1) | ZftransExt |
385 FLOG | natural log (base e) | rd = log(e, rs1) | ZftransExt |
386 FEXP10 | power-of-10 | rd = pow(10, rs1) | ZftransExt |
387 FLOG10 | log base 10 | rd = log(10, rs1) | ZftransExt |
388 """]]
389
390 # List of 1-arg trigonometric opcodes
391
392 [[!table data="""
393 opcode | Description | pseudo-code | Extension |
394 FSIN | sin (radians) | rd = sin(rs1) | Ztrignpi |
395 FCOS | cos (radians) | rd = cos(rs1) | Ztrignpi |
396 FTAN | tan (radians) | rd = tan(rs1) | Ztrignpi |
397 FASIN | arcsin (radians) | rd = asin(rs1) | Zarctrignpi |
398 FACOS | arccos (radians) | rd = acos(rs1) | Zarctrignpi |
399 FATAN (1) | arctan (radians) | rd = atan(rs1) | Zarctrignpi |
FSINPI | sin of pi times x | rd = sin(pi * rs1) | Ztrigpi |
FCOSPI | cos of pi times x | rd = cos(pi * rs1) | Ztrigpi |
FTANPI | tan of pi times x | rd = tan(pi * rs1) | Ztrigpi |
403 FASINPI | arcsin / pi | rd = asin(rs1) / pi | Zarctrigpi |
404 FACOSPI | arccos / pi | rd = acos(rs1) / pi | Zarctrigpi |
405 FATANPI (1) | arctan / pi | rd = atan(rs1) / pi | Zarctrigpi |
406 FSINH | hyperbolic sin (radians) | rd = sinh(rs1) | Zfhyp |
407 FCOSH | hyperbolic cos (radians) | rd = cosh(rs1) | Zfhyp |
408 FTANH | hyperbolic tan (radians) | rd = tanh(rs1) | Zfhyp |
409 FASINH | inverse hyperbolic sin | rd = asinh(rs1) | Zfhyp |
410 FACOSH | inverse hyperbolic cos | rd = acosh(rs1) | Zfhyp |
411 FATANH | inverse hyperbolic tan | rd = atanh(rs1) | Zfhyp |
412 """]]
413
414 Note (1): FATAN/FATANPI is a pseudo-op expanding to FATAN2/FATAN2PI (needs deciding)
415
416 # Subsets
417
The subsets are organised by hardware complexity and by need (3D, HPC). However, because synthesis produces inaccurate results at the range limits, the less common subsets are still required for IEEE754 HPC.
419
MALI Midgard, an embedded 3D GPU, for example, only has the following opcodes:
421
422 E8 - fatan_pt2
423 F0 - frcp (reciprocal)
424 F2 - frsqrt (inverse square root, 1/sqrt(x))
425 F3 - fsqrt (square root)
426 F4 - fexp2 (2^x)
427 F5 - flog2
428 F6 - fsin
429 F7 - fcos
430 F9 - fatan_pt1
431
These are in FP32 and FP16 only: no FP64 hardware, at all.
433
Vivante 3D (etnaviv <https://github.com/laanwj/etna_viv/blob/master/rnndb/isa.xml>) has sin, cos, sin2pi, cos2pi, log2, exp, sqrt, rsqrt and recip. It also has fast variants of some of these, as a CSR Mode.
435
Also, as a general point: customised, optimised hardware targeting FP32 3D with less accuracy can be used neither for IEEE754 nor for FP64 (except as a starting point for hardware- or software-driven Newton-Raphson or other iterative methods).
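As a sketch of that "starting point" idea: one Newton-Raphson step for a
reciprocal square-root roughly doubles the number of correct bits, so a
reduced-accuracy 3D result can seed a higher-precision value (the poor
initial estimate below is this sketch's own, not from any real hardware):

```python
import math

# One Newton-Raphson step for y ~ 1/sqrt(x):
#   y' = y * (1.5 - 0.5 * x * y * y)
def nr_rsqrt_step(x, y):
    return y * (1.5 - 0.5 * x * y * y)

x = 2.0
y = 0.7   # deliberately poor initial estimate of 1/sqrt(2) ~ 0.7071
for _ in range(4):
    y = nr_rsqrt_step(x, y)

print(y, 1.0 / math.sqrt(2.0))
```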
437
438 Also in cost/area sensitive applications even the extra ROM lookup tables for certain algorithms may be too costly.
439
440 These wildly differing and incompatible driving factors lead to the subset subdivisions, below.
441
442 ## Zftrans
443
Zftrans contains the standard transcendentals best suited to 3D: FRECIP, FEXP2, FLOG2, FEXPM1 and FLOG1P. They are also the minimum subset for synthesising atan, acos and so on.
445
446
447
448 ## ZftransExt
449
450 LOG, EXP, EXP10, LOG10
451
452 These are extra transcendental functions that are useful, not generally needed for 3D, however for Numerical Computation they may be useful.
453
Although they can be synthesised using Zftrans (LOG2 multiplied by a constant), there is both a performance penalty and an accuracy penalty towards the limits, which for IEEE754 compliance is unacceptable.
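The "LOG2 multiplied by a constant" synthesis looks like this sketch
(Python's `math.log2` and `2.0 ** n` stand in for the Zftrans FLOG2/FEXP2
primitives; the function names are illustrative only):

```python
import math

LOG2_10 = math.log2(10.0)   # the loop-invariant constant

def log10_from_log2(x):
    # FLOG10 synthesised from FLOG2 and one multiply/divide
    return math.log2(x) / LOG2_10

def exp10_from_exp2(x):
    # FEXP10 synthesised from FEXP2 and one multiply
    return 2.0 ** (LOG2_10 * x)

# works mid-range, but the extra rounding steps cost accuracy at the limits
print(log10_from_log2(1000.0), exp10_from_exp2(2.0))
```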
455
456 Their forced inclusion would be inappropriate as it would penalise embedded systems with tight power and area budgets. However if they were completely excluded the HPC applications would be penalised on performance and accuracy.
457
458 Therefore they are their own subset extension.
459
460 ## Ztrigpi vs Ztrignpi
461
462 * **Ztrigpi**: trig. xxx-pi sinpi cospi tanpi
463 * **Ztrignpi**: trig non-xxx-pi sin cos tan
464
465 Ztrignpi are the basic trigonometric functions through which all others could be synthesised. However as can be seen from other sections, there is an accuracy penalty for doing so which will not be acceptable for IEEE754 compliance.
466
467 In the case of the Ztrigpi subset, these are commonly used in for loops with a power of two number of subdivisions, and the cost of multiplying by PI is not an acceptable one.
468
469 In for example CORDIC the multiplication by PI may be moved outside of the hardware algorithm as a loop invariant, with no power or area penalty.
470
471 Thus again, the same argument applies to give Ztrignpi and Ztrigpi as subsets.
472
473 ## Zarctrigpi and Zarctrignpi
474
475 * **Zarctrigpi**: arc-trig. a-xxx-pi: atan2pi asinpi acospi
476 * **Zarctrignpi**: arc-trig. non-a-xxx-pi: atan2, asin, acos
477
478 These are extra trigonometric functions that are useful in some applications.
479
480 Although they can be synthesised using Ztrigpi and Ztrignpi, there is both a performance penalty as well as an accuracy penalty towards the limits, which for IEEE754 compliance is unacceptable, yet is acceptable for 3D.
481
482 Their forced inclusion would be inappropriate as it would penalise embedded systems with tight power and area budgets. However if they were completely excluded the HPC applications would be penalised on performance and accuracy.
483
484 Therefore they are their own subset extension.
485
486 ## Zfhyp
487
These are the hyperbolic/inverse-hyperbolic functions: sinh, cosh, tanh, asinh, acosh, atanh.
489
490 They can all be synthesised using LOG, SQRT and so on, so depend on Zftrans.
491 However, once again, at the limits of the range, IEEE754 compliance becomes impossible, and thus a hardware implementation may be required.
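The standard log/sqrt synthesis identities, and the range-limit accuracy
loss that motivates hardware Zfhyp, can be sketched as follows (the
function names are illustrative, not proposed mnemonics):

```python
import math

# Synthesis of the inverse-hyperbolic functions from LOG and SQRT:
def asinh_synth(x): return math.log(x + math.sqrt(x * x + 1.0))
def acosh_synth(x): return math.log(x + math.sqrt(x * x - 1.0))
def atanh_synth(x): return 0.5 * math.log((1.0 + x) / (1.0 - x))

# mid-range, the synthesis agrees with a dedicated implementation...
print(asinh_synth(2.0), math.asinh(2.0))

# ...but at the bottom of the range the 1.0 + tiny-x addition inside
# the log destroys accuracy, hence the case for hardware Zfhyp
small = 1e-10
print(asinh_synth(small), math.asinh(small))
```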
492
493
494
## ZftransAdv

FCBRT, FPOW, FROOT and FHYPOT: much more complex to implement in hardware.

## Zfrsqrt

FRSQRT (Reciprocal Square-root) is in its own subset as it is desirable on its own by other implementors.
497
498 # Synthesis, Pseudo-code ops and macro-ops
499
500 The pseudo-ops are best left up to the compiler rather than being actual
501 pseudo-ops, by allocating one scalar FP register for use as a constant
502 (loop invariant) set to "1.0" at the beginning of a function or other
503 suitable code block.
504
505 * FSINCOS - fused macro-op between FSIN and FCOS (issued in that order).
506 * FSINCOSPI - fused macro-op between FSINPI and FCOSPI (issued in that order).
507
508 FATANPI example pseudo-code:
509
    lui t0, 0x3F800      // upper bits of f32 1.0
    fmv.w.x ft0, t0      // move the bit pattern into an FP register
    fatan2pi.s rd, rs1, ft0
513
Hyperbolic function example (obviates the need for Zfhyp except for
high-performance or correctly-rounded implementations):
516
517 ASINH( x ) = ln( x + SQRT(x**2+1))
518
519 # Reciprocal
520
FRECIP used to be an alias. Some implementors may wish to implement divide as y times recip(x).
522
523 Others may have shared hardware for recip and divide, others may not.
524
525 To avoid penalising one implementor over another, recip stays.
526
527 # To evaluate: should LOG be replaced with LOG1P (and EXP with EXPM1)?
528
The RISC principle says "exclude LOG because it's covered by LOG1P plus an ADD".
Research is needed to ensure that implementors are not compromised by such
a decision:
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002358.html>
533
> > correctly-rounded LOG will return different results than LOG1P and ADD.
535 > > Likewise for EXP and EXPM1
536
537 > ok, they stay in as real opcodes, then.
538
539 # ATAN / ATAN2 commentary
540
541 Discussion starts here:
542 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002470.html>
543
544 from Mitch Alsup:
545
546 would like to point out that the general implementations of ATAN2 do a
547 bunch of special case checks and then simply call ATAN.
548
549 double ATAN2( double y, double x )
550 { // IEEE 754-2008 quality ATAN2
551
552 // deal with NANs
553 if( ISNAN( x ) ) return x;
554 if( ISNAN( y ) ) return y;
555
556 // deal with infinities
557 if( x == +∞ && |y|== +∞ ) return copysign( π/4, y );
558 if( x == +∞ ) return copysign( 0.0, y );
559 if( x == -∞ && |y|== +∞ ) return copysign( 3π/4, y );
560 if( x == -∞ ) return copysign( π, y );
561 if( |y|== +∞ ) return copysign( π/2, y );
562
563 // deal with signed zeros
564 if( x == 0.0 && y != 0.0 ) return copysign( π/2, y );
565 if( x >=+0.0 && y == 0.0 ) return copysign( 0.0, y );
566 if( x <=-0.0 && y == 0.0 ) return copysign( π, y );
567
568 // calculate ATAN2 textbook style
    if( x > 0.0 ) return copysign( ATAN( |y / x| ), y );
    if( x < 0.0 ) return copysign( π - ATAN( |y / x| ), y );
571 }
572
573
574 Yet the proposed encoding makes ATAN2 the primitive and has ATAN invent
575 a constant and then call/use ATAN2.
576
577 When one considers an implementation of ATAN, one must consider several
578 ranges of evaluation::
579
x [ -∞, -1.0]:: ATAN( x ) = -π/2 - ATAN( 1/x );
581 x (-1.0, +1.0]:: ATAN( x ) = + ATAN( x );
582 x [ 1.0, +∞]:: ATAN( x ) = +π/2 - ATAN( 1/x );
583
584 I should point out that the add/sub of π/2 can not lose significance
585 since the result of ATAN(1/x) is bounded 0..π/2
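Those range-reduction identities can be checked numerically (note that the
negative branch *subtracts* ATAN(1/x)); Python's `math.atan` stands in for
the primitive:

```python
import math

# Numerical check of the quoted range reduction:
#   x >=  1.0: ATAN(x) = +pi/2 - ATAN(1/x)
#   x <= -1.0: ATAN(x) = -pi/2 - ATAN(1/x)
def atan_reduced(x):
    if x >= 1.0:
        return math.pi / 2.0 - math.atan(1.0 / x)
    if x <= -1.0:
        return -math.pi / 2.0 - math.atan(1.0 / x)
    return math.atan(x)

for v in (-7.0, -1.0, 0.3, 1.0, 5.0):
    print(v, atan_reduced(v), math.atan(v))
```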
586
587 The bottom line is that I think you are choosing to make too many of
588 these into OpCodes, making the hardware function/calculation unit (and
sequencer) more complicated than necessary.
590
591 --------------------------------------------------------
592
We therefore, I think, have a case for bringing back ATAN and including ATAN2.
594
595 The reason is that whilst a microcode-like GPU-centric platform would do ATAN2 in terms of ATAN, a UNIX-centric platform would do it the other way round.
596
597 (that is the hypothesis, to be evaluated for correctness. feedback requested).
598
This is because we cannot compromise or prioritise one platform's speed/accuracy over another: it is not reasonable or desirable to penalise one implementor over another.
600
Thus, all implementors, to keep interoperability, must have both opcodes, and may choose, at the architectural and routing level, which one to implement in terms of the other.
602
603 Allowing implementors to choose to add either opcode and let traps sort it out leaves an uncertainty in the software developer's mind: they cannot trust the hardware, available from many vendors, to be performant right across the board.
604
605 Standards are a pig.
606
607 ---
608
609 I might suggest that if there were a way for a calculation to be performed
610 and the result of that calculation chained to a subsequent calculation
611 such that the precision of the result-becomes-operand is wider than
612 what will fit in a register, then you can dramatically reduce the count
of instructions in this category while retaining acceptable accuracy:
616
617 z = x / y
618
619 can be calculated as::
620
621 z = x * (1/y)
622
623 Where 1/y has about 26-to-32 bits of fraction. No, it's not IEEE 754-2008
accurate, but GPUs want speed, and 1/y is fully pipelined (F32) while
x/y cannot be (at reasonable area). It
627 is also not "that inaccurate" displaying 0.625-to-0.52 ULP.
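That divide-via-reciprocal is not correctly rounded shows up even in IEEE754
double precision; 49/49 is the classic counter-example:

```python
# Dividing via a rounded reciprocal, z = x * (1/y), is not always
# correctly rounded - the speed/accuracy trade-off described above.
x, y = 49.0, 49.0
via_recip = x * (1.0 / y)   # reciprocal rounds first, then the multiply rounds
direct = x / y              # one correctly-rounded IEEE754 division

print(via_recip, direct)    # the recip route misses 1.0 by one ULP
```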
628
629 Given that one has the ability to carry (and process) more fraction bits,
630 one can then do high precision multiplies of π or other transcendental
631 radixes.
632
633 And GPUs have been doing this almost since the dawn of 3D.
634
635 // calculate ATAN2 high performance style
636 // Note: at this point x != y
637 //
638 if( x > 0.0 )
639 {
640 if( y < 0.0 && |y| < |x| ) return - π/2 - ATAN( x / y );
641 if( y < 0.0 && |y| > |x| ) return + ATAN( y / x );
642 if( y > 0.0 && |y| < |x| ) return + ATAN( y / x );
643 if( y > 0.0 && |y| > |x| ) return + π/2 - ATAN( x / y );
644 }
645 if( x < 0.0 )
646 {
647 if( y < 0.0 && |y| < |x| ) return + π/2 + ATAN( x / y );
648 if( y < 0.0 && |y| > |x| ) return + π - ATAN( y / x );
649 if( y > 0.0 && |y| < |x| ) return + π - ATAN( y / x );
650 if( y > 0.0 && |y| > |x| ) return +3π/2 + ATAN( x / y );
651 }
652
653 This way the adds and subtracts from the constant are not in a precision
654 precarious position.