# Notes

at FOSDEM 2018 when Yunsup and the team announced the U540 there was
some discussion about this: it was one of the questions asked. one of
the possibilities raised there was that maddog was heading something:
i've looked for that effort, and have not been able to find it [jon is
getting quite old, now, bless him. he had to have an operation last
year. he's recovered well].

also at the Barcelona Conference, in the very-very-very-rapid talk on
the Libre RISC-V chip that i have been tasked with, i mentioned that
if there is absolutely no other option it will use the Vivante GC800
(and, obviously, use etnaviv). what *that* means is that there is a
definite budget of USD $250,000 available which the (anonymous)
sponsor is definitely willing to spend... so if anyone can come up
with an alternative that is entirely libre and open, i can put that
initiative to the sponsor for evaluation.

basically i've been looking at this for several months, and have been
talking to various people (jeff bush from nyuzi [1] and chiselgpu [3],
frank from gplgpu [2], VRG for MIAOW [4]) to get a feel for what would
be involved.

* miaow is just an OpenCL engine that is compatible with a subset of
AMD/ATI's OpenCL assembly code. it is NOT a GPU. they have
preliminary plans to *make* one... however the development process is
not open. we'll hear about it if and when it succeeds, probably as
part of a published research paper.

* nyuzi is a *modern* "software shader / renderer" and is a
replication of the intel larrabee architecture. it explored the
concept of doing recursive software-driven rasterisation (as did
larrabee), where hardware rasterisation uses brute force and often
wastes time and power; a rough sketch of the recursive-subdivision
idea follows this list. jeff went to a lot of trouble to find out
*why* intel's researchers were um "not permitted" to actually put
performance numbers into their published papers. he found out why :)
one of the main facts that jeff's research reveals (and there are a
lot of them) is that most of the energy of a GPU is spent getting data
back and forth across the L1/L2 cache barrier; secondly, when doing
software-only rendering, you need several instruction cycles for
operations where a hardware design issues a single instruction and a
separate pipeline takes over (see videocore-iv below).

* chiselgpu was an additional effort by jeff to create the absolute
minimum required tile-based "triangle renderer" in hardware, for
comparative purposes in the nyuzi raster engine research. he pointed
out to me that synthesis of such a block would actually be *enormous*,
despite appearances from how little code there is in the chiselgpu
repository. in his paper he mentions that when such hardware
renderers are deployed, the rest of the GPU spends the majority of
its time struggling to keep the hardware rasteriser fed, so you have
to put in multiple threads, and that brings its own problems. it's
all in the paper, it's fascinating stuff.

* gplgpu was done by one of the original developers of the "Number
Nine" GPU, and is based around a "fixed function" design; as such it
is no longer considered suitable by the modern 3D developer community
(they hate having to code for fixed-function hardware), and its
performance would be *really* hard to optimise and extend. however in
speaking to jeff, who analysed it quite comprehensively, he said that
a large number of its features (such as the 4-tuple floating-point
colour to 16/32-bit ARGB fixed functions) have retained a presence in
modern designs, so it's still useful for inspiration and analysis
purposes. you can see jeff's analysis here [7]

* an extremely useful resource has been the videocore-iv project [8],
which has collected documentation and part-implemented compiler tools.
the architecture is quite interesting: it's a hybrid of a
software-driven Vector architecture similar to Nyuzi, plus
fixed functions on separate pipelines, such as the "take 4-tuple FP,
turn it into fixed-point ARGB and overlay it into the tile"
instruction. that's done as a *single* instruction covering i think 4
pixels, where Nyuzi requires an average of 4 cycles per pixel. the
other thing about videocore-iv is that there is a separate internal
"scratch" memory area of size 4x4 (x32-bit) which is the "tile" area,
and focussing on filling just that is one of the things that saves
power. jeff did a walkthrough, you can read it here [10] [11]

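as a rough illustration of the recursive-subdivision idea mentioned in
the nyuzi bullet above: instead of brute-force testing every pixel of
a tile against a triangle, the tile is recursively quartered and whole
sub-tiles that fall entirely outside one edge are rejected in a single
test. the following python is purely illustrative (the edge-function
test and the function names are a textbook simplification of mine, not
nyuzi's actual code):

    # illustrative sketch only: recursive tile subdivision vs. brute force.
    # the edge-function formulation is a textbook simplification, not
    # nyuzi's actual implementation.

    def edge(ax, ay, bx, by, px, py):
        # signed-area test: > 0 means point p is to the left of edge a->b
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    def inside(tri, px, py):
        (ax, ay), (bx, by), (cx, cy) = tri
        return (edge(ax, ay, bx, by, px, py) >= 0 and
                edge(bx, by, cx, cy, px, py) >= 0 and
                edge(cx, cy, ax, ay, px, py) >= 0)

    def corners_outside(tri, x, y, size):
        # conservative reject: if all four tile corners lie outside the
        # same edge, the whole sub-tile cannot intersect the triangle
        pts = [(x, y), (x + size, y), (x, y + size), (x + size, y + size)]
        (ax, ay), (bx, by), (cx, cy) = tri
        for (ex0, ey0, ex1, ey1) in ((ax, ay, bx, by),
                                     (bx, by, cx, cy),
                                     (cx, cy, ax, ay)):
            if all(edge(ex0, ey0, ex1, ey1, px, py) < 0 for (px, py) in pts):
                return True
        return False

    def rasterise(tri, x, y, size, plot):
        if corners_outside(tri, x, y, size):
            return                      # reject the whole sub-tile in one go
        if size == 1:
            if inside(tri, x + 0.5, y + 0.5):
                plot(x, y)              # single pixel: the only brute-force test left
            return
        half = size // 2
        for (sx, sy) in ((x, y), (x + half, y),
                         (x, y + half), (x + half, y + half)):
            rasterise(tri, sx, sy, half, plot)

    # usage: fill a 64x64 tile with one (counter-clockwise) triangle
    pixels = []
    rasterise(((3, 3), (60, 10), (20, 55)), 0, 0, 64,
              lambda x, y: pixels.append((x, y)))

the point of the sketch is only that large empty regions are discarded
with a handful of tests, which is exactly the work a brute-force
hardware rasteriser burns power repeating per pixel.
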
so on this basis i have been investigating a couple of proposals for
RISC-V extensions: one is Simple-V [9] and the other is a *small*
general-purpose memory-scratch area extension, which would sit on the
*other* side of the L1/L2 cache and be accessible *ONLY* by an
individual core [or its hyperthreads]. small would be essential
because if a context-switch occurs it would be necessary to swap the
scratch-area out to main memory (and back). general-purpose so that
it's useful and usable in other contexts and situations.

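to make that context-switch cost concrete: the scratch area behaves
like an extra register file that the kernel has to save and restore
around every task switch, so its size directly sets the added switch
latency. a minimal python model of that behaviour (the size and the
names are purely illustrative, not part of the proposal):

    # illustrative model only: why the per-core scratch area must be small.
    # on every context switch the whole scratchpad is written out to the
    # outgoing task's save area and the incoming task's copy is read back,
    # so the added switch cost grows linearly with SCRATCH_WORDS.

    SCRATCH_WORDS = 16                    # hypothetical size: 16 x 32-bit words

    class Core:
        def __init__(self):
            # per-core memory, sitting on the core side of the L1/L2
            self.scratch = [0] * SCRATCH_WORDS

    def context_switch(core, old_task, new_task):
        old_task["scratch"] = list(core.scratch)             # SCRATCH_WORDS stores
        core.scratch[:] = new_task.get("scratch",
                                       [0] * SCRATCH_WORDS)  # SCRATCH_WORDS loads

    # usage: two tasks sharing one core
    core, task_a, task_b = Core(), {}, {}
    core.scratch[0] = 0xDEADBEEF
    context_switch(core, task_a, task_b)   # task_a saved, task_b (empty) restored
    assert core.scratch[0] == 0
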
whilst there are many additional reasons - justifications that make
it attractive for *general-purpose* usage (such as accidentally
providing LD.MULTI and ST.MULTI for context-switching and efficient
storing of function-call parameters on the stack, and an accidental
single-instruction "memcpy" and "memzero") - the primary driver behind
Simple-V has been as the basis for turning RISC-V into an
embedded-style (low-power) GPU (and also a VPU).

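to illustrate the "accidental memcpy / memzero" point: if ordinary
scalar LOAD and STORE are simply re-issued over a vector length VL,
a load/store pair copies VL words and a vectorised store of a zeroed
register group clears VL words; pointed at the stack, the same
operations act as the LD.MULTI / ST.MULTI used for context-switching.
a toy python model of the idea (the setvl() call, register numbering
and word-addressed memory are simplifications for illustration, not
the actual Simple-V encoding):

    # toy model: scalar LOAD/STORE re-interpreted over a vector length VL
    # behave as LD.MULTI / ST.MULTI, and therefore as a block copy / clear.
    # everything here is illustrative, not the real Simple-V encoding.

    MEM = [0] * 1024          # word-addressed "main memory"
    REGS = [0] * 32           # integer register file
    VL = 1                    # current vector length

    def setvl(n):
        global VL
        VL = n

    def vload(rd, addr):      # one "instruction": VL consecutive loads
        for i in range(VL):
            REGS[rd + i] = MEM[addr + i]

    def vstore(rs, addr):     # one "instruction": VL consecutive stores
        for i in range(VL):
            MEM[addr + i] = REGS[rs + i]

    # "memcpy" of 8 words: a vectorised load followed by a vectorised store
    setvl(8)
    vload(8, 0x100)
    vstore(8, 0x200)

    # "memzero" of 8 words: store a zeroed register group
    REGS[8:16] = [0] * 8
    vstore(8, 0x300)
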
one of the things that's lacking from
[RVV](https://github.com/riscv/riscv-v-spec/blob/master/v-spec.adoc)
is parallelisation of Bit-Manipulation. RVV has been primarily
designed based on input from the Supercomputer community, and as such
it's *incredible*. absolutely amazing... but only desirable to
implement if you need to build a Supercomputer.

Simple-V is therefore designed to parallelise *everything*: custom
extensions, future extensions, current extensions, current
instructions, *everything*. RVV, once it has been implemented in gcc
for example, would require heavy customisation to support e.g.
Bit-Manipulation: special Bit-Manipulation Vector instructions would
have to be added *to RVV*... all of which would need to AGAIN go
through the Extension Proposal process... you can imagine how that
would go, and the subsequent cost of maintaining gcc, binutils and so
on as a long-term preliminary fork (or, if the extension to RVV is
not accepted after all the hard work, even a permanent hard-fork).

in other words, once you've been through the "Extension Proposal
Process" with Simple-V, it need never be done again: not for one
single parallel / vector / SIMD instruction, ever again.

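a minimal way to picture what Simple-V does (this is a conceptual
sketch of the "mark registers as vectors, leave the opcodes alone"
idea, not the actual encoding described in [9]): decode checks whether
an operand register has been tagged with a vector length, and if so
the *existing scalar* opcode is simply issued that many times over
consecutive registers, so any opcode, including a bit-manipulation
one, is parallelised without new instructions ever being defined.

    # conceptual sketch of the Simple-V idea: existing scalar opcodes are
    # expanded into VL element operations at decode time, so *any* scalar
    # opcode (including bit-manipulation) is parallelised without defining
    # new vector instructions.  the tagging scheme, register numbering and
    # opcode table are illustrative only, not the actual Simple-V spec.

    REGS = [0] * 128                  # register file
    VTAG = {}                         # regnum -> vector length, set via a CSR

    SCALAR_OPS = {
        "add":  lambda a, b: (a + b) & 0xFFFFFFFF,
        "andn": lambda a, b: a & ~b & 0xFFFFFFFF,   # a bit-manipulation-style op
    }

    def execute(op, rd, rs1, rs2):
        # if any operand register is tagged as a vector, repeat the *same*
        # scalar operation over consecutive registers
        vl = max(VTAG.get(r, 1) for r in (rd, rs1, rs2))
        for i in range(vl):
            REGS[rd + i] = SCALAR_OPS[op](REGS[rs1 + i], REGS[rs2 + i])

    # mark three register groups as 8-long vectors: a single "andn"
    # instruction now performs 8 element operations
    for base in (32, 40, 48):
        VTAG[base] = 8
    execute("andn", 32, 40, 48)
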
that would include, for example, creating a fixed-function 3D "FP to
ARGB" custom instruction. a custom extension with special 3D
pipelines would, with Simple-V, not need to worry about how those
operations would be parallelised.

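for reference, the kind of work such a fixed-function "FP to ARGB"
instruction would do is roughly the following (plain python, with the
usual clamp-and-round convention and ARGB channel ordering chosen
purely for illustration, not taken from any of the designs above):

    # roughly what a "4-tuple FP colour to packed 32-bit ARGB" fixed
    # function does: clamp each float channel to [0, 1], convert it to
    # 8-bit fixed point, and pack.  conventions here are illustrative only.

    def fp_to_argb(a, r, g, b):
        def chan(x):
            x = min(max(x, 0.0), 1.0)        # clamp
            return int(round(x * 255.0))     # 8-bit fixed point
        return (chan(a) << 24) | (chan(r) << 16) | (chan(g) << 8) | chan(b)

    assert fp_to_argb(1.0, 1.0, 0.5, 0.0) == 0xFFFF8000

done once per pixel over the tile, that is the operation videocore-iv
folds into a single instruction and Nyuzi spends several cycles per
pixel on.
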
this is not a new concept: it's borrowed directly from videocore-iv
(which in turn probably borrowed it from somewhere else).
videocore-iv calls it "virtual parallelism". the Vector Unit
*actually* has a 4-wide FPU for certain heavily-used operations such
as ADD, and a ***ONE*** wide FPU for less-used operations such as
RECIPSQRT.

however at the *instruction* level each of those operations,
regardless of whether it is heavily-used or less-used, *appears* to
be 16 parallel operations all at once, as far as the compiler and
assembly writers are concerned. Simple-V just borrows this exact same
concept and lets implementors decide where to deploy it, to best
advantage.

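a tiny python model of "virtual parallelism" (the lane widths and
cycle counting loosely follow the videocore-iv description above; they
are illustrative, not its actual microarchitecture): the programmer
always sees a 16-element operation, and the hardware retires it over
however many cycles its execution-unit width allows.

    # "virtual parallelism": the ISA-visible vector is always 16 elements,
    # but the hardware may have a 4-wide unit for common ops (ADD) and a
    # 1-wide unit for rare ops (RECIPSQRT), retiring the same 16-element
    # instruction over a different number of cycles.  illustrative only.

    VIRTUAL_LEN = 16

    LANE_WIDTH = {"fadd": 4, "recipsqrt": 1}   # hardware choice, invisible to software

    OPS = {
        "fadd":      lambda a, b: a + b,
        "recipsqrt": lambda a, _: 1.0 / (a ** 0.5),
    }

    def issue(op, dst, src1, src2):
        # one ISA-level instruction over 16 "virtual" elements
        width = LANE_WIDTH[op]
        cycles = 0
        for base in range(0, VIRTUAL_LEN, width):
            for i in range(base, base + width):       # one hardware pass
                dst[i] = OPS[op](src1[i], src2[i])
            cycles += 1
        return cycles

    a = [float(i + 1) for i in range(VIRTUAL_LEN)]
    b = [2.0] * VIRTUAL_LEN
    out = [0.0] * VIRTUAL_LEN
    assert issue("fadd", out, a, b) == 4         # 16 elements / 4-wide = 4 cycles
    assert issue("recipsqrt", out, a, b) == 16   # 16 elements / 1-wide = 16 cycles
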
> 2. If it’s a good idea to implement, are there any projects currently
> working on it?

i haven't been able to find any: if you do, please do let me know; i
would like to speak to them and find out how much time and money they
would need to complete the work.

> If the answer is yes, would you mind mentioning the project’s name and
> website?
>
> If the answer is no, are there any special reasons why nobody has
> implemented it yet?

it's damn hard, it requires a *lot* of resources, and, if the idea is
to make it entirely libre-licensed and royalty-free, there is an extra
step required which a proprietary GPU company would not normally take,
and that is to follow the example of the BBC when they created their
own Video CODEC, Dirac [5].

what the BBC did there was create the algorithm *exclusively* from
prior art and expired patents... they applied for their own patents...
and then *DELIBERATELY* let them lapse. the way that the patent
system works, those patents will *still be published*: there will be
an official priority filing date in the patent records, along with the
full text and details of the patents.

this strategy - where you MUST actually pay for the first filing,
otherwise the records are REMOVED and never published - acts as a way
of preventing and prohibiting unscrupulous people from grabbing the
whitepapers and source code and trying to patent details of the
algorithm themselves, just like Google did very recently [6].

* [0] https://www.youtube.com/watch?v=7z6xjIRXcp4
* [1] https://github.com/jbush001/NyuziProcessor/wiki
* [2] https://github.com/asicguy/gplgpu
* [3] https://github.com/jbush001/ChiselGPU/
* [4] http://miaowgpu.org/
* [5] https://en.wikipedia.org/wiki/Dirac_(video_compression_format)
* [6] https://yro.slashdot.org/story/18/06/11/2159218/inventor-says-google-is-patenting-his-public-domain-work
* [7] https://jbush001.github.io/2016/07/24/gplgpu-walkthrough.html
* [8] https://github.com/hermanhermitage/videocoreiv/wiki/VideoCore-IV-Programmers-Manual
* [9] libre-riscv.org/simple_v_extension/
* [10] https://jbush001.github.io/2016/03/02/videocore-qpu-pipeline.html
* [11] https://jbush001.github.io/2016/02/27/life-of-triangle.html
* OpenPiton https://openpiton-blog.princeton.edu/2018/11/announcing-openpiton-with-ariane/
