# Requirements
## GPU 3D capabilities

Based on GC800 the following would be acceptable performance (as would Mali-400); a rough sanity check follows the list:

* 35 million triangles/sec
* 325 million pixels/sec
* 6 GFLOPS
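
As that sanity check (purely illustrative: the display mode and the overdraw factor below are assumptions, not requirements), the 325 million pixels/sec figure can be compared against a plausible render target:

    #include <stdio.h>

    /* Back-of-the-envelope check: fraction of a 325 Mpixel/s fill-rate
     * budget consumed by a hypothetical 1280x720 @ 60Hz target with an
     * assumed 2.5x average overdraw.  Only the 325 Mpixel/s figure comes
     * from the requirements above; everything else is illustrative.
     */
    int main(void)
    {
        const double budget   = 325e6;                        /* pixels/sec */
        const double required = 1280.0 * 720.0 * 60.0 * 2.5;  /* pixels/sec */

        printf("need %.0f Mpixel/s of %.0f Mpixel/s budget (%.0f%%)\n",
               required / 1e6, budget / 1e6, 100.0 * required / budget);
        return 0;
    }

On those assumptions a 720p60 target needs roughly 138 Mpixels/sec, i.e. under half of the 325 Mpixels/sec budget.
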
## GPU size and power

* Basically the power requirement should be at or below around 1 watt in 40nm. Beyond 1 watt it becomes... difficult.
* Size is not particularly critical as such, but should not be insane.

Based on GC800 the following would be acceptable area in 40nm:

* 1.9mm^2 synthesis area
* 2.5mm^2 silicon area

So here's a table comparing embedded GPU cores:

<https://www.cnx-software.com/2013/01/19/gpus-comparison-arm-mali-vs-vivante-gcxxx-vs-powervr-sgx-vs-nvidia-geforce-ulp/>

Silicon area corresponds *ROUGHLY* with power usage, but PLEASE do
not take that as absolute, because if you read Jeff's Nyuzi 2016 paper
you'll see that getting data through the L1/L2 cache barrier is by far
the biggest consumer of power.

Note, lower down, that the numbers for Mali-400 are for the *4*-core
version - Mali-400 (MP4) - whereas Jeff and I compared against Mali-400
SINGLE CORE and discovered that Nyuzi, if four parallel Nyuzi cores were
put together, would reach only 25% of Mali-400's performance (in about
the same silicon area).

## Other

* The deadline is about 12-18 months.
* It is highly recommended to use Gallium3D for the software stack.
* Software must be licensed under LGPLv2+ or BSD/MIT.
* Hardware (RTL) must be licensed under BSD or MIT with no "NON-COMMERCIAL" CLAUSES.
* Any proposals will be competing against the Vivante GC800 (using the Etnaviv driver).
* The GPU is integrated (like the Mali-400), so all the GPU needs to do is write to an area of memory (the framebuffer, or a region of it). The SoC - which in this case has a RISC-V core and peripherals such as the LCD controller - will take care of the rest.
* In this architecture the GPU, the CPU and the peripherals are all on the same AXI4 shared-memory bus, and they all have access to the same shared DDR3/DDR4 RAM. As a result the GPU will use AXI4 to write directly to the framebuffer, and the rest will be handled by the SoC (see the sketch after this list).
* The job must be done by a team that shows sufficient expertise to reduce the risk.
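
To illustrate the integrated-GPU and shared-AXI4 points above, here is a minimal sketch of what "writing to the framebuffer" looks like from the GPU side. The base address, resolution and pixel format are hypothetical placeholders (in practice they come from the SoC memory map and the LCD controller configuration), not values specified in this document:

    #include <stdint.h>

    /* Hypothetical framebuffer parameters -- placeholders, not part of the
     * requirements.  On the shared AXI4/DDR bus the GPU's "output" is just
     * ordinary stores into the region that the LCD controller scans out.
     */
    #define FB_BASE   0x80000000UL   /* assumed DDR address of the framebuffer */
    #define FB_WIDTH  1280u
    #define FB_HEIGHT 720u

    static inline void put_pixel(uint32_t x, uint32_t y, uint32_t xrgb8888)
    {
        volatile uint32_t *fb = (volatile uint32_t *)FB_BASE;
        if (x < FB_WIDTH && y < FB_HEIGHT)
            fb[y * FB_WIDTH + x] = xrgb8888;
    }

    /* Example: clear the visible region to a solid colour. */
    static void clear_screen(uint32_t colour)
    {
        for (uint32_t y = 0; y < FB_HEIGHT; y++)
            for (uint32_t x = 0; x < FB_WIDTH; x++)
                put_pixel(x, y, colour);
    }

No dedicated display path is needed on the GPU side: once the pixels are in that memory region, the SoC's LCD controller does the rest.
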
## Notes

* The deadline is really tight. If an FPGA (or simulation) plus the basics of the software driver are at least prototyped by then, it *might* be ok.
* If using Nyuzi as the basis, it *might* be possible to begin the software port in parallel, because Jeff went to the trouble of writing a cycle-accurate simulation.
* I *suspect* it will be less work to use Gallium3D than, for example, to write an entire OpenGL stack from scratch.
* A *demo* should run on an FPGA as an initial milestone. The FPGA is not a priority for assessment, but it would be *nice* if it could fit into a ZC706.
* Also, if there is parallel hardware, it would obviously be nice to be able to demonstrate parallelism to the maximum extent possible. But again, being reasonable: if the GPU is so big that only a single core can fit into even a large FPGA, then for an initial demo that would be fine.
* Note that no other licenses are acceptable. GPLv2+ is out.

## Design decisions and considerations

Whilst Nyuzi has a big advantage in that it has simulations, an LLVM
port and so on, if utilised for this particular RISC-V chip it would
mean needing to write a "memory shim" between the general-purpose Nyuzi
core and the main processor, i.e. all the shader info, state etc. needs
synchronisation hardware (and software).

That could significantly complicate the design, especially the software.

Whilst I *recommended* Gallium3D, there is actually another possible
approach: a RISC-V multi-core design which accelerates *software*
rendering... including potentially utilising the fact that Gallium3D
has a *software* (LLVM) renderer:

<https://mesa3d.org/llvmpipe.html>

The general aim of this approach is *not* to have the complexity of
transferring significant amounts of data structures to and from disparate
cores (one Nyuzi, one RISC-V) but to STAY WITHIN THE RISC-V ARCHITECTURE
and simply compile Mesa3D and gallium3d-llvm for RISC-V.
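
To make that concrete, the kind of inner loop the RISC-V cores would spend their time in under software rendering looks roughly like the sketch below. It is illustrative only (not code taken from Mesa3D or llvmpipe), but it shows why the workload maps so naturally onto multiple cores and wide vectors: every pixel in a span is independent.

    #include <stdint.h>

    /* Illustrative software-rasteriser inner loop (not taken from
     * Mesa3D/llvmpipe): fill one horizontal span of a triangle with a
     * linearly interpolated colour.  Each pixel is independent, so the
     * loop vectorises, and separate spans can go to separate cores.
     */
    void fill_span(uint32_t *row, int x0, int x1,
                   float r0, float g0, float b0,
                   float r1, float g1, float b1)
    {
        for (int x = x0; x < x1; x++) {
            float t = (float)(x - x0) / (float)(x1 - x0);
            uint32_t r = (uint32_t)((r0 + t * (r1 - r0)) * 255.0f);
            uint32_t g = (uint32_t)((g0 + t * (g1 - g0)) * 255.0f);
            uint32_t b = (uint32_t)((b0 + t * (b1 - b0)) * 255.0f);
            row[x] = (r << 16) | (g << 8) | b;   /* XRGB8888 */
        }
    }
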
So, if considering basing the design on RISC-V, that means turning RISC-V
into a vector processor. Now, whilst Hwacha has been located (finally),
it is a design that is specifically targeted at supercomputers. I have
been taking an alternative approach to vectorisation which is more about
*parallelisation* than it is about *vectorisation*.

It would be great for Simple-V to be given consideration for
implementation, as the abstraction "API" of Simple-V would greatly
simplify the process of adding custom features such as fixed-function
pixel-conversion and rasterisation instructions (if those are chosen
to be added), and so on. Bear in mind that a high-speed clock rate is
NOT a good idea for GPUs (power being a square law): multi-core
parallelism and longer SIMD/vectors are much better options to consider
instead.
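
As an example of the kind of fixed-function pixel conversion mentioned above, here is the scalar loop that such an instruction (or a Simple-V vectorised loop) would replace. The function and the choice of formats are illustrative assumptions, not a proposed instruction definition; the point is that the work is a large number of independent per-pixel operations, which rewards parallelism rather than clock speed.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative ARGB8888 -> RGBA8888 conversion.  A fixed-function
     * instruction, or a Simple-V style vectorised loop, would retire many
     * of these independent per-pixel shuffles per cycle instead of one.
     */
    void argb_to_rgba(uint32_t *dst, const uint32_t *src, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            uint32_t p = src[i];
            uint32_t a = p >> 24;    /* alpha from the top byte */
            dst[i] = (p << 8) | a;   /* RGB up one byte, alpha to the bottom */
        }
    }
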
The PDF/slides on Simple-V are here:

<http://hands.com/~lkcl/simple_v_chennai_2018.pdf>

and the assessment, design and implementation are being done here:

<http://libre-riscv.org/simple_v_extension/>

## Q & A

> Q:
>
> Do you need a team with good CVs? What about if the
> team shows you an acceptable FPGA prototype? I’m talking about a team
> of students who do not have big industrial CVs but know how to
> handle this job (just like RocketChip or MIAOW, etc.).

A:

That would be fantastic, as it would demonstrate not only competence
but also commitment, and it would take the "risk" of being "unknown"
out of the equation entirely. So that works perfectly for me :) .