# Requirements test

## GPU 3D capabilities

Based on GC800, the following would be acceptable performance (as would Mali-400):

* 35 million triangles/sec
* 325 million pixels/sec
* 6 GFLOPS

## GPU size and power

* The power requirement should basically be at or below around 1 watt in 40nm. Beyond 1 watt it becomes... difficult.
* Size is not particularly critical as such, but should not be insane. Based on GC800, the following would be acceptable area in 40nm:
  * 1.9mm^2 synthesis area
  * 2.5mm^2 silicon area

So here's a table showing embedded cores:

Silicon area corresponds *ROUGHLY* with power usage, but PLEASE do not take that as absolute, because if you read Jeff's 2016 Nyuzi paper you'll see that getting data through the L1/L2 cache barrier is by far the biggest consumer of power.

Note that, lower down, the numbers for Mali-400 are for the *4*-core version - Mali-400 (MP4) - whereas Jeff and I compared against a SINGLE Mali-400 core and found that four parallel Nyuzi cores put together would reach only 25% of Mali-400's performance (in about the same silicon area).

## Other

* The deadline is about 12-18 months.
* It is highly recommended to use Gallium3D for the software stack.
* Software must be licensed under LGPLv2+ or BSD/MIT.
* Hardware (RTL) must be licensed under BSD or MIT with no "NON-COMMERCIAL" CLAUSES.
* Any proposals will be competing against the Vivante GC800 (using the Etnaviv driver).
* The GPU is integrated (like the Mali-400), so all the GPU needs to do is write to an area of memory (the framebuffer, or a region of it). The SoC - which in this case has a RISC-V core and peripherals such as the LCD controller - will take care of the rest.
* In this architecture the GPU, the CPU and the peripherals are all on the same AXI4 shared-memory bus, and all have access to the same shared DDR3/DDR4 RAM. As a result the GPU will use AXI4 to write directly to the framebuffer, and the rest will be handled by the SoC.
* The job must be done by a team that shows sufficient expertise to reduce the risk.

## Notes

* The deadline is really tight. If an FPGA (or simulation) plus the basics of the software driver are at least prototyped by then, it *might* be ok.
* If using Nyuzi as the basis, it *might* be possible to begin the software port in parallel, because Jeff went to the trouble of writing a cycle-accurate simulation.
* I *suspect* it will be less work to use Gallium3D than, for example, writing an entire OpenGL stack from scratch.
* A *demo* should run on an FPGA initially. The FPGA is not a priority for assessment, but it would be *nice* if it could fit into a ZC706.
* Also, if there is parallel hardware, it would obviously be nice to demonstrate parallelism to the maximum extent possible. But, being reasonable: if the GPU is so big that only a single core can fit into even a large FPGA, then that would be fine for an initial demo.
* Note that no other licenses are acceptable. GPLv2+ is out.

## Design decisions and considerations

Whilst Nyuzi has a big advantage in that it has simulations, an LLVM port and so on, using it for this particular RISC-V chip would mean writing a "memory shim" between the general-purpose Nyuzi core and the main processor, i.e. all the shader info, state etc. needs synchronisation hardware (and software), as sketched below. That could significantly complicate the design, especially the software.
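As a rough illustration of what that shim implies on the software side, here is a minimal, hedged sketch of a command ring in shared DDR plus a doorbell register, which is one common way a host core hands shader state and draw commands to a separate GPU core. All names and addresses (`GPU_DOORBELL`, `struct gpu_cmd`, the opcodes) are assumptions for illustration, not part of Nyuzi or of any existing driver.

```c
/* Hypothetical "memory shim" sketch: a command ring in shared DDR plus a
 * doorbell MMIO register.  Everything here (addresses, struct layout,
 * opcodes) is illustrative only. */
#include <stdint.h>

#define GPU_DOORBELL ((volatile uint32_t *)0x43C00000u) /* assumed MMIO address */
#define RING_ENTRIES 64

struct gpu_cmd {
    uint32_t opcode;       /* e.g. SET_SHADER_STATE, DRAW_TRIANGLES (hypothetical) */
    uint32_t payload_addr; /* physical address of state/vertex data in DDR */
    uint32_t payload_len;  /* payload size in bytes */
    uint32_t fence_value;  /* written back by the GPU core on completion */
};

struct cmd_ring {
    volatile uint32_t head;            /* written by the host CPU */
    volatile uint32_t tail;            /* written by the GPU core */
    struct gpu_cmd slots[RING_ENTRIES];
};

/* Copy one command into the ring, make it visible to the GPU core, then
 * ring the doorbell.  Cache maintenance (or an uncached mapping) is
 * precisely the part that makes such a shim non-trivial in practice. */
static int submit(struct cmd_ring *ring, const struct gpu_cmd *cmd)
{
    uint32_t next = (ring->head + 1) % RING_ENTRIES;
    if (next == ring->tail)
        return -1;                     /* ring full: GPU has fallen behind */

    ring->slots[ring->head] = *cmd;    /* state crosses the CPU/GPU boundary here */
    __sync_synchronize();              /* order the slot write before the head update */
    ring->head = next;
    *GPU_DOORBELL = 1;                 /* wake the GPU core */
    return 0;
}
```

Every piece of shader and render state has to flow through something like this, in both directions, which is why the shim adds both hardware and software complexity.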
Whilst I *recommended* Gallium3D, there is actually another possible approach: a RISC-V multi-core design which accelerates *software* rendering, potentially utilising the fact that Gallium3D has a *software* (LLVM) renderer. The general aim of this approach is *not* to have the complexity of transferring significant amounts of data structures to and from disparate cores (one Nyuzi, one RISC-V), but to STAY WITHIN THE RISC-V ARCHITECTURE and simply compile Mesa3D and gallium3d-llvm for RISC-V. A sketch of how such software rendering parallelises across cores appears after the Q&A below.

If the design is to be based on RISC-V, that means turning RISC-V into a vector processor. Now, whilst Hwacha has (finally) been located, it is a design specifically targeted at supercomputers. I have been taking an alternative approach to vectorisation which is more about *parallelisation* than *vectorisation*. It would be great for Simple-V to be given consideration for implementation, as the abstraction "API" of Simple-V would greatly simplify adding custom features such as fixed-function pixel-conversion and rasterisation instructions (if those are chosen to be added), and so on.

Bear in mind that a high clock rate is NOT a good idea for GPUs (dynamic power rises roughly with the square of the supply voltage needed to sustain a higher clock); multi-core parallelism and longer SIMD/vectors are much better options to consider instead.

The PDF/slides on Simple-V are here: and the assessment, design and implementation is being done here:

## Q & A

> Q:
>
> Do you need a team with good CVs? What about if the
> team shows you an acceptable FPGA prototype? I’m talking about a team
> of students which do not have big industrial CVs but they know how to
> handle this job (just like RocketChip or MIAOW or etc…).

A: That would be fantastic, as it would demonstrate not only competence but also commitment, and it would take the "risk" of being "unknown" out entirely. So that works perfectly for me :).
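As promised above, here is a minimal, hedged sketch of why the software-rendering approach maps naturally onto a multi-core RISC-V: the framebuffer is split into tiles and each core rasterises its own subset of tiles, so the only shared data is the read-only scene and the framebuffer itself in DDR. This is illustrative only, not llvmpipe's actual code; the tile size, thread count and helper names (`render_tile`, `NUM_CORES`, the framebuffer dimensions) are assumptions.

```c
/* Illustrative tile-parallel software renderer: each core walks the tile
 * grid with a stride of NUM_CORES, writing directly into the shared
 * framebuffer that the LCD controller scans out.  Not llvmpipe code. */
#include <stdint.h>
#include <pthread.h>

#define FB_WIDTH  800
#define FB_HEIGHT 480
#define TILE      64           /* 64x64 pixel tiles (assumed size) */
#define NUM_CORES 4            /* assumed core count */

struct job {
    uint32_t *framebuffer;     /* shared DDR, also read by the LCD controller */
    int core_id;
};

/* Stand-in for the per-tile rasterisation/shading that LLVM-generated
 * code would perform in a real gallium3d-llvm build: here it just clears. */
static void render_tile(uint32_t *fb, int tx, int ty)
{
    for (int y = ty; y < ty + TILE && y < FB_HEIGHT; y++)
        for (int x = tx; x < tx + TILE && x < FB_WIDTH; x++)
            fb[y * FB_WIDTH + x] = 0xFF202020u;
}

/* No locking needed: tiles never overlap, so cores never write the same
 * pixels, and the scene data they read is immutable during the frame. */
static void *worker(void *arg)
{
    struct job *j = arg;
    int tiles_x = (FB_WIDTH + TILE - 1) / TILE;
    int tiles_y = (FB_HEIGHT + TILE - 1) / TILE;
    int total   = tiles_x * tiles_y;

    for (int t = j->core_id; t < total; t += NUM_CORES)
        render_tile(j->framebuffer, (t % tiles_x) * TILE, (t / tiles_x) * TILE);
    return 0;
}

static void render_frame(uint32_t *fb)
{
    pthread_t th[NUM_CORES];
    struct job jobs[NUM_CORES];

    for (int i = 0; i < NUM_CORES; i++) {
        jobs[i] = (struct job){ .framebuffer = fb, .core_id = i };
        pthread_create(&th[i], NULL, worker, &jobs[i]);
    }
    for (int i = 0; i < NUM_CORES; i++)
        pthread_join(th[i], NULL);  /* frame complete: ready for scan-out */
}
```

The point of the sketch is the structure, not the shading: parallelism comes from more cores (and, with Simple-V, longer vectors inside `render_tile`), not from a higher clock.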