# Simple-V (Parallelism Extension Proposal) Specification

* Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
* Status: DRAFTv0.5
* Last edited: 19 Jun 2019
* Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]

With thanks to:

* Allen Baum
* Bruce Hoult
* comp.arch
* Jacob Bachmeyer
* Guy Lemurieux
* Jacob Lifshay
* Terje Mathisen
* The RISC-V Founders, without whom this all would not be possible.

[[!toc ]]

# Summary and Background: Rationale

Simple-V is a uniform parallelism API for RISC-V hardware that has several unplanned side-effects, including code-size reduction, expansion of HINT space and more. The reason for creating it is to provide a manageable way to turn a pre-existing design into a parallel one, in a step-by-step incremental fashion, allowing the implementor to focus on adding hardware where it is needed and necessary. The primary target is mobile-class 3D GPUs and VPUs, with the secondary goals being to reduce executable size and reduce context-switch latency.

Critically: **No new instructions are added**. The parallelism (if any is implemented) is implicitly added by tagging *standard* scalar registers for redirection. When such a tagged register is used in any instruction, it indicates that the PC shall **not** be incremented; instead a loop is activated where *multiple* instructions are issued to the pipeline (as determined by a length CSR), with contiguously incrementing register numbers starting from the tagged register. Only when the last "element" has been reached is the PC permitted to move on. Thus Simple-V effectively sits (slots) *in between* the instruction decode phase and the ALU(s).

The barrier to entry with SV is therefore very low. The minimum compliant implementation is software-emulation (traps), requiring only the CSRs and CSR tables, and that an exception be thrown if an instruction's registers are detected to have been tagged.
The looping that would otherwise be done in hardware is thus carried out in software instead. Whilst much slower, it is "compliant" with the SV specification, and may be well-suited for implementation in RV32E and in situations where the implementor wishes to focus on certain aspects of SV without pouring unnecessary time and resources into silicon, whilst also conforming strictly with the API. A good area to punt to software would be the polymorphic element-width capability, for example.

Hardware Parallelism, if any, is therefore added at the implementor's discretion, to turn what would otherwise be a sequential loop into a parallel one.

To emphasise that clearly: Simple-V (SV) is *not*:

* A SIMD system
* A SIMT system
* A Vectorisation Microarchitecture
* A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
* A supercomputer extension

SV does **not** tell implementors how, or even if, they should implement parallelism: it is a hardware "API" (Application Programming Interface) that, if implemented, presents a uniform and consistent way to *express* parallelism, at the same time leaving the choice of if, how, how much, when and whether to parallelise operations **entirely to the implementor**.

# Basic Operation

The principle of SV is as follows:

* CSRs indicating which registers are "tagged" as "vectorised" (potentially parallel, depending on the microarchitecture) must be set up
* A "Vector Length" CSR is set, indicating the span of any future "parallel" operations.
* A **scalar** operation, just after the decode phase and before the execution phase, checks the CSR register tables to see if any of its registers have been marked as "vectorised"
* If so, a hardware "macro-unrolling loop" is activated, of length VL, that effectively issues **multiple** identical instructions using contiguous sequentially-incrementing registers.
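The macro-unrolling loop just described can be sketched in software. Below is a minimal Python model; all names here (`int_vec`, `regs`, `VL`) are illustrative assumptions for the sketch, not part of the specification:

```python
# Minimal model of SV's decode-phase "macro-unrolling loop".
# A scalar ADD rd, rs1, rs2 is re-issued VL times when any of its
# registers is tagged as a vector. Names are illustrative only.

VL = 4  # Vector Length CSR (assumed already set)

# register tag table: regnum -> tagged-as-vector (all others scalar)
int_vec = {20: True, 12: True}  # x20 and x12 tagged as vectors

regs = [0] * 32
regs[12:16] = [5, 6, 7, 8]  # source vector in x12..x15
regs[11] = 100              # scalar source in x11

def op_add(rd, rs1, rs2):
    """Issue a scalar ADD; loop over VL elements if any register is tagged."""
    for i in range(VL):
        regs[rd + (i if int_vec.get(rd) else 0)] = \
            regs[rs1 + (i if int_vec.get(rs1) else 0)] + \
            regs[rs2 + (i if int_vec.get(rs2) else 0)]
        # a fully scalar operation stops after a single element
        if not (int_vec.get(rd) or int_vec.get(rs1) or int_vec.get(rs2)):
            break

op_add(20, 11, 12)  # vector x20..x23 = scalar x11 + vector x12..x15
print(regs[20:24])
```

Whether the VL element operations of such a loop execute sequentially, in parallel, or via a trap handler is, as the next paragraph emphasises, entirely the implementor's choice.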
**Whether they be executed sequentially or in parallel or a mixture of both, or punted to software-emulation in a trap handler, is entirely up to the implementor**.

In this way an entire scalar algorithm may be vectorised with the minimum of modification to the hardware and to compiler toolchains. There are **no** new opcodes.

# CSRs

For U-Mode there are two CSR key-value stores needed to create lookup tables which are used at the register decode phase.

* A register CSR key-value table (typically 8 32-bit CSRs of 2 16-bit entries each)
* A predication CSR key-value table (again, 8 32-bit CSRs of 2 16-bit entries each)
* Small U-Mode and S-Mode register and predication CSR key-value tables (2 32-bit CSRs of 2x 16-bit entries each).
* An optional "reshaping" CSR key-value table which remaps from a 1D linear shape to 2D or 3D, including full transposition.

There are also four additional CSRs for User-Mode:

* CFG subsets the CSR tables
* MVL (the Maximum Vector Length)
* VL (which has different characteristics from standard CSRs)
* STATE (useful for saving and restoring during context switch, and for providing fast transitions)

There are also three additional CSRs for Supervisor-Mode:

* SMVL
* SVL
* SSTATE

And likewise for M-Mode:

* MMVL
* MVL
* MSTATE

Both Supervisor and M-Mode have their own (small) CSR register and predication tables of only 4 entries each.

The access pattern for these groups of CSRs in each mode follows the same pattern as for other CSRs that have M-Mode and S-Mode "mirrors":

* In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
* In S-Mode, accessing and changing of the M-Mode CSRs is identical to changing the S-Mode CSRs. Accessing and changing the U-Mode CSRs is permitted.
* In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs is prohibited.

In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the M-Mode MVL, the M-Mode STATE and so on that influence the processor behaviour. Likewise for S-Mode, and likewise for U-Mode.
This has the interesting benefit of allowing M-Mode (or S-Mode) to be set up, for context-switching to take place, and, on return back to the higher privileged mode, the CSRs of that mode will be exactly as they were. Thus, it becomes possible, for example, to set up CSRs suited best to aiding and assisting low-latency fast context-switching *once and only once*, without the need for re-initialising the CSRs needed to do so.

## CFG

This CSR may be used to switch between subsets of the CSR Register and Predication Tables: it is kept to 5 bits so that a single CSRRWI instruction can be used. A setting of all ones is reserved to indicate that SimpleV is disabled.

| (4..3) | (2...0) |
| ------ | ------- |
| size   | bank    |

Bank is 3 bits in size, and indicates the starting index of the CSR Register and Predication Table entries that are "enabled". Given that each 32-bit CSR table row contains 2 16-bit CAM entries, there are only 8 CSRs to cover in each table, so 3 bits is sufficient.

Size is 2 bits. With the exception of when bank == 7 and size == 3, the number of elements enabled is taken by left-shifting 2 by size:

| size | elements |
| ---- | -------- |
| 0    | 2        |
| 1    | 4        |
| 2    | 8        |
| 3    | 16       |

Given that there are 2 16-bit CAM entries per CSR table row, this may also be viewed as the number of CSR rows to enable, by raising 2 to the power of size.

Examples:

* When bank = 0 and size = 3, SVREGCFG0 through to SVREGCFG7 are enabled, and SVPREDCFG0 through to SVPREDCFG7 are enabled.
* When bank = 1 and size = 3, SVREGCFG1 through to SVREGCFG7 are enabled, and SVPREDCFG1 through to SVPREDCFG7 are enabled.
* When bank = 3 and size = 0, SVREGCFG3 and SVPREDCFG3 are enabled.
* When bank = 3 and size = 1, SVREGCFG3-4 and SVPREDCFG3-4 are enabled.
* When bank = 7 and size = 1, SVREGCFG7 and SVPREDCFG7 are enabled (because there are only 8 32-bit CSRs, there does not exist a SVREGCFG8 or SVPREDCFG8 to enable).
* When bank = 7 and size = 3, SimpleV is entirely disabled.
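As a cross-check of the bank/size encoding, the examples above can be reproduced with a small Python sketch (the function name is assumed for illustration):

```python
# Decode the 5-bit CFG CSR into the range of enabled CSR table rows.
# Field layout per the spec: bits (4..3) = size, bits (2..0) = bank.

def decode_cfg(cfg):
    """Return (first_row, last_row) of enabled SVREGCFG/SVPREDCFG CSRs,
    or None when SimpleV is disabled (all ones: bank==7, size==3)."""
    bank = cfg & 0b111
    size = (cfg >> 3) & 0b11
    if bank == 7 and size == 3:
        return None                 # SimpleV disabled
    rows = 2 ** size                # number of 32-bit CSR rows enabled
    last = min(bank + rows - 1, 7)  # clamp: SVREGCFG8 does not exist
    return (bank, last)

# Examples from the specification:
print(decode_cfg((3 << 3) | 0))   # bank=0, size=3 -> rows 0..7
print(decode_cfg((1 << 3) | 3))   # bank=3, size=1 -> rows 3..4
print(decode_cfg((1 << 3) | 7))   # bank=7, size=1 -> row 7 only
print(decode_cfg(0b11111))        # all ones -> None (disabled)
```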
In this way it is possible to enable and disable SimpleV with a single instruction, and, furthermore, on context-switching the quantity of CSRs to be saved and restored is greatly reduced.

## MAXVECTORLENGTH (MVL)

MAXVECTORLENGTH is the same concept as MVL in RVV, except that it is variable-length and may be dynamically set. MVL is however limited to the regfile bitwidth XLEN (1-32 for RV32, 1-64 for RV64 and so on).

The reason for setting this limit is so that predication registers, when marked as such, may fit into a single register as opposed to fanning out over several registers. This keeps the implementation a little simpler.

The other important factor to note is that the actual MVL is **offset by one**, so that it can fit into only 6 bits (for RV64) and still cover a range up to XLEN bits. So, when setting the MVL CSR to 0, this actually means that MVL==1. When setting the MVL CSR to 3, this actually means that MVL==4, and so on. This is expressed more clearly in the "pseudocode" section, where there are subtle differences between CSRRW and CSRRWI.

## Vector Length (VL)

VSETVL is slightly different from RVV. Like RVV, VL is set to be within the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)

    VL = rd = MIN(vlen, MVL)

where 1 <= MVL <= XLEN.

However, just like MVL, it is important to note that the range for VL has subtle design implications, covered in the "CSR pseudocode" section.

The fixed (specific) setting of VL allows vector LOAD/STORE to be used to switch the entire bank of registers using a single instruction (see Appendix, "Context Switch Example"). The reason for limiting VL to XLEN is down to the fact that predication bits fit into a single register of length XLEN bits.

The second change is that when VSETVL is requested to be stored into x0, it is *ignored* silently (VSETVL x0, x5).

The third and most important change is that, within the limits set by MVL, the value passed in **must** be set in VL (and in the destination register).
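The two rules just described, the offset-by-one MVL encoding and VL = MIN(vlen, MVL), can be captured in a brief Python sketch (function names are illustrative, not normative):

```python
# Model of the MVL "offset by one" CSR encoding and the VSETVL rule
# VL = MIN(vlen, MVL). Names are illustrative, not normative.

XLEN = 64

def mvl_csr_to_value(csrfield):
    """A 6-bit CSR field 0..63 encodes MVL 1..64 (offset by one)."""
    return csrfield + 1

def vsetvl(vlen, mvl):
    """SV requires VL to be set to exactly MIN(vlen, MVL)."""
    assert 1 <= mvl <= XLEN
    return min(vlen, mvl)

print(mvl_csr_to_value(0))    # CSR field 0 means MVL == 1
print(mvl_csr_to_value(3))    # CSR field 3 means MVL == 4
print(vsetvl(100, 64))        # request exceeds MVL: clamped to 64
print(vsetvl(5, 64))          # within limits: VL must be exactly 5
```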
This has implications for the microarchitecture, as VL is required to be set (limits from MVL notwithstanding) to the actual value requested. RVV has the option to set VL to an arbitrary value that suits the conditions and the micro-architecture: SV does *not* permit this.

The reason is so that if SV is to be used for a context-switch or as a substitute for LOAD/STORE-Multiple, the operation can be done with only 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1}, single LD/ST operation). If VL does *not* get set to the register file length when VSETVL is called, then a software loop would be needed. To avoid this need, VL *must* be set to exactly what is requested (limits notwithstanding).

Therefore, in turn, unlike RVV, implementors *must* provide pseudo-parallelism (using sequential loops in hardware) if actual hardware-parallelism in the ALUs is not deployed. A hybrid is also permitted (as used in Broadcom's VideoCore-IV); however this must be *entirely* transparent to the ISA.

The fourth change is that VSETVL is implemented as a CSR, where the behaviour of CSRRW (and CSRRWI) must be changed to specifically store the *new* value in the destination register, **not** the old value. Where context-load/save is to be implemented in the usual fashion by using a single CSRRW instruction to obtain the old value, the *secondary* CSR must be used (SVSTATE). This CSR behaves exactly as standard CSRs, and contains more than just VL.

One interesting side-effect of using CSRRWI to set VL is that this may be done with a single instruction, useful particularly for a context-load/save. There are however limitations: CSRRWI's immediate is limited to 0-31 (representing VL=1-32).

Note that when VL is set to 1, all parallel operations cease: the hardware loop is reduced to a single element: scalar operations.

## STATE

This is a standard CSR that contains sufficient information for a full context save/restore.
It contains (and permits setting of) MVL, VL, CFG, the destination element offset of the current parallel instruction being executed, and, for twin-predication, the source element offset as well. Interestingly it may hypothetically also be used to make the immediately-following instruction skip a certain number of elements; however the recommended method to do this is predication, or using the offset mode of the REMAP CSRs.

Setting destoffs and srcoffs is realistically intended for saving state, so that exceptions (page faults in particular) may be serviced and the hardware-loop that was being executed at the time of the trap, from user-mode (or Supervisor-mode), may be returned to and continued from where it left off. The reason why this works is because setting User-Mode STATE will not change (not be used in) M-Mode or S-Mode (and is entirely why M-Mode and S-Mode have their own STATE CSRs).

The format of the STATE CSR is as follows:

| (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5...0) |
| -------- | -------- | -------- | -------- | ------- | ------- |
| size     | bank     | destoffs | srcoffs  | vl      | maxvl   |

When setting this CSR, the following characteristics will be enforced:

* **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
* **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **srcoffs** will be truncated to be within the range 0 to VL-1
* **destoffs** will be truncated to be within the range 0 to VL-1

## MVL, VL and CSR Pseudocode

The pseudo-code for get and set of VL and MVL is as follows:

    set_mvl_csr(value, rd):
        regs[rd] = MVL
        MVL = MIN(value, MVL)

    get_mvl_csr(rd):
        regs[rd] = MVL

    set_vl_csr(value, rd):
        VL = MIN(value, MVL)
        regs[rd] = VL # yes, returning the new value, NOT the old CSR

    get_vl_csr(rd):
        regs[rd] = VL

Note that where setting MVL behaves as a normal CSR, unlike standard CSR behaviour, setting VL will return the **new** value of VL, **not** the old one.
For CSRRWI, the range of the immediate is restricted to 5 bits. In order to maximise the effectiveness, an immediate of 0 is used to set VL=1, an immediate of 1 is used to set VL=2 and so on:

    CSRRWI_Set_MVL(value):
        set_mvl_csr(value+1, x0)

    CSRRWI_Set_VL(value):
        set_vl_csr(value+1, x0)

However for CSRRW the following pseudocode is used for MVL and VL, where setting the value to zero will cause an exception to be raised. The reason is that if VL or MVL are set to zero, the STATE CSR is not capable of returning that value.

    CSRRW_Set_MVL(rs1, rd):
        value = regs[rs1]
        if value == 0:
            raise Exception
        set_mvl_csr(value, rd)

    CSRRW_Set_VL(rs1, rd):
        value = regs[rs1]
        if value == 0:
            raise Exception
        set_vl_csr(value, rd)

In this way, when CSRRW is utilised with a loop variable, the value that goes into VL (and into the destination register) may be used in an instruction-minimal fashion:

    CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
    CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
    CSRRWI MVL, 3          # sets MVL == **4** (not 3)
    j zerotest             # in case loop counter a0 already 0
    loop:
    CSRRW VL, t0, a0       # vl = t0 = min(mvl, a0)
    ld a3, a1              # load 4 registers a3-6 from x
    slli t1, t0, 3         # t1 = vl * 8 (in bytes)
    ld a7, a2              # load 4 registers a7-10 from y
    add a1, a1, t1         # increment pointer to x by vl*8
    fmadd a7, a3, fa0, a7  # v1 += v0 * fa0 (y = a * x + y)
    sub a0, a0, t0         # n -= vl (t0)
    st a7, a2              # store 4 registers a7-10 to y
    add a2, a2, t1         # increment pointer to y by vl*8
    zerotest:
    bnez a0, loop          # repeat if n != 0

With the STATE CSR, just like with CSRRWI, in order to maximise the utilisation of the limited bitspace, "000000" in binary represents VL==1, "000001" represents VL==2 and so on (likewise for MVL):

    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd)
        MVL = set_mvl_csr(value[11:6]+1)
        VL = set_vl_csr(value[5:0]+1)
        CFG = value[28:24]>>24
        destoffs = value[23:18]>>18
        srcoffs = value[17:12]>>12

    get_state_csr(rd):
        regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
                   (destoffs)<<18 | (CFG)<<24
        return regs[rd]

In both cases, whilst CSR reads of VL and MVL return the exact values of VL and MVL respectively, reading and writing the STATE CSR returns those values **minus one**. This is absolutely critical to implement if the STATE CSR is to be used for fast context-switching.

## Register CSR key-value (CAM) table

The purpose of the Register CSR table is three-fold:

* To mark integer and floating-point registers as requiring "redirection" if ever used as a source or destination in any given operation. This involves a level of indirection through a 5-to-7-bit lookup table, such that **unmodified** operands with 5 bits (3 for Compressed) may access up to **128** registers.
* To indicate whether, after redirection through the lookup table, the register is a vector (or remains a scalar).
* To over-ride the implicit or explicit bitwidth that the operation would normally give the register.

16 bit format:

| RegCAM | | 15      | (14..8)  | 7   | (6..5) | (4..0) |
| ------ | | -       | -        | -   | ------ | ------ |
| 0      | | isvec0  | regidx0  | i/f | vew0   | regkey |
| 1      | | isvec1  | regidx1  | i/f | vew1   | regkey |
| ..     | | isvec.. | regidx.. | i/f | vew..  | regkey |
| 15     | | isvec15 | regidx15 | i/f | vew15  | regkey |

8 bit format:

| RegCAM | | 7   | (6..5) | (4..0) |
| ------ | | -   | ------ | ------ |
| 0      | | i/f | vew0   | regnum |

i/f is set to "1" to indicate that the redirection/tag entry is to be applied to integer registers; 0 indicates that it is relevant to floating-point registers.

The 8 bit format is used for a much more compact expression. "isvec" is implicit and, as in [[sv-prefix-proposal]], the target vector is "regnum<<2", implicitly. Contrast this with the 16-bit format where the target vector is *explicitly* named in bits 8 to 14, and bit 15 may optionally set "scalar" mode.
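The redirection mechanism can be illustrated with a short Python sketch. Here a 5-bit opcode register field is looked up and possibly redirected to one of 128 real registers; `int_vec` and `redirect` are names assumed purely for illustration:

```python
# Model of the Register CSR table's 5-to-7-bit redirection: a 5-bit
# register number in the opcode is looked up and may be redirected to
# one of 128 "real" registers, and tagged as a vector.

int_vec = {  # regkey (5-bit opcode field) -> (regidx (7-bit), isvec)
    16: (96, True),   # x16 redirected to real register 96, a vector
    17: (17, False),  # x17 redirected to itself, still a scalar
}

def redirect(regnum):
    """Return (real_register, isvector) for an opcode register field."""
    if regnum in int_vec:
        return int_vec[regnum]
    return (regnum, False)  # untagged: plain scalar, no redirection

print(redirect(16))  # tagged, redirected and vectorised
print(redirect(5))   # untagged: behaves as a standard scalar register
```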
vew has the following meanings, indicating that the instruction's operand size is "over-ridden" in a polymorphic fashion:

| vew | bitwidth            |
| --- | ------------------- |
| 00  | default (XLEN/FLEN) |
| 01  | 8 bit               |
| 10  | 16 bit              |
| 11  | 32 bit              |

As the above table is a CAM (key-value store) it may be appropriate (faster, implementation-wise) to expand it as follows:

    struct vectorised fp_vec[32], int_vec[32];

    for (i = 0; i < 16; i++) // 16 CSRs?
       tb = int_vec if CSRvec[i].type == 0 else fp_vec
       idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
       tb[idx].elwidth  = CSRvec[i].elwidth
       tb[idx].regidx   = CSRvec[i].regidx   // indirection
       tb[idx].isvector = CSRvec[i].isvector // 0=scalar
       tb[idx].packed   = CSRvec[i].packed   // SIMD or not

The actual size of the CSR Register table depends on the platform and on whether other Extensions are present (RV64G, RV32E, etc.). For details see the "Subsets" section.

There are two CSRs (per privilege level) for adding entries to and removing entries from the table, which, conceptually, may be viewed as either a register window (similar to SPARC) or as the "top of a stack".

* SVREGTOP will push or pop entries onto the top of the "stack" (the highest non-zero indexed entry in the table)
* SVREGBOT will push or pop entries from the bottom (the entry always indexed as zero)

In addition, note that CSRRWI behaviour is completely different from CSRRW when writing to these two CSR registers. The CSRRW behaviour: the src register is subdivided into 16-bit chunks, and each non-zero chunk is pushed/popped separately. The CSRRWI behaviour: the immediate indicates the number of entries in the table to be popped.

CSRRWI:

* The immediate indicates how many entries to pop from the CAM table.
* "CSRRWI SVREGTOP, 3" indicates that the top 3 entries are to be zero'd and returned as the CSR return result. The top entry is returned in bits 0-15, the next entry down in bits 16-31, and when XLEN==64, an extra 2 entries are also returned.
* "CSRRWI SVREGBOT, 3" indicates that the bottom 3 entries are to be returned, and the entries with indices above 3 are to be shuffled down. The first entry to be popped off the bottom is returned in bits 0-15, the second entry as bits 16-31 and so on.
* If XLEN==32, only a maximum of 2 entries may be returned (and shuffled). If XLEN==64, only a maximum of 4 entries may be returned.
* If however the destination register is x0 (zero), then the exact number of entries requested will be removed (shuffled down).

CSRRW when src == 0:

* When the src register is all zeros, this is a request to pop one and only one 16-bit element from the table.
* "CSRRW SVREGTOP, 0" will return (and clear) the highest non-zero 16-bit entry in the table
* "CSRRW SVREGBOT, 0" will return (and clear) the zero'th 16-bit entry in the table, and will shuffle down all other entries (if any) by one index.

CSRRW when src != 0:

All other CSRRW behaviours are a "loop", taking 16 bits at a time from the src register. Obviously, for XLEN=32 that can only be up to 2 16-bit entries; however for XLEN=64 it can be up to 4.

* When the src 16-bit chunk is non-zero and there already exists an entry with the exact same "regkey" (bits 0-4), the entry is **updated**. No other modifications are made.
* When the 16-bit chunk is non-zero and there does not exist an entry, the new value will be placed at the end (in the highest non-zero slot), or at the beginning (shuffling up all other entries to make room).
* If there is not enough room, the entry at the opposite end will become part of the CSR return result.
* The process is repeated for the next 16-bit chunk (starting with bits 0-15 and moving next to 16-31 and so on), until the limit of XLEN is reached or a chunk is all-zeros, at which point the looping stops.
* Any 16-bit entries that are pushed out of the stack (from either end) are concatenated in order (first entry pushed out is bits 0-15 of the return result).
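The push behaviour (CSRRW with src != 0, via SVREGTOP) can be sketched in Python. This is a simplified model of the described behaviour under stated assumptions, not a normative reference: the table is a small list of 16-bit entries, zero means "empty", and entries expelled from the opposite end accumulate in the return result:

```python
# Simplified software model of pushing 16-bit entries onto the CAM
# "stack" via SVREGTOP (CSRRW, src != 0). Entries pushed out of the
# opposite end are concatenated into the CSR return result.

def svregtop_write(table, src, xlen=64):
    """table: list of 16-bit entries (0 = empty). Returns CSR result."""
    result, shift = 0, 0
    for chunk_no in range(xlen // 16):
        chunk = (src >> (chunk_no * 16)) & 0xffff
        if chunk == 0:
            break                      # an all-zero chunk stops the loop
        regkey = chunk & 0x1f
        for i, e in enumerate(table):  # update if regkey already present
            if e != 0 and (e & 0x1f) == regkey:
                table[i] = chunk
                break
        else:
            pushed_out = table.pop(0)  # push on top, expel from bottom
            table.append(chunk)
            if pushed_out:             # non-empty expelled entry returned
                result |= pushed_out << shift
                shift += 16
    return result

tbl = [0, 0, 0x8001, 0x8002]           # 4-slot table, two entries occupied
out = svregtop_write(tbl, (0x8004 << 16) | 0x8003)
print(tbl, hex(out))
```

A subsequent push into the now-full table expels the bottom entry, which becomes the CSR return result, matching the "opposite end" rule above.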
What this behaviour basically does is allow the CAM table to effectively act like the top entries of a stack. Entries that get returned from CSRRW SVREGTOP can be *actually* stored on the stack, such that after a function call exits, CSRRWI SVREGTOP may be used to delete the callee's CAM entries, and the caller's entries may then be pushed *back*, using CSRRW SVREGBOT.

Context-switching may be carried out in a loop, where CSRRWI may be called to "pop" values that are tested for being non-zero, and transferred onto the stack with C.SWSP using only around 4-5 instructions. CSRRW may then be used in combination with C.LWSP to get the CAM entries off the stack and back into the CAM table, again with a loop using only around 4-5 instructions.

Contrast this with needing around 6-7 instructions (8-9 without SV on RV64, and 16-17 on RV32) to do a context-switch of fixed-address CSRs: a sequence of fixed-address C.LWSP with fixed offsets plus fixed-address CSRRWs, and that is without testing if any of the entries are zero or not.

## Predication CSR

TODO: update CSR tables, now 7-bit for regidx

The Predication CSR is a key-value store indicating whether, if a given destination register (integer or floating-point) is referred to in an instruction, it is to be predicated. It is particularly important to note that the *actual* register used can be *different* from the one that is in the instruction, due to the redirection through the lookup table.

* regidx is the actual register that, in combination with the i/f flag, if that integer or floating-point register is referred to, results in the lookup table being referenced to find the predication mask to use on the operation in which that (regidx) register has been used
* predidx (in combination with the bank bit in the future) is the *actual* register to be used for the predication mask. Note: in effect predidx is actually a 6-bit register address, as the bank bit is the MSB (and is nominally set to zero for now).
* inv indicates that the predication mask bits are to be inverted prior to use, *without* actually modifying the contents of the register itself.
* zeroing is either 1 or 0, and if set to 1, the operation must place zeros in any element position where the predication mask is set to zero. If zeroing is set to 0, unpredicated elements *must* be left alone. Some microarchitectures may choose to interpret this as skipping the operation entirely. Others which wish to stick more closely to a SIMD architecture may choose instead to interpret unpredicated elements as an internal "copy element" operation (which would be necessary in SIMD microarchitectures that perform register-renaming).
* "packed" indicates if the register is to be interpreted as SIMD, i.e. containing multiple contiguous elements of size equal to "bitwidth". (Note: in earlier drafts this was in the Register CSR table. However after extending to 7 bits there was not enough space. To use "unpredicated" packed SIMD, set the predicate to x0 and set "invert". This has the effect of setting a predicate of all 1s.)

16 bit format:

| PrCSR | (15..11) | 10     | 9     | 8   | (7..1)  | 0        |
| ----- | -------- | ------ | ----- | --- | ------- | -------- |
| 0     | predkey  | zero0  | inv0  | i/f | regidx  | rsrvd    |
| 1     | predkey  | zero1  | inv1  | i/f | regidx  | packed1  |
| ...   | predkey  | .....  | ....  | i/f | ....... | .......  |
| 15    | predkey  | zero15 | inv15 | i/f | regidx  | packed15 |

8 bit format:

| PrCSR | 7     | 6    | 5   | (4..0) |
| ----- | ----- | ---- | --- | ------ |
| 0     | zero0 | inv0 | i/f | regnum |

The 8 bit format is a compact and less expressive variant of the full 16 bit format. Use of the 8 bit format is very different: the predicate register to use is implicit, and numbering begins implicitly from x9. The regnum is still used to "activate" predication.
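The interaction of "inv" and "zeroing" can be made concrete with a small Python sketch (the function and its signature are assumptions made for illustration):

```python
# Model of how "inv" and "zeroing" affect a predicated element-wise
# operation, per the field descriptions above.

def apply_predicated(dest, src, predmask, inv=False, zeroing=False):
    """Return a new dest list: src copied where the predicate bit is set."""
    out = list(dest)
    for i in range(len(dest)):
        bit = (predmask >> i) & 1
        if inv:
            bit ^= 1            # invert mask bits; the register itself
                                # is left unmodified
        if bit:
            out[i] = src[i]     # predicated-in: operation takes place
        elif zeroing:
            out[i] = 0          # zeroing: masked-out elements zeroed
        # else: masked-out elements are left alone
    return out

d = [9, 9, 9, 9]
s = [1, 2, 3, 4]
print(apply_predicated(d, s, 0b0101))                # -> [1, 9, 3, 9]
print(apply_predicated(d, s, 0b0101, zeroing=True))  # -> [1, 0, 3, 0]
print(apply_predicated(d, s, 0b0101, inv=True))      # -> [9, 2, 9, 4]
```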
The 16 bit Predication CSR Table is a key-value store, so implementation-wise it will be faster to turn the table around (maintain topologically equivalent state):

    struct pred {
        bool zero;
        bool inv;
        bool enabled;
        int predidx; // redirection: actual int register to use
    }

    struct pred fp_pred_reg[32];  // 64 in future (bank=1)
    struct pred int_pred_reg[32]; // 64 in future (bank=1)

    for (i = 0; i < 16; i++)
       tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
       idx = CSRpred[i].regidx
       tb[idx].zero    = CSRpred[i].zero
       tb[idx].inv     = CSRpred[i].inv
       tb[idx].predidx = CSRpred[i].predidx
       tb[idx].enabled = true

So when an operation is to be predicated, it is the internal state that is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following pseudo-code for operations is given, where p is the explicit (direct) reference to the predication register to be used:

    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
           (d ? vreg[rd][i] : sreg[rd]) =
            op(s1 ? vreg[rs1][i] : sreg[rs1],
               s2 ? vreg[rs2][i] : sreg[rs2]);

## REMAP CSR

(Note: both the REMAP and SHAPE sections are best read after the rest of the document has been read.)

There is one 32-bit CSR which may be used to indicate which registers, if used in any operation, must be "reshaped" (re-mapped) from a linear form to a 2D or 3D transposed form, or "offset" to permit arbitrary access to elements within a register.

The 32-bit REMAP CSR may reshape up to 3 registers:

| 29..28 | 27..26 | 25..24 | 23 | 22..16  | 15 | 14..8   | 7  | 6..0    |
| ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
| shape2 | shape1 | shape0 | 0  | regidx2 | 0  | regidx1 | 0  | regidx0 |

regidx0-2 refer not to the Register CSR CAM entry but to the underlying *real* register (see regidx, the value) and are consequently 7 bits wide. When set to zero (referring to x0), clearly reshaping x0 is pointless, so a zero value is used to indicate "disabled". shape0-2 refer to one of three SHAPE CSRs. A value of 0x3 is reserved. Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.

It is anticipated that these specialist CSRs will not be very often used.
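A decode of the REMAP CSR fields laid out in the table above can be sketched as follows (the function name is an assumption for illustration):

```python
# Decode the 32-bit REMAP CSR into its three (regidx, shape) pairs.
# Layout per the table: regidx fields at bits 6..0, 14..8, 22..16;
# shape fields at bits 25..24, 27..26, 29..28.

def decode_remap(csr):
    return [
        {"regidx": (csr >> (8 * n)) & 0x7f,
         "shape":  (csr >> (24 + 2 * n)) & 0x3}
        for n in range(3)
    ]

# regidx0=5 with shape0=1, regidx1=10 with shape1=2, regidx2=96 disabled-shape 0
csr = 5 | (10 << 8) | (96 << 16) | (1 << 24) | (2 << 26)
print(decode_remap(csr))
```

Recall that a regidx of zero means the corresponding remap entry is disabled, since reshaping x0 is pointless.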
Unlike the CSR Register and Predication tables, the REMAP CSRs use the full 7-bit regidx so that they can be set once and left alone, whilst the CSR Register entries pointing to them are disabled, instead.

## SHAPE 1D/2D/3D vector-matrix remapping CSRs

(Note: both the REMAP and SHAPE sections are best read after the rest of the document has been read.)

There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32 bits each, which have the same format. When each SHAPE CSR is set entirely to zeros, remapping is disabled: the register's elements are a linear (1D) vector.

| 26..24  | 23      | 22..16 | 15      | 14..8  | 7       | 6..0   |
| ------- | ------- | ------ | ------- | ------ | ------- | ------ |
| permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |

offs is a 3-bit field, spread out across bits 7, 15 and 23, which is added to the element index during the loop calculation.

xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates that the array dimensionality for that dimension is 1. A value of xdimsz=2 would indicate that in the first dimension there are 3 elements in the array. The format of the array is therefore as follows:

    array[xdim+1][ydim+1][zdim+1]

However, whilst illustrative of the dimensionality, that does not take the "permute" setting into account. "permute" may be any one of six values (0-5, with values of 6 and 7 being reserved, and not legal). The table below shows how the permutation dimensionality order works:

| permute | order | array format             |
| ------- | ----- | ------------------------ |
| 000     | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
| 001     | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
| 010     | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
| 011     | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
| 100     | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
| 101     | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |

In other words, the "permute" option changes the order in which nested for-loops over the array would be done.
The algorithm below shows this more clearly, and may be executed as a Python program:

    # mapidx = REMAP.shape2
    xdim = 3 # SHAPE[mapidx].xdim_sz+1
    ydim = 4 # SHAPE[mapidx].ydim_sz+1
    zdim = 5 # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0,0,0]  # starting indices
    order = [1,0,2] # experiment with different permutations, here
    offs = 0        # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=" ")
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if (idxs[order[i]] != lims[order[i]]):
                break
            print()
            idxs[order[i]] = 0

Here, it is assumed that this algorithm is run within all pseudo-code throughout this document where a (parallelism) for-loop would normally run from 0 to VL-1 to refer to contiguous register elements; instead, where REMAP indicates to do so, the element index is run through the above algorithm to work out the **actual** element index, instead. Given that there are three possible SHAPE entries, up to three separate registers in any given operation may be simultaneously remapped:

    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                 ireg[rs2+remap(irs2)];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

# Instructions

Despite being a 98% complete and accurate topological remap of RVV concepts and functionality, no new instructions are needed. Compared to RVV: *All* RVV instructions can be re-mapped; however xBitManip becomes a critical dependency for efficient manipulation of predication masks (as a bit-field). Despite the removal of all operations, with the exception of CLIP and VSELECT.X, *all instructions from RVV Base are topologically re-mapped and retain their complete functionality, intact*. Note that if RV64G ever had a MV.X added as well as FCLIP, the full functionality of RVV-Base would be obtained in SV.

Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard equivalents, so are left out of Simple-V.
VSELECT could be included if there existed a MV.X instruction in RV (MV.X is a hypothetical non-immediate variant of MV that would allow another register to specify which register was to be copied). Note that if any of these three instructions are added to any given RV extension, their functionality will be inherently parallelised.

With some exceptions, where it does not make sense or is simply too challenging, all RV-Base instructions are parallelised:

* CSR instructions, whilst a case could be made for fast-polling of a CSR into multiple registers, or for being able to copy multiple contiguously addressed CSRs into contiguous registers, and so on, are the fundamental core basis of SV. If parallelised, extreme care would need to be taken. Additionally, CSR reads are done using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising, so are left as scalar.
* LR/SC could hypothetically be parallelised; however their purpose is single (complex) atomic memory operations where the LR must be followed up by a matching SC. A sequence of parallel LR instructions followed by a sequence of parallel SC instructions therefore is guaranteed to not be useful. Not least: the guarantees of a Multi-LR/SC would be impossible to provide if emulated in a trap.
* EBREAK, NOP, FENCE and others do not use registers so are not inherently parallelisable anyway.

All other operations using registers are automatically parallelised. This includes AMOMAX, AMOSWAP and so on, where particular care and attention must be paid.

Example pseudo-code for an integer ADD operation (including scalar operations). Floating-point uses the FP CSRs.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ?
int_vec[rs2].regidx : rs2;  for (i = 0; i < VL; i++) if (predval & 1< Branch operations use standard RV opcodes that are reinterpreted to be "predicate variants" in the instance where either of the two src registers are marked as vectors (active=1, vector=1). Note that the predication register to use (if one is enabled) is taken from the *first* src register, and that this is used, just as with predicated arithmetic operations, to mask whether the comparison operations take place or not. The target (destination) predication register to use (if one is enabled) is taken from the *second* src register. If either of src1 or src2 are scalars (whether by there being no CSR register entry or whether by the CSR entry specifically marking the register as "scalar") the comparison goes ahead as vector-scalar or scalar-vector. In instances where no vectorisation is detected on either src registers the operation is treated as an absolutely standard scalar branch operation. Where vectorisation is present on either or both src registers, the branch may stil go ahead if any only if *all* tests succeed (i.e. excluding those tests that are predicated out). Note that when zero-predication is enabled (from source rs1), a cleared bit in the predicate indicates that the result of the compare is set to "false", i.e. that the corresponding destination bit (or result)) be set to zero. Contrast this with when zeroing is not set: bits in the destination predicate are only *set*; they are **not** cleared. This is important to appreciate, as there may be an expectation that, going into the hardware-loop, the destination predicate is always expected to be set to zero: this is **not** the case. The destination predicate is only set to zero if **zeroing** is enabled. Note that just as with the standard (scalar, non-predicated) branch operations, BLE, BGT, BLEU and BTGU may be synthesised by inverting src1 and src2. 
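The op_add pseudo-code above can be modelled as a runnable Python program. The `int_vec` table and `get_pred_val` below are simplified stand-ins for the CSR register table and predication CSRs, for illustration only; they are not the specification's actual CSR encoding:

```python
VL = 4
ireg = list(range(32))                    # flat integer register file
int_vec = {2: {"isvector": True, "regidx": 8},    # rd  redirected to x8
           3: {"isvector": True, "regidx": 16},   # rs1 redirected to x16
           4: {"isvector": True, "regidx": 24}}   # rs2 redirected to x24

def get_pred_val(invert, rd):
    return 0b1011                         # example mask: element 2 masked out

def op_add(rd, rs1, rs2):
    predval = get_pred_val(False, rd)
    id = irs1 = irs2 = 0
    rdv  = int_vec.get(rd,  {"isvector": False})
    rs1v = int_vec.get(rs1, {"isvector": False})
    rs2v = int_vec.get(rs2, {"isvector": False})
    rd   = rdv["regidx"]  if rdv["isvector"]  else rd
    rs1  = rs1v["regidx"] if rs1v["isvector"] else rs1
    rs2  = rs2v["regidx"] if rs2v["isvector"] else rs2
    for i in range(VL):
        if predval & (1 << i):
            ireg[rd + id] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
            if not rdv["isvector"]:
                break                     # scalar destination: stop early
        if rdv["isvector"]:  id   += 1
        if rs1v["isvector"]: irs1 += 1
        if rs2v["isvector"]: irs2 += 1

op_add(2, 3, 4)                           # x2/x3/x4 are tagged as vectors
```

Note how the element indices still advance on masked-out iterations: predication skips the write, not the element position.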
In Hwacha EECS-2015-262 Section 6.7.2, pseudocode is given for predicated compare operations of a function "cmp", writing the per-element comparison results, under the control of a predicate mask, into a destination predicate register.

There is no MV instruction in RV, however there is a C.MV instruction. It is used for copying integer-to-integer registers (vectorised FMV is used for copying floating-point). If either the source or the destination register is marked as a vector, C.MV is reinterpreted to be a vectorised (multi-register) predicated move operation. The actual instruction's format does not change:

[[!table data="""
15    12 | 11   7 | 6  2 | 1  0 |
funct4   | rd     | rs   | op   |
4        | 5      | 5    | 2    |
C.MV     | dest   | src  | C0   |
"""]]

A simplified version of the pseudocode for this operation is as follows:

    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;

An earlier draft of SV modified the behaviour of LOAD/STORE (modified the interpretation of the instruction fields). This actually undermined the fundamental principle of SV, namely that there be no modifications to the scalar behaviour (except where absolutely necessary), in order to simplify an implementor's task if considering converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality does not change in SV; however, just as with C.MV, it is important to note that dual-predication is possible.

In vectorised architectures there are usually at least two different modes for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one register specifies the address, and the address is incremented by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector" bit is used. So, for a LOAD, when the src register is set to scalar, the LOADs are sequentially incremented by the src register's element width, and when the src register is set to "vector", the elements are treated as indirection addresses. Simplified pseudo-code would look like this:

    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi indirection)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + i * XLEN/8;
        ireg[rdv+j] <= mem[srcbase + imm];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated, where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register. It is therefore possible to use predicated C.LWSP to efficiently pop registers off the stack (by predicating x2 as the source), cherry-picking which registers to store to (by predicating the destination). Likewise for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported, as with standard LD/ST. Essentially, the only difference is that the use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as general-purpose LOAD/STORE operations.

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE, where the same rules and the same pseudo-code apply as for non-compressed LOAD/STORE. Again: setting scalar or vector mode on the src for LOAD (and on the dest for STORE) switches mode from "Unit Stride" to "Multi-indirection", respectively.
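The twin-predicated op_ld behaviour, with its two addressing modes, can be sketched as a runnable Python model. The memory contents, predicate masks and 8-byte element width below are illustrative assumptions only:

```python
VL = 3
mem = {100: 11, 108: 22, 116: 33, 200: 55, 300: 66}
ireg = [0] * 32

def op_ld(rd, rs, imm, rs_isvec, ps=0b111, pd=0b111):
    i = j = 0
    while i < VL and j < VL:
        while i < VL and not (ps & (1 << i)): i += 1  # skip masked src elems
        while j < VL and not (pd & (1 << j)): j += 1  # skip masked dest elems
        if i == VL or j == VL:
            break
        if rs_isvec:
            srcbase = ireg[rs + i]            # multi-indirection mode
        else:
            srcbase = ireg[rs] + i * 8        # unit-stride mode (8-byte elems)
        ireg[rd + j] = mem[srcbase + imm]
        i += 1
        j += 1

ireg[4] = 100
op_ld(8, 4, 0, rs_isvec=False)                # unit stride: 100, 108, 116
ireg[5], ireg[6] = 200, 300
op_ld(12, 4, 0, rs_isvec=True, ps=0b101)      # indirect, element 1 masked out
```

The second call demonstrates twin predication: masked-out source element 1 is skipped, so only two elements are transferred.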
# Element bitwidth polymorphism

Element bitwidth is best covered as its own special section, as it is quite involved and applies uniformly across-the-board. SV restricts bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry in the register table, and for all memory operations involving load/stores of certain specific sizes, to a completely different width. Thus, in C-style terms, on an RV64 architecture, each register effectively now looks like this:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines which of those union entries is to be used on each operation, and the VL element offset in the hardware-loop specifies the index into each array.

However, a naive interpretation of the data structure above masks the fact that setting VL greater than 8, for example, when the bitwidth is 8, causes accesses to one specific register to "spill over" into the following parts of the register file in a sequential fashion. So a much more accurate way to reflect this would be:

    typedef union {
        uint8_t   actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t   b[0]; // array of type uint8_t
        uint16_t  s[0];
        uint32_t  i[0];
        uint64_t  l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];

where, when accessing any individual regfile[n].b entry, it is permitted (in C) to arbitrarily over-run the *declared* length of the array (zero), and thus "overspill" into consecutive register file entries, in a fashion that is completely transparent to a greatly-simplified software / pseudo-code representation. It is however critical to note that it is clearly the responsibility of the implementor to ensure that, towards the end of the register file, an exception is thrown if an access beyond the "real" register bytes is ever attempted.
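The "overspill" behaviour described above can be demonstrated with a flat bytearray standing in for the C union: at elwidth=8, element 9 of a vector whose CSR entry points at register 5 lands in register 6 (the register numbers and values are arbitrary examples):

```python
import struct

XLEN_BYTES = 8                       # RV64
regfile = bytearray(128 * XLEN_BYTES)

def set_byte_elem(reg, offset, val):
    # models int_regfile[reg].b[offset] = val, with over-run permitted
    regfile[reg * XLEN_BYTES + offset] = val & 0xFF

def get_reg64(reg):
    # models reading int_regfile[reg].l[0]
    return struct.unpack_from("<Q", regfile, reg * XLEN_BYTES)[0]

set_byte_elem(5, 9, 0xAB)            # spills into byte 1 of register 6
```

Register 5 itself remains zero; the write transparently continued into the next register file entry.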
Now we may modify the pseudo-code for an operation where all element bitwidths have been set to the same size; this pseudo-code is otherwise identical to its "non"-polymorphic version (above):

    function op_add(rd, rs1, rs2) # add not VADD!
    ...
    ...
      for (i = 0; i < VL; i++)
    ...
    ...
        // TODO, calculate if over-run occurs, for each elwidth
        if (elwidth == 8) {
           int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                    int_regfile[rs2].b[irs2];
        } else if elwidth == 16 {
           int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                    int_regfile[rs2].s[irs2];
        } else if elwidth == 32 {
           int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                    int_regfile[rs2].i[irs2];
        } else { // elwidth == 64
           int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                    int_regfile[rs2].l[irs2];
        }
    ...
    ...

So here we can see clearly: for 8-bit entries, rd, rs1 and rs2 (and the registers following sequentially on, respectively, from the same) are "type-cast" to 8-bit; for 16-bit entries likewise, and so on.

However that only covers the case where the element widths are the same. Where the element widths are different, the following algorithm applies:

* Analyse the bitwidth of all source operands and work out the maximum. Record this as "maxsrcbitwidth".
* If any given source operand requires sign-extension or zero-extension (ldb, div, rem, mul, sll, srl, sra etc.), instead of the mandatory 32-bit sign-extension / zero-extension (or whatever is specified in the standard RV specification), **change** that to sign/zero-extending from the respective individual source operand's bitwidth (from the CSR table) out to "maxsrcbitwidth" (previously calculated), instead.
* Following the separate and distinct (optional) sign/zero-extension of all source operands, as specifically required for that operation, carry out the operation at "maxsrcbitwidth".
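A minimal illustration of the "type-cast" effect: the same addition carried out at different element widths, with the result truncated to the element width (Python masking stands in for assignment through the union member):

```python
def poly_add(a, b, elwidth_bits):
    # truncate the sum to the element width, as storing through
    # int_regfile[rd].b / .s / .i / .l would
    mask = (1 << elwidth_bits) - 1
    return (a + b) & mask
```

At elwidth=8 the carry out of bit 7 is simply lost, whereas at elwidth=16 the same operands produce 0x100.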
(Note that in the case of LOAD/STORE or MV this may be a "null" (copy) operation, and that with FCVT, the changes to the source and destination bitwidths may also turn FCVT effectively into a copy).

* If the destination operand requires sign-extension or zero-extension, instead of a mandatory fixed size (typically 32-bit for arithmetic, for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh etc.), overload the RV specification with the bitwidth from the destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its destination: memory for sb/sh etc., or an offset section of the register file for an arithmetic operation.

In this way, polymorphic bitwidths are achieved without requiring a massive 64-way permutation of calculations **per opcode**, for example (4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible rd bitwidths). The pseudo-code is therefore as follows:

    typedef union {
        uint8_t  b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;

    bw(elwidth):
        if elwidth == 0: return xlen
        if elwidth == 1: return xlen / 2
        if elwidth == 2: return xlen * 2
        // elwidth == 3:
        return 8

    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = int_csr[rd].elwidth # destination element width
      for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
           // TODO, calculate if over-run occurs, for each elwidth
           src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
           src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
           result = src1 + src2 # actual add (or other operation) here
           set_polymorphed_reg(rd, destwid, id, result)
           if (!int_vec[rd].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

Polymorphic element widths in vectorised form means that the data being loaded (or stored) across multiple registers needs to be treated (reinterpreted) as a contiguous stream of elwidth-wide items, where the source register's element width is **independent** of the destination's.

This makes for a slightly more complex algorithm when using indirection on the "addressed" register (the source for LOAD, the destination for STORE), particularly given that the LOAD/STORE instruction provides important information about the width of the data to be reinterpreted.

Let's illustrate the "load" part, where the pseudo-code for elwidth=default was as follows, with i being the loop index from 0 to VL-1:

    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits

Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide chunks are taken from the source memory location addressed by the current indexed source address register, and only when a full 32-bits-worth have been taken will the index be moved on to the next contiguous source address register:

    bitwidth = bw(elwidth); // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs+i/(elsperblock)]; // integer divide
    offs = i % elsperblock; // modulo
    return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.

Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD and 128 for LQ. The principle is basically exactly the same as if the srcbase were pointing at the memory of the *register* file: memory is re-interpreted as containing groups of elwidth-wide discrete elements.
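An executable model of get/set\_polymorphed\_reg over a byte-backed register file makes the "spill over" into subsequent registers concrete. Sign/zero-extension and the scalar-destination case are omitted here, and the flat-bytearray layout is an illustrative stand-in for the union shown earlier:

```python
import struct

FMT = {8: "<B", 16: "<H", 32: "<I", 64: "<Q"}
regfile = bytearray(128 * 8)          # RV64: 8 bytes per register

def set_polymorphed_reg(reg, bitwidth, offset, val):
    # write element `offset` of width `bitwidth` into the flat regfile
    struct.pack_into(FMT[bitwidth], regfile,
                     reg * 8 + offset * (bitwidth // 8), val)

def get_polymorphed_reg(reg, bitwidth, offset):
    # read element `offset` of width `bitwidth` from the flat regfile
    return struct.unpack_from(FMT[bitwidth], regfile,
                              reg * 8 + offset * (bitwidth // 8))[0]

# element 5 of a 16-bit vector based at register 3 spills into register 4
set_polymorphed_reg(3, 16, 5, 0xBEEF)
```

Reading the same bytes back as element 1 of register 4 returns the identical value, confirming that elements are addressed byte-contiguously across register boundaries.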
When storing the result from a load, it is important to respect the fact that the destination register has its *own separate element width*. Thus, when each element is loaded (at the source element width), any sign-extension or zero-extension (or truncation) needs to be done to the *destination* bitwidth. Also, the storing has the exact same analogous algorithm as above: in fact it is just the set\_polymorphed\_reg pseudocode (completely unchanged) used above.

One issue remains: when the source element width is **greater** than the width of the operation, it is obvious that a single LB for example cannot possibly obtain 16-bit-wide data. This condition may be detected where, when using integer divide, elsperblock (the width of the LOAD divided by the bitwidth of the element) is zero. The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)

The elements, if the element bitwidth is larger than the LD operation's size, will then be sign/zero-extended to the full LD operation size, as specified by the LOAD (LDU instead of LD, LBU instead of LB), before being passed on to the second phase.

As LOAD/STORE may be twin-predicated, it is important to note that the rules on twin predication still apply, except that where in the previous pseudo-code (elwidth=default for both source and target) it was the *registers* that the predication was applied to, it is now the **elements** that the predication is applied to.

Thus the full pseudocode for all LD operations may be written out as follows:

    function LBU(rd, rs): load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs): load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs): load_elwidthed(rd, rs, 16, false)
    ...
    ...
    function LQ(rd, rs): load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
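The elsperblock calculation, with the minimum-of-1 fix applied, works out how many elwidth-wide elements are consumed per LOAD of opwidth bits:

```python
def elsperblock(opwidth, element_bitwidth):
    # elements per LOAD block, clamped to a minimum of 1 so that an
    # element wider than the operation still advances the address index
    return max(1, opwidth // element_bitwidth)
```

With a 32-bit LW and 8-bit elements, four elements are taken per address; with an element wider than the LOAD itself, the clamp keeps the index advancing one address register per element.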
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = int_csr[rd].elwidth # destination element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            # sign/zero-extend (or truncate) to the destination width
            val = sign_or_zero_extend(val, unsigned, bw(destwid))
            set_polymorphed_reg(rd, bw(destwid), j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++;
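The block/offset addressing of load\_memory can be modelled at byte granularity: elwidth-wide chunks come from one indexed source address until a full opwidth's worth has been consumed, then the next address register is used. The memory contents below are an illustrative pattern only:

```python
mem = {a: a & 0xFF for a in range(64)}   # each byte holds its own address
ireg = [0] * 32
ireg[4], ireg[5] = 0, 32                 # two indirection addresses

def load_memory(rs, imm, i, opwidth, elwidth):
    ebytes = elwidth // 8
    elsperblock = max(1, opwidth // elwidth)   # elements per address reg
    srcbase = ireg[rs + i // elsperblock]      # integer divide
    offs = i % elsperblock                     # element offset within block
    addr = srcbase + imm + offs * ebytes
    return int.from_bytes(bytes(mem[addr + k] for k in range(ebytes)),
                          "little")
```

For a 32-bit LW with 8-bit elements, indices 0 to 3 all read through the first address register, and index 4 moves on to the second.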