# Simple-V (Parallelism Extension Proposal) Specification

* Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
* Status: DRAFTv0.6.1
* Last edited: 10 Sep 2019
* Ancillary resource: [[opcodes]]
* Ancillary resource: [[sv_prefix_proposal]]
* Ancillary resource: [[abridged_spec]]
* Ancillary resource: [[vblock_format]]
* Ancillary resource: [[appendix]]

Authors/Contributors:

* Luke Kenneth Casson Leighton
* Allen Baum
* Bruce Hoult
* comp.arch
* Jacob Bachmeyer
* Guy Lemurieux
* Jacob Lifshay
* Terje Mathisen
* The RISC-V Founders, without whom this all would not be possible.

[[!toc ]]

# Summary and Background: Rationale

Simple-V is a uniform parallelism API for RISC-V hardware that has several unplanned side-effects, including code-size reduction, expansion of HINT space, and more. The reason for creating it is to provide a manageable way to turn a pre-existing design into a parallel one, in a step-by-step incremental fashion, without adding any new opcodes, thus allowing the implementor to focus on adding hardware where it is needed and necessary. The primary target is mobile-class 3D GPUs and VPUs, with secondary goals being to reduce executable size (by extending the effectiveness of RV opcodes, RVC in particular) and to reduce context-switch latency.

Critically: **No new instructions are added**. The parallelism (if any is implemented) is implicitly added by tagging *standard* scalar registers for redirection. When such a tagged register is used in any instruction, it indicates that the PC shall **not** be incremented; instead a loop is activated where *multiple* instructions are issued to the pipeline (as determined by a length CSR), with contiguously incrementing register numbers starting from the tagged register. When the last "element" has been reached, only then is the PC permitted to move on. Thus Simple-V effectively sits (slots) *in between* the instruction decode phase and the ALU(s).

The barrier to entry with SV is therefore very low. The minimum compliant implementation is software-emulation (traps), requiring only the CSRs and CSR tables, and that an exception be thrown if an instruction's registers are detected to have been tagged. The looping that would otherwise be done in hardware is thus carried out in software, instead. Whilst much slower, it is "compliant" with the SV specification, and may be suited for implementation in RV32E and also in situations where the implementor wishes to focus on certain aspects of SV, without pouring unnecessary time and resources into the silicon, whilst also conforming strictly with the API. A good area to punt to software would be the polymorphic element width capability, for example.

Hardware Parallelism, if any, is therefore added at the implementor's discretion to turn what would otherwise be a sequential loop into a parallel one.

To emphasise that clearly: Simple-V (SV) is *not*:

* A SIMD system
* A SIMT system
* A Vectorisation Microarchitecture
* A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
* A supercomputer extension

SV does **not** tell implementors how or even if they should implement parallelism: it is a hardware "API" (Application Programming Interface) that, if implemented, presents a uniform and consistent way to *express* parallelism, at the same time leaving the choice of if, how, how much, when and whether to parallelise operations **entirely to the implementor**.

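To make the hardware-loop concept concrete, the following is a minimal, non-normative sketch (in Python) of how a single tagged scalar ADD conceptually expands into VL element operations. All names in it (`regfile`, `is_vector`, `sv_add`, `VL`) are purely illustrative and are not part of the specification; real implementations perform this expansion (or its software-emulated equivalent) at instruction issue.

    # Non-normative sketch: conceptual expansion of one scalar ADD whose
    # registers have been tagged as vectors of length VL.
    # All names here are illustrative only, not part of the specification.
    regfile = [0] * 128          # SV redirection allows access to up to 128 registers
    VL = 4                       # set via the Vector Length CSR
    is_vector = {3: True, 7: True, 11: True}   # "tagged" register numbers

    def sv_add(rd, rs1, rs2):
        """Standard scalar ADD, macro-expanded because its registers are tagged."""
        for i in range(VL):
            # contiguously incrementing register numbers, starting from the tag
            d  = rd  + (i if is_vector.get(rd)  else 0)
            s1 = rs1 + (i if is_vector.get(rs1) else 0)
            s2 = rs2 + (i if is_vector.get(rs2) else 0)
            regfile[d] = regfile[s1] + regfile[s2]
        # only now does the PC move on to the next instruction

    regfile[7:11]  = [1, 2, 3, 4]        # vector starting at x7
    regfile[11:15] = [10, 20, 30, 40]    # vector starting at x11
    sv_add(3, 7, 11)                     # one scalar opcode, VL element operations
    print(regfile[3:7])                  # [11, 22, 33, 44]
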
# Basic Operation

The principle of SV is as follows:

* Standard RV instructions are "prefixed" (extended) through a 48/64 bit format (single instruction option) or a variable length VLIW-like prefix (multi or "grouped" option).
* The prefix(es) indicate which registers are "tagged" as "vectorised". Predicates can also be added, and element widths overridden on any src or dest register.
* A "Vector Length" CSR is set, indicating the span of any future "parallel" operations.
* If any operation (a **scalar** standard RV opcode) uses a register that has been so "marked" ("tagged"), a hardware "macro-unrolling loop" is activated, of length VL, that effectively issues **multiple** identical instructions using contiguous sequentially-incrementing register numbers, based on the "tags".
* **Whether they be executed sequentially or in parallel or a mixture of both or punted to software-emulation in a trap handler is entirely up to the implementor**.

In this way an entire scalar algorithm may be vectorised with the minimum of modification to the hardware and to compiler toolchains.

To reiterate: **There are *no* new opcodes**. The scheme works *entirely* on hidden context that augments *scalar* RISC-V instructions.

# CSRs

* An optional "reshaping" CSR key-value table which remaps from a 1D linear shape to 2D or 3D, including full transposition.

There are six additional CSRs, available in any privilege level:

* MVL (the Maximum Vector Length)
* VL (sets which scalar register is to be the Vector Length)
* SUBVL (effectively a kind of SIMD)
* STATE (containing copies of MVL, VL and SUBVL as well as context information)
* SVPSTATE (state information for SVPrefix)
* PCVBLK (the current operation being executed within a VBLOCK Group)

For User Mode there are the following CSRs:

* uePCVBLK (a copy of the sub-execution Program Counter, that is relative to the start of the current VBLOCK Group, set on a trap).
* ueSTATE (useful for saving and restoring during context switch, and for providing fast transitions)
* ueSVPSTATE, when SVPrefix is implemented

Note: ueSVPSTATE is mirrored in the top 32 bits of ueSTATE.

There are also three additional CSRs for Supervisor-Mode:

* sePCVBLK
* seSTATE (which contains seSVPSTATE)
* seSVPSTATE

And likewise for M-Mode:

* mePCVBLK
* meSTATE (which contains meSVPSTATE)
* meSVPSTATE

The u/m/s CSRs are treated and handled exactly like their (x)epc equivalents. On entry to or exit from a privilege level, the contents of its (x)eSTATE are swapped with STATE.

Thus for example, a User Mode trap will end up swapping STATE and ueSTATE (on both entry and exit), allowing User Mode traps to have their own Vectorisation Context set up, separated from and unaffected by normal user applications. If an M Mode trap occurs in the middle of the U Mode trap, STATE is swapped with meSTATE, and restored on exit: the U Mode trap continues unaware that the M Mode trap even occurred. Likewise, Supervisor Mode may perform context-switches, safe in the knowledge that its Vectorisation State is unaffected by User Mode.

The access pattern for these groups of CSRs in each mode follows the same pattern as for other CSRs that have M-Mode and S-Mode "mirrors":

* In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
* In S-Mode, accessing and changing of the M-Mode CSRs is transparently identical to changing the S-Mode CSRs. Accessing and changing the U-Mode CSRs is permitted.
* In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs is prohibited.

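As a non-normative illustration of the swap rule above, the following Python sketch shows STATE being parked in the relevant (x)eSTATE on trap entry and restored on exit. The `SVCSRs` class and `swap_state` helper are invented names for illustration only, not architectural state.

    # Non-normative sketch of (x)eSTATE swapping on privilege transitions.
    # Class, helper and field names are illustrative only.
    class SVCSRs:
        def __init__(self):
            self.STATE = 0            # active Vectorisation Context
            self.ueSTATE = 0          # User-Mode trap copy
            self.seSTATE = 0          # Supervisor-Mode copy
            self.meSTATE = 0          # Machine-Mode copy

    def swap_state(csrs, mode):
        """Swap STATE with the given privilege level's (x)eSTATE.
        The same swap is performed on both trap entry and trap exit."""
        attr = {"U": "ueSTATE", "S": "seSTATE", "M": "meSTATE"}[mode]
        csrs.STATE, parked = getattr(csrs, attr), csrs.STATE
        setattr(csrs, attr, parked)

    csrs = SVCSRs()
    csrs.STATE = 0x1234               # application's vectorisation context
    swap_state(csrs, "M")             # M-Mode trap entry: context is parked
    assert csrs.meSTATE == 0x1234 and csrs.STATE == 0
    swap_state(csrs, "M")             # trap exit: application context restored
    assert csrs.STATE == 0x1234
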
An interesting side effect of SV STATE being separate and distinct in S-Mode is that Vectorised saving of an entire register file to the stack is a single instruction (through accidental provision of LOAD-MULTI semantics). If the SVPrefix P64-LD-type format is used, LOAD-MULTI may even be done with a single standalone 64 bit opcode (P64 may set up SVPSTATE.SUBVL, SVPSTATE.VL and SVPSTATE.MVL from an immediate field, to cover the full regfile). It can even be predicated, which opens up some very interesting possibilities.

(x)EPCVBLK CSRs must be treated exactly like their corresponding (x)epc equivalents. See the VBLOCK section for details.

## MAXVECTORLENGTH (MVL)

MAXVECTORLENGTH is the same concept as MVL in RVV, except that it is variable length and may be dynamically set. MVL is however limited to the regfile bitwidth XLEN (1-32 for RV32, 1-64 for RV64 and so on).

The reason for setting this limit is so that predication registers, when marked as such, may fit into a single register as opposed to fanning out over several registers. This keeps the hardware implementation a little simpler.

The other important factor to note is that the actual MVL is internally stored **offset by one**, so that it can fit into only 6 bits (for RV64) and still cover a range up to XLEN bits. Attempts to set MVL to zero will raise an exception. This is expressed more clearly in the "pseudocode" section, where there are subtle differences between CSRRW and CSRRWI.

## Vector Length (VL)

VL is very different from RVV's VL. It contains the scalar register *number* that is to be treated as the Vector Length. It is a sub-field of STATE. When set to zero (x0) VL (vectorisation) is disabled.

Implementations realistically should keep a cached copy of the register pointed to by VL in the instruction issue and decode phases. Out-of-Order Engines must then, if it is not x0, add this register to Vectorised instruction Dependency Checking as an additional read/write hazard as appropriate.

Setting VL via this CSR is very unusual. It should not normally be needed except when [[specification/sv.setvl]] is not implemented. Note that unlike in sv.setvl, setting VL does not change the contents of the scalar register that it points to, although if the scalar register's contents are not within the range of MVL at the time that VL is set, an illegal instruction exception must be raised.

## SUBVL - Sub Vector Length

This is a "group by quantity" that effectively asks each iteration of the hardware loop to load SUBVL elements of width elwidth at a time. Effectively, SUBVL is like a SIMD multiplier: instead of just 1 operation being issued, SUBVL operations are issued.

Another way to view SUBVL is that each element in the VL-length vector is now SUBVL times elwidth bits in length and now comprises SUBVL discrete sub-operations. This can be viewed as an inner SUBVL hardware for-loop within a VL hardware for-loop in effect, with the sub-element index increased every time in the innermost loop. This is best illustrated in the (simplified) pseudocode example in the [[appendix]], and in the short sketch at the end of this section.

The primary use case for SUBVL is for 3D FP Vectors. A Vector of 3D coordinates X,Y,Z for example may be loaded and multiplied then stored, per VL element iteration, rather than having to set VL to three times larger.

Setting this CSR to 0 must raise an exception. Setting it to a value greater than 4 must likewise raise an exception.

To see the relationship with STATE, see below.

The main effect of SUBVL is that predication bits are applied per **group**, rather than by individual element.

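The following minimal, non-normative Python sketch (all names illustrative, not part of the specification) shows the SUBVL inner loop nested inside the VL loop, with the predicate bit consulted once per group rather than once per sub-element:

    # Non-normative sketch: the inner SUBVL loop nested inside the VL loop.
    # One predicate bit covers an entire SUBVL group of sub-elements.
    # All names here are illustrative only.
    VL, SUBVL = 4, 3                       # e.g. four 3D (X, Y, Z) vectors
    predicate = 0b1011                     # one bit per *group*, not per sub-element

    def element_op(group, sub):
        print(f"operate on group {group}, sub-element {sub}")

    for i in range(VL):                    # outer hardware loop (VL)
        if (predicate >> i) & 1:           # predicate tested once per group
            for j in range(SUBVL):         # inner hardware loop (SUBVL)
                element_op(i, j)
        # in non-zeroing mode a masked-out group is simply skipped;
        # in zeroing mode all SUBVL sub-elements of the group are zeroed
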
Applying predication per group saves a not insignificant number of instructions when handling 3D vectors, as otherwise a much longer predicate mask would have to be set up with regularly-repeated bit patterns.

See the SUBVL Pseudocode illustration in the [[appendix]] for details.

## STATE

out of date, see

This is a standard CSR that contains sufficient information for a full context save/restore. It contains (and permits setting of):

* MVL
* VL
* destoffs - the destination element offset of the current parallel instruction being executed
* srcoffs - for twin-predication, the source element offset as well.
* SUBVL
* svdestoffs - the subvector destination element offset of the current parallel instruction being executed

Interestingly, STATE may hypothetically also be modified to make the immediately-following instruction skip a certain number of elements, by playing with destoffs and srcoffs (and the subvector offsets as well).

Setting destoffs and srcoffs is realistically intended for saving state so that exceptions (page faults in particular) may be serviced and the hardware-loop that was being executed at the time of the trap, from user-mode (or Supervisor-mode), may be returned to and continued from exactly where it left off. The reason why this works is because setting User-Mode STATE will not change (and will not be used in) M-Mode or S-Mode (and is entirely why M-Mode and S-Mode have their own STATE CSRs, meSTATE and seSTATE).

The format of the STATE CSR is as follows:

| (31..28) | (27..26) | (25..24) | (23..18) | (17..12) | (11..6) | (5...0) |
| -------- | -------- | -------- | -------- | -------- | ------- | ------- |
| rsvd     | dsvoffs  | subvl    | destoffs | srcoffs  | vl      | maxvl   |

Legal values of vl are between 0 and 31.

The relationship between SUBVL and the subvl field is:

| SUBVL | (25..24) |
| ----- | -------- |
| 1     | 0b00     |
| 2     | 0b01     |
| 3     | 0b10     |
| 4     | 0b11     |

When setting this CSR, the following characteristics will be enforced:

* **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
* **VL** must be set to a scalar register between 0 and 31.
* **SUBVL**, which sets a SIMD-like quantity, has only 4 values, so there are no changes needed
* **srcoffs** will be truncated to be within the range 0 to VL-1
* **destoffs** will be truncated to be within the range 0 to VL-1
* **dsvoffs** will be truncated to be within the range 0 to SUBVL-1

NOTE: if the following instruction is not a twin predicated instruction, and destoffs or dsvoffs has been set to non-zero, subsequent execution behaviour is undefined. **USE WITH CARE**.

NOTE: sub-vector looping does not require a twin-predicate corresponding index, because sub-vectors use the *main* (VL) loop predicate bit.

When SVPrefix is implemented, it can have its own VL, MVL and SUBVL, as well as element offsets. SVSTATE.VL acts slightly differently in that it is no longer a pointer to a scalar register but is an actual value, just like RVV's VL.

The format of SVSTATE, which fits into *both* the top bits of STATE and also into a separate CSR, is as follows:

| (31..28) | (27..26) | (25..24) | (23..18) | (17..12) | (11..6) | (5...0) |
| -------- | -------- | -------- | -------- | -------- | ------- | ------- |
| rsvd     | dsvoffs  | subvl    | destoffs | srcoffs  | vl      | maxvl   |

### Hardware rules for when to increment STATE offsets

The offsets inside STATE are like the indices in a loop, except in hardware. They are also partially (conceptually) similar to a "sub-execution Program Counter".

As such, and to allow proper context switching and to define correct exception behaviour, the following rules must be observed:

* When the VL CSR is set, srcoffs and destoffs are reset to zero.
* Each instruction that contains a "tagged" register shall start execution at the *current* value of srcoffs (and destoffs in the case of twin predication).
* Unpredicated bits (in non-zeroing mode) shall cause the element operation to skip, incrementing the srcoffs (or destoffs).
* On execution of an element operation, Exceptions shall **NOT** cause srcoffs or destoffs to increment.
* On completion of the full Vector Loop (srcoffs = VL-1 or destoffs = VL-1 after the last element is executed), both srcoffs and destoffs shall be reset to zero.

This latter is why srcoffs and destoffs may be stored as values from 0 to XLEN-1 in the STATE CSR, because as loop indices they refer to elements. srcoffs and destoffs never need to be set to VL: their maximum operating values are limited to 0 to VL-1.

The same corresponding rules apply to SUBVL, svsrcoffs and svdestoffs.

## MVL and VL Pseudocode

The pseudo-code for get and set of VL and MVL uses the following internal functions:

    set_mvl_csr(value, rd):
        STATE.MVL = MIN(value, STATE.MVL)

    get_mvl_csr(rd):
        regs[rd] = STATE.MVL

    set_vl_csr(value, rd):
        STATE.VL = MIN(value, STATE.MVL)
        regs[rd] = STATE.VL  # returns the *new* value of VL, not the old one
        return STATE.VL

    get_vl_csr(rd):
        regs[rd] = STATE.VL
        return STATE.VL

Note that where setting MVL behaves as a normal CSR (returns the old value), unlike standard CSR behaviour, setting VL will return the **new** value of VL, **not** the old one.

For CSRRWI, the range of the immediate is restricted to 5 bits. In order to maximise the effectiveness, an immediate of 0 is used to set VL=1, an immediate of 1 is used to set VL=2 and so on:

    CSRRWI_Set_MVL(value):
        set_mvl_csr(value+1, x0)

    CSRRWI_Set_VL(value):
        set_vl_csr(value+1, x0)

However for CSRRW the following pseudocode is used for MVL and VL, where setting the value to zero will cause an exception to be raised. The reason is that if VL or MVL are set to zero, the STATE CSR is not capable of storing that value.

    CSRRW_Set_MVL(rs1, rd):
        value = regs[rs1]
        if value == 0 or value > XLEN:
            raise Exception
        set_mvl_csr(value, rd)

    CSRRW_Set_VL(rs1, rd):
        value = regs[rs1]
        if value == 0 or value > XLEN:
            raise Exception
        set_vl_csr(value, rd)

In this way, when CSRRW is utilised with a loop variable, the value that goes into VL (and into the destination register) may be used in an instruction-minimal fashion:

    CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
    CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
    CSRRWI MVL, 3             # sets MVL == **4** (not 3)
    j zerotest                # in case loop counter a0 already 0
    loop:
    CSRRW VL, t0, a0          # vl = t0 = min(mvl, a0)
    ld a3, a1                 # load 4 registers a3-6 from x
    slli t1, t0, 3            # t1 = vl * 8 (in bytes)
    ld a7, a2                 # load 4 registers a7-10 from y
    add a1, a1, t1            # increment pointer to x by vl*8
    fmadd a7, a3, fa0, a7     # v1 += v0 * fa0 (y = a * x + y)
    sub a0, a0, t0            # n -= vl (t0)
    st a7, a2                 # store 4 registers a7-10 to y
    add a2, a2, t1            # increment pointer to y by vl*8
    zerotest:
    bnez a0, loop             # repeat if n != 0

With the STATE CSR, just like with CSRRWI, in order to maximise the utilisation of the limited bitspace, "000000" in binary represents VL==1, "000001" represents VL==2 and so on (likewise for MVL):

    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd)            # rd receives the *old* STATE value
        set_mvl_csr(value[5:0]+1, x0)
        set_vl_csr(value[11:6]+1, x0)
        STATE.srcoffs  = value[17:12]
        STATE.destoffs = value[23:18]

    get_state_csr(rd):
        regs[rd] = (STATE.MVL-1) | (STATE.VL-1)<<6 | (STATE.srcoffs)<<12 |
                   (STATE.destoffs)<<18
        return regs[rd]

In both cases, whilst CSR reads of VL and MVL return the exact values of VL and MVL respectively, reading and writing the STATE CSR returns those values **minus one**. This is absolutely critical to implement if the STATE CSR is to be used for fast context-switching.

## VL, MVL and SUBVL instruction aliases

This table contains pseudo-assembly instruction aliases. Note the subtraction of 1 from the CSRRWI pseudo variants, to compensate for the reduced range of the 5 bit immediate.

| alias           | CSR                   |
| -               | -                     |
| SETVL rd, rs    | CSRRW VL, rd, rs      |
| SETVLi rd, #n   | CSRRWI VL, rd, #n-1   |
| GETVL rd        | CSRRW VL, rd, x0      |
| SETMVL rd, rs   | CSRRW MVL, rd, rs     |
| SETMVLi rd, #n  | CSRRWI MVL, rd, #n-1  |
| GETMVL rd       | CSRRW MVL, rd, x0     |

Note: CSRRC and other bit-setting operations may still be used; they are however not particularly useful (very obscure).

## Register key-value (CAM) table

*NOTE: in prior versions of SV, this table used to be writable and accessible via CSRs. It is now stored in the VBLOCK instruction format. Note that this table does *not* get applied to the SVPrefix P48/64 format, only to scalar opcodes.*

The purpose of the Register table is three-fold:

* To mark integer and floating-point registers as requiring "redirection" if they are ever used as a source or destination in any given operation. This involves a level of indirection through a 5-to-7-bit lookup table, such that **unmodified** operands with 5 bits (3 for some RVC ops) may access up to **128** registers.
* To indicate whether, after redirection through the lookup table, the register is a vector (or remains a scalar).
* To over-ride the implicit or explicit bitwidth that the operation would normally give the register.

Note: clearly, if an RVC operation uses a 3 bit spec'd register (x8-x15) and the Register table contains entries that only refer to registers x1-x14 or x16-x31, such operations will *never* activate the VL hardware loop!

If however the (16 bit) Register table does contain such an entry (x8-x15, or x2 in the case of LWSP), that src or dest reg may be redirected anywhere within the *full* 128 register range. Thus, RVC becomes far more powerful and has many more opportunities to reduce code size than in Standard RV32/RV64 executables.

[[!inline raw="yes" pages="simple_v_extension/reg_table_format" ]]

i/f is set to "1" to indicate that the redirection/tag entry is to be applied to integer registers; 0 indicates that it is relevant to floating-point registers.

The 8 bit format is used for a much more compact expression. "isvec" is implicit and, similar to [[sv_prefix_proposal]], the target vector is "regnum<<2", implicitly. Contrast this with the 16-bit format where the target vector is *explicitly* named in bits 8 to 14, and bit 15 may optionally set "scalar" mode.

Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc., and thus the "vector" mode need only shift the (6 bit) regnum by 1 to get the actual (7 bit) register number to use, there is not enough space in the 8 bit format (only 5 bits for regnum), so "regnum<<2" is required.

vew has the following meanings, indicating that the instruction's operand size is "over-ridden" in a polymorphic fashion:

| vew | bitwidth            |
| --- | ------------------- |
| 00  | default (XLEN/FLEN) |
| 01  | 8 bit               |
| 10  | 16 bit              |
| 11  | 32 bit              |

As the above table is a CAM (key-value store) it may be appropriate (faster, implementation-wise) to expand it as follows:

[[!inline raw="yes" pages="simple_v_extension/reg_table" ]]

## Predication Table

*NOTE: in prior versions of SV, this table used to be writable and accessible via CSRs. It is now stored in the VBLOCK instruction format. The table does **not** apply to SVPrefix opcodes.*

The Predication Table is a key-value store indicating whether, if a given destination register (integer or floating-point) is referred to in an instruction, it is to be predicated. Like the Register table, it is an indirect lookup that allows the RV opcodes to not need modification.

It is particularly important to note that the *actual* register used can be *different* from the one that is in the instruction, due to the redirection through the lookup table.

* regidx is the register that, in combination with the i/f flag, causes the lookup table to be referenced (to find the predication mask to use for this operation) whenever that integer or floating-point register is referred to in a (standard RV) instruction.
* predidx is the *actual* (full, 7 bit) register to be used for the predication mask.
* inv indicates that the predication mask bits are to be inverted prior to use, *without* actually modifying the contents of the register from which those bits originated.
* zeroing is either 1 or 0, and if set to 1, the operation must place zeros in any element position where the predication mask is set to zero. If zeroing is set to 0, unpredicated elements *must* be left alone. Some microarchitectures may choose to interpret this as skipping the operation entirely. Others which wish to stick more closely to a SIMD architecture may choose instead to interpret unpredicated elements as an internal "copy element" operation (which would be necessary in SIMD microarchitectures that perform register-renaming).
* ffirst is a special mode that stops sequential element processing when a data-dependent condition occurs, whether a trap or a conditional test.

The handling of each (trap or conditional test) is slightly different: see the Instruction sections for further details.

[[!inline raw="yes" pages="simple_v_extension/pred_table_format" ]]

The 8 bit format is a compact and less expressive variant of the full 16 bit format. Using the 8 bit format is very different: the predicate register to use is implicit, and numbering begins implicitly from x9. The regnum is still used to "activate" predication, in the same fashion as described above.

The 16 bit Predication CSR Table is a key-value store, so implementation-wise it will be faster to turn the table around (maintain topologically equivalent state). Opportunities then exist to access registers in unary form instead of binary, saving gates and power by only activating "redirection" with a single AND gate, instead of multiple multi-bit XORs (a CAM):

[[!inline raw="yes" pages="simple_v_extension/pred_table" ]]

So when an operation is to be predicated, it is the internal state that is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following pseudo-code for operations is given, where p is the explicit (direct) reference to the predication register to be used:

    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
           (d ? vreg[rd][i] : sreg[rd]) =
            iop(s1 ? vreg[rs1][i] : sreg[rs1],
                s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs

ffirst is a special data-dependent predicate mode. There are two variants: one is for faults, typically for LOAD/STORE operations, which may encounter end-of-page faults during a series of operations. The other variant is comparisons such as FEQ (or the augmented behaviour of Branch), and any operation that returns a result of zero (whether integer or floating-point). In the FP case, this includes negative-zero.

ffirst interacts with zero- and non-zero predication. In non-zeroing mode, masked-out operations are simply excluded from testing (they can never fail). However for fail-comparisons (not faults) in zeroing mode, the result will be zero: this *always* "fails", thus on the very first masked-out element ffirst will always terminate.

Note that ffirst mode works because the execution order must "appear" to be in "program order". An in-order architecture must execute the element operations in sequence, whilst an out-of-order architecture must *commit* the element operations in sequence and cancel speculatively-executed ones (giving the appearance of in-order execution).

Note also that if ffirst mode is needed without predication, a special "always-on" Predicate Table Entry may be constructed by setting inverse-on and using x0 as the predicate register. This will have the effect of creating a mask of all ones, allowing ffirst to be set.

See [[appendix]] for more details on fail-on-first modes, as well as pseudo-code, below.

## REMAP and SHAPE CSRs

See optional [[remap]] section.

# Instruction Execution Order

Simple-V behaves as if it is a hardware-level "macro expansion system", substituting and expanding a single instruction into multiple sequential instructions with contiguous and sequentially-incrementing registers. As such, it does **not** modify - or specify - the behaviour and semantics of the execution order: that may be deduced from the **existing** RV specification in each and every case.

So for example if a particular micro-architecture permits out-of-order execution, and it is augmented with Simple-V, then wherever instructions may be out-of-order then so may the "post-expansion" SV ones.

If on the other hand there are memory guarantees which specifically prevent and prohibit certain instructions from being re-ordered (such as the Atomicity Axiom, or FENCE constraints), then clearly those constraints **MUST** also be obeyed "post-expansion".

It should be absolutely clear that SV is **not** about providing new functionality or changing the existing behaviour of a micro-architectural design, or about changing the RISC-V Specification. It is **purely** about compacting what would otherwise be contiguous instructions that use sequentially-increasing register numbers down to the **one** instruction.

# Instructions

See [[appendix]] for specific cases where instruction behaviour is augmented.

A greatly simplified example is below. Note that this is the ADD implementation, not a separate VADD instruction:

[[!inline raw="yes" pages="simple_v_extension/simple_add_example" ]]

Note that several things have been left out of this example. See [[appendix]] for additional examples that show how to add support for additional features (twin predication, elwidth, zeroing, SUBVL etc.)

Branches in particular have been transparently augmented to include "collation" of comparison results into a tagged register.

# Exceptions

Exceptions may occur at any time, in any given underlying scalar operation. This implies that context-switching (traps) may occur, and operation must be returned to where it left off. That in turn implies that the full state - including the current parallel element being processed - has to be saved and restored. This is what the **STATE** and **PCVBLK** CSRs are for.

The implications are that all underlying individual scalar operations "issued" by the parallelisation have to appear to be executed sequentially. The further implications are that if two or more individual element operations are underway, and one with an earlier index causes an exception, it will be necessary for the microarchitecture to **discard** or terminate operations with higher indices. Optimised microarchitectures could hypothetically store (cache) results, for subsequent replay if appropriate.

In short: exception handling **MUST** be precise, in-order, and exactly like Standard RISC-V as far as the instruction execution order is concerned, regardless of whether it is PC, PCVBLK, VL or SUBVL that is currently being incremented.

# Hints

A "HINT" is an operation that has no effect on architectural state, where its use may, by agreed convention, give advance notification to the microarchitecture: branch prediction notification would be a good example. Usually HINTs are where rd=x0.

With Simple-V being capable of issuing *parallel* instructions where rd=x0, the space for possible HINTs is expanded considerably. VL could be used to indicate different hints. In addition, if predication is set, the predication register itself could hypothetically be passed in as a *parameter* to the HINT operation.

No specific hints are yet defined in Simple-V.

# Vector Block Format

The VBLOCK Format allows Register, Predication and Vector Length to be contextually associated with a group of RISC-V scalar opcodes. The format is as follows:

[[!inline raw="yes" pages="simple_v_extension/vblock_format_table" ]]

For more details, including the CSRs, see ancillary resource: [[vblock_format]]

# Under consideration

See [[discussion]]