# OPF ISA WG External RFC LS001 v2 14Sep2022

* RFC Author: Luke Kenneth Casson Leighton.
* RFC Contributors/Ideas: Brad Frey, Paul Mackerras, Konstantinos Magritis,
  Cesar Strauss, Jacob Lifshay, Toshaan Bharvani, Dimitry Selyutin, Andrey
  Miroshnikov
* Funded by NLnet under the Privacy and Enhanced Trust Programme, EU
  Horizon2020 Grant 825310
* [[ls001/discussion]]

This proposal is to extend the Power ISA with an Abstract RISC-Paradigm
Vectorisation Concept that may be orthogonally applied to **all and any**
suitable Scalar instructions, present and future, in the Scalar Power ISA.
The Vectorisation System is called
["Simple-V"](https://libre-soc.org/openpower/sv/)
and the Prefix Format is called
["SVP64"](https://libre-soc.org/openpower/sv/).
**Simple-V is not a Traditional Vector ISA and therefore does not add
Vector opcodes or regfiles**.
An ISA Concept similar to Simple-V was originally invented in 1994 by
Peter Hsu (Architect of the MIPS R8000) but was dropped as MIPS did not
have an Out-of-Order Microarchitecture at the time.

Simple-V is designed for Embedded Scenarios right the way through
Audio/Visual DSPs to 3D GPUs and Supercomputing. As it does **not** add
actual Vector Instructions, relying solely and exclusively on the
**Scalar** ISA, it is **Scalar** instructions that need to be added to the
**Scalar** Power ISA before Simple-V may orthogonally Vectorise them.

The goal of RED Semiconductor Ltd, an OpenPOWER Stakeholder, is to bring
to market mass-volume general-purpose compute processors that are
competitive in the 3D GPU Audio Visual DSP EDGE IoT desktop chromebook
netbook smartphone laptop markets, performance-leveraged by Simple-V.
To achieve this goal both Simple-V and the accompanying **Scalar** Power
ISA instructions are needed. These include IEEE754
[Transcendentals](https://libre-soc.org/openpower/transcendentals/)
[AV](https://libre-soc.org/openpower/sv/av_opcodes/)
cryptographic
[Biginteger](https://libre-soc.org/openpower/sv/biginteger/) and
[bitmanipulation](https://libre-soc.org/openpower/sv/bitmanip)
operations present in ARM Intel AMD and many other ISAs.
Three additional FP-related sets are needed (missing from SFFS) -
[int_fp_mv](https://libre-soc.org/openpower/sv/int_fp_mv/)
[fclass](https://libre-soc.org/openpower/sv/fclass/) and
[fcvt](https://libre-soc.org/openpower/sv/fcvt/)
and one set named
[crweird](https://libre-soc.org/openpower/sv/cr_int_predication/)
increases the capability of CR Fields.

*Thus as the primary motivation is to create a **Hybrid 3D CPU-GPU-VPU
ISA** it becomes necessary to consider the Architectural Resource
Allocation of not just Simple-V but the 80-100 Scalar instructions all at
the same time*.

It is also critical to note that Simple-V **does not modify the Scalar
Power ISA**, that **only** Scalar words may be Vectorised, and that
Vectorised instructions are **not** permitted to be different from their
Scalar words (`addi` must use the same Word encoding as `sv.addi`, and any
new Prefixed instruction added **must** also be added as Scalar). The sole
semi-exception is Vectorised Branch Conditional, in order to provide the
usual Advanced Branching capability present in every Commercial 3D GPU
ISA, but it is the *Vectorised* Branch-Conditional that is augmented, not
Scalar Branch.
# Basic principle

The inspiration for Simple-V came from the fact that, on examination of
every Vector ISA pseudocode encountered, the Vector operations were
expressed as a for-loop on a Scalar element operation, and then both a
Scalar **and** a Vector instruction was added. With
[Zero-Overhead Looping](https://en.m.wikipedia.org/wiki/Zero-overhead_looping)
*already* being common for over four decades it felt natural to separate
the looping at both the ISA and the Hardware Level and thus provide only
Scalar instructions (instantly halving the number of instructions), but
rather than go the VLIW route (TI MSP Series) keep closely to existing
Power ISA standard Scalar execution.

Thus the basic principle of Simple-V is to provide a Precise-Interruptible
Zero-Overhead Loop system[^zolc] with associated register "offsetting"
which augments a Suffixed instruction as a "template", incrementing the
register numbering progressively *and automatically* each time round the
"loop". Thus it may be considered to be a form of "Sub-Program-Counter"
and at its simplest level can replace a large sequence of
regularly-increasing loop-unrolled instructions with just two: one to set
the Vector length and one saying where to start from in the regfile.

On this sound and profoundly simple concept, which leverages *Scalar*
Micro-architectural capabilities, much more comprehensive features are
easy to add, working up towards an ISA that easily matches the capability
of powerful 3D GPU Vector Supercomputing ISAs, without ever adding even
one single Vector opcode.

# Extension Levels

Simple-V has been subdivided into levels akin to the Power ISA Compliancy
Levels. For now let us call them "SV Extension Levels" to differentiate
the two. The reason for the
[SV Extension Levels](https://libre-soc.org/openpower/sv/compliancy_levels/)
is the same as for the Power ISA Compliancy Levels (SFFS, SFS): to not
overburden implementors with features that they do not need. *There is no
dependence between the two types of Levels*. The resources below therefore
are not all required for all SV Extension Levels, but they are all
required to be reserved.

# Binary Interoperability

Power ISA has a reputation as being long-term stable.
**Simple-V guarantees binary interoperability** by defining fixed register
file bitwidths and size for a given set of instructions. The seduction of
permitting different implementors to choose a register file bitwidth and
size with the same instructions unfortunately has the catastrophic
side-effect of introducing not only binary incompatibility but silent data
corruption, as well as no means to trap-and-emulate differing
bitwidths.[^vsx256]

"Silicon-Partner" Scalability is identical to attempting to run 64-bit
Power ISA binaries without setting - or having - `MSR.SF` on "Scaled"
32-bit hardware: **the same opcodes** were shared between 32 and 64 bit.
`RESERVED` space is thus crucial to have, in order to provide the
**OPF ISA WG** - not implementors ("Silicon Partners") - with the option
to properly review and decide any (if any) future expanded register file
bitwidths and sizes[^msr], **under explicitly-distinguishable encodings**
so as to guarantee long-term stability and binary interoperability.

# Hardware Implementations

The fundamental principle of Simple-V is that it sits between Issue and
Decode, pausing the Program-Counter to service a "Sub-PC" hardware
for-loop. This is very similar to
[Zero-Overhead Loops](https://en.m.wikipedia.org/wiki/Zero-overhead_looping)
in High-end DSPs (TI MSP Series).
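To make the "Sub-PC" concept concrete, here is a minimal, non-normative
Python sketch (the names `sv_add`, `scalar_add` and the flat `regs` list
are invented for illustration and are not part of the specification) of
what the hardware for-loop conceptually does for a Vectorised add: the
Scalar operation is held as a template and repeated VL times with
progressively-offset register numbers.

```
# Conceptual model only: a Vectorised add is the Scalar "add" template
# re-issued VL times with the register numbers offset by the element
# index (the "srcstep"/"dststep" of SVSTATE, in specification terms).

def scalar_add(regs, rt, ra, rb):
    # the unmodified Scalar operation: the Suffix acting as a "template"
    regs[rt] = (regs[ra] + regs[rb]) & 0xFFFF_FFFF_FFFF_FFFF

def sv_add(regs, vl, rt, ra, rb):
    # the "Sub-PC" hardware for-loop between Issue and Decode
    for i in range(vl):
        scalar_add(regs, rt + i, ra + i, rb + i)

regs = [0] * 128                # Simple-V extends the GPR file to 128
regs[8:12]  = [1, 2, 3, 4]      # RA "vector" starting at r8
regs[16:20] = [10, 20, 30, 40]  # RB "vector" starting at r16
sv_add(regs, vl=4, rt=24, ra=8, rb=16)
assert regs[24:28] == [11, 22, 33, 44]
```

A real implementation additionally tracks the element steps in SVSTATE so
that the loop is Precise-Interruptible at any element.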
Considerable effort has been expended to ensure that Simple-V is practical
to implement on an extremely wide range of Industry-wide common **Scalar**
micro-architectures: from Finite State Machine (for ultra-low-resource and
Mission-Critical), through In-order single-issue, all the way up to
Great-Big Out-of-Order Superscalar Multi-Issue. The SV Extension Levels
specifically recognise these differing scenarios.

SIMD back-end ALUs, particularly those with element-level predicate masks,
may be exploited to good effect with very little additional complexity to
achieve high throughput, even on a single-issue in-order
microarchitecture. As usually becomes quickly apparent with in-order, its
limitations extend also to when Simple-V is deployed, which is why
Multi-Issue Out-of-Order is the recommended (but not mandatory) Scalar
Micro-architecture. Byte-level write-enable regfiles (like SRAMs) are
strongly recommended, to avoid a Read-Modify-Write cycle.

The only major concern is in the upper SV Extension Levels: the Hazard
Management for the increased number of Scalar Registers, 128 (in current
versions). Given that IBM POWER9/10 has 64 VSX registers, and modern GPUs
have 128, 256 and even 512 registers, this was deemed acceptable.
Strategies do exist in hardware for Hazard Management of such large
numbers of registers, even for Multi-Issue microarchitectures.

# Simple-V Architectural Resources

* No new Interrupt types are required. No modifications to existing Power
  ISA opcodes are required. No new Register Files are required (all
  because Simple-V is a category of Zero-Overhead Looping on Scalar
  instructions)
* GPR, FPR and CR Field register files extend to 128 entries. A future
  version may extend to 256 or beyond[^extend] or also extend
  VSX[^futurevsx]
* 24 bits are needed within the main SVP64 Prefix (equivalent to a 2-bit
  XO)
* Another 24 bits (a second 2-bit XO) are needed for a planned future
  encoding, currently named "SVP64-Single"[^likeext001]
* A third 24 bits (a third 2-bit XO) is strongly recommended to be
  `RESERVED` in case future unforeseen capability is needed (although this
  may alternatively be achieved with a mandatory PCR or MSR bit)
* To hold all Vector Context, four SPRs are needed.
  (Some 32/32-to-64 aliases are advantageous but not critical).
* Five 6-bit XO (A-Form) "Management" instructions are needed. These are
  Scalar 32-bit instructions and *may* be 64-bit-extended in future
  (safely within the SVP64 space: no need for an EXT001 encoding).

**Summary of Simple-V Opcode space**

* 75% of one Major Opcode (equivalent to the rest of EXT017)
* Five 6-bit XO 32-bit operations.

No further opcode space *for Simple-V* is envisaged to be required for at
least the next decade (including if added on VSX).

**Simple-V SPRs**

* **SVSTATE** - Vectorisation State sufficient for Precise-Interrupt
  Context-switching and no adverse latency. It may be considered to be a
  "Sub-PC" and as such absolutely must be treated with the same respect
  and priority as MSR and PC.
* **SVSHAPE0-3** - these are 32-bit and may be grouped in pairs; they
  REMAP (shape) the Vectors[^svshape]
* **SVLR** - again similar to LR for exactly the same purpose: SVSTATE is
  swapped with SVLR by SV-Branch-Conditional for exactly the same reason
  that NIA is swapped with LR

**Vector Management Instructions**

These fit into QTY 5 of 6-bit XO 32-bit encodings (svshape and svshape2
share the same space):

* **setvl** - Cray-style Scalar Vector Length instruction
* **svstep** - used for Vertical-First Mode and for enquiring about
  internal state
* **svremap** - "tags" registers for activating REMAP
* **svshape** - convenience instruction for quickly setting up Matrix,
  DCT, FFT and Parallel Reduction REMAP
* **svshape2** - additional convenience instruction to set up "Offset"
  REMAP (fits within svshape's XO encoding)
* **svindex** - convenience instruction for setting up "Indexed" REMAP.

\newpage{}

# SVP64 24-bit Prefixes

The SVP64 24-bit Prefix (RM) options aim to reduce instruction count and
assembler complexity. These Modes do not interact with SVSTATE per se.
SVSTATE primarily controls the looping (quantity, order); RM influences
the *elements* (the Suffix). There is however some close interaction when
it comes to predication. REMAP is outlined separately.

* **element-width overrides**, which dynamically redefine each SFFS or SFS
  Scalar prefixed instruction to be 8-bit, 16-bit, 32-bit or 64-bit
  operands **without requiring new 8/16/32 instructions.**[^pseudorewrite]
  This results in full BF16 and FP16 opcodes being added to the Power ISA
  **without adding BF16 or FP16 opcodes**, including full conversion
  between all formats.
* **predication**. This is an absolutely essential feature for a 3D GPU
  VPU ISA. CR Fields are available as Predicate Masks, hence the reason
  for their extension to 128. Twin-Predication is also provided: this may
  best be envisaged as back-to-back VGATHER-VSCATTER but is not restricted
  to LD/ST; its use saves on instruction count. Enabling one or other of
  the predicates provides all of the other types of operations found in
  Vector ISAs (VEXTRACT, VINSERT etc), again with no need to actually
  provide explicit such instructions. (A sketch combining predication,
  Saturation and Data-Dependent Fail-First appears after this list.)
* **Saturation**. Applies to **all** LD/ST and Arithmetic and Logical
  operations (without adding explicit saturation ops)
* **Reduction and Prefix-Sum** (Fibonacci Series) Modes, including a
  "Reverse Gear" (running loops backwards).
* **vec2/3/4 "Packing" and "Unpacking"** (similar to VSX `vpack` and
  `vpkss`) accessible in a way that is easier than REMAP, added for the
  same reasons that drove `vpack` and `vpkss` etc. to be added: pixel,
  audio, and 3D data manipulation. With Pack/Unpack being part of SVSTATE
  it can be applied *in-place*, saving register file space (no copy/mv
  needed).
* **Load/Store "fault-first"** speculative behaviour, identical to SVE and
  RVV Fault-first: provides auto-truncation of a speculative sequential
  parallel LD/ST batch, helping solve the "SIMD Considered Harmful"
  stripmining problem from a Memory Access perspective.
* **Data-Dependent Fail-First**: a 100% Deterministic extension of the
  LDST ffirst concept: the first `Rc=1 BO test` failure terminates looping
  and truncates VL to that exact point. Useful for implementing algorithms
  such as `strcpy` in around 14 high-performance Vector instructions; the
  option exists to include or exclude the failing element.
* **Predicate-result**: a strategic mode that effectively turns all and
  any operations into a type of `cmp`. An `Rc=1 BO test` is performed and,
  if failing, that element result is **not** written to the regfile. The
  `Rc=1` Vector of co-results **is** always written (subject to usual
  predication). Termed "predicate-result" because the combination of
  producing then testing a result is as if the test was in a follow-up
  predicated copy/mv operation; it reduces regfile pressure and
  instruction count. Also useful on saturated or other overflowing
  operations: the overflowing elements may be excluded from outputting to
  the regfile then post-analysed outside of critical hot-loops.
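To illustrate how several of these Modes compose at the element level, the
following is a hedged, non-normative Python sketch (function and variable
names are invented; the 8-bit saturation limit and the "non-zero" test are
arbitrary stand-ins for an element-width override and an `Rc=1 BO test`):
predication skips masked-out elements, Saturation clamps results, and
Data-Dependent Fail-First truncates VL at the first failing element.

```
# Illustrative model only, not specification pseudocode.
SAT_MAX = 0xFF   # stand-in for an 8-bit element-width override

def sv_add_sat_ffirst(regs, vl, mask, rt, ra, rb):
    for i in range(vl):
        if not (mask >> i) & 1:      # predicate mask: skip this element
            continue
        result = min(regs[ra + i] + regs[rb + i], SAT_MAX)   # Saturation
        if result == 0:              # stand-in for an Rc=1 BO test
            return i                 # Fail-First: VL truncated here
                                     # (including the failing element is
                                     #  a Mode option; excluded here)
        regs[rt + i] = result
    return vl

regs = [0] * 128
regs[8:12]  = [100, 200, 0, 50]
regs[16:20] = [100, 200, 0, 50]
new_vl = sv_add_sat_ffirst(regs, vl=4, mask=0b1111, rt=24, ra=8, rb=16)
assert regs[24:28] == [200, 255, 0, 0] and new_vl == 2
```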
**RM Modes**

There are five primary categories of instructions in Power ISA, each of
which needed slightly different Modes. For example, saturation and
element-width overrides are meaningless to Condition Register Field
operations, and Reduction is meaningless to LD/ST, but Saturation saves
register file ports in critical hot-loops. Thus the 24 bits may be
suitably adapted to each category.

* Normal - arithmetic and logical including IEEE754 FP
* LD/ST immediate - includes element-strided and unit-strided
* LD/ST indexed
* CR Field ops
* Branch-Conditional - saves on instruction count in 3D parallel if/else

It does have to be pointed out that there is huge pressure on the Mode
bits. There was therefore insufficient room, unlike the way that EXT001
was designed, to provide "identifying bits" *without first partially
decoding the Suffix*.

Some considerable care has been taken to ensure that Decoding may be
performed in a strict forward-pipelined fashion such that, aside from
changes in SVSTATE (necessarily cached and propagated alongside MSR and
PC) and aside from the initial 32/64 length detection (also kept simple),
a Multi-Issue Engine would have no difficulty (performance maximisable).
With the initial partial RM Mode type-identification decode performed
above, the Vector operations may then easily be passed downstream in a
fully forward-progressive pipelined fashion to independent parallel units
for further analysis.

**Vectorised Branch-Conditional**

As mentioned in the introduction this is the one sole instruction group
whose pseudocode differs from its scalar equivalent. However even there
its various Mode bits and options can be set such that in the degenerate
case the behaviour becomes identical to Scalar Branch-Conditional.

The two additional Modes within Vectorised Branch-Conditional, both of
which may be combined, are `CTR-Mode` and `VLI-Test` (aka "Data Fail
First"). CTR Mode extends the way that CTR may be decremented
unconditionally within Scalar Branch-Conditional, and not only makes it
conditional but also interacts with predication. VLI-Test provides the
same option as Data-Dependent Fail-First to Deterministically truncate the
Vector Length at the fail **or success** point.

Boolean Logic rules on sets (treating the Vector of CR Fields to be tested
by `BO` as a set) dictate that the Branch should take place either when
'ALL' tests succeed (or fail) or when 'SOME' tests succeed (or fail).
These options provide the ability to cover the majority of Parallel 3D GPU
Conditions, saving up to **twelve** instructions especially given the
close interaction with CTR in hot-loops.[^parity]

[^parity]: adding a parity (XOR) option was too much. instead a
parallel-reduction on `crxor` may be used in combination with a Scalar
Branch.

Also `SVLR` is introduced, which is a parallel twin of `LR`, and saving
and restoring of LR and SVLR may be deferred until the final decision as
to whether to branch. In this way `sv.bclrl` does not corrupt `LR`.

Vectorised Branch-Conditional, due to its side-effects (e.g. reducing CTR
or truncating VL), has practical uses even if the Branch is deliberately
set to the next instruction (CIA+8). For example it may be used to reduce
CTR by the number of bits set in a GPR, if that GPR is given as the
predicate mask `sv.bc/pm=r3`.
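The ALL/SOME set semantics can be summarised in a couple of lines of
non-normative Python (purely illustrative; `sv_branch_taken` and
`cr_tests` are invented names): the Vector of per-element CR Field tests
is reduced to a single branch decision.

```
# Illustrative only: reduce a Vector of per-element BO test results
# to one branch decision.  "ALL" requires every test to pass,
# "SOME" requires at least one.
def sv_branch_taken(cr_tests, mode="ALL"):
    return all(cr_tests) if mode == "ALL" else any(cr_tests)

tests = [True, True, False]        # e.g. three elements' BO test results
assert sv_branch_taken(tests, "ALL") is False
assert sv_branch_taken(tests, "SOME") is True
```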
# LD/ST RM Modes

Traditional Vector ISAs have vastly more (and more complex) addressing
modes than Scalar ISAs: unit strided, element strided, Indexed, Structure
Packing. All of these had to be jammed in on top of existing Scalar
instructions **without modifying or adding new Scalar instructions**. A
small conceptual "cheat" was therefore needed. The Immediate (D) is in
some Modes multiplied by the element index, which gives us
element-strided. For unit-strided, the width of the operation (`ld`,
8 bytes) is multiplied by the element index and *substituted* for "D" when
the immediate, D, is zero. Modifications to support this "cheat" on top of
pre-existing Scalar HDL (and Simulators) have both turned out to be
minimal.[^mul] Also added was the option to perform signed or unsigned
Effective Address calculation, which comes into play only on LD/ST
Indexed, when elwidth overrides are used. Another quirk: `RA` is never
allowed to have its width altered: it remains 64-bit, as it is the Base
Address.

One confusing thing is the unfortunate naming of LD/ST Indexed and REMAP
Indexed: some care is taken in the spec to discern the two. LD/ST Indexed
is Scalar `EA=RA+RB` (where **either** RA or RB may be marked as
Vectorised), and the Vector of RA (or RB) is read in the usual linear
sequential order. REMAP Indexed affects the **order** in which the Vector
of RA (or RB) is accessed, according to a schedule determined by *another*
vector of offsets in the register file. Effectively this combines VSX
`vperm` back-to-back with LD/ST operations *in the calculation of each
Effective Address* in one instruction.

For DCT and FFT, normally it is very expensive to perform the
"bit-inversion" needed for address calculation and/or reordering of
elements. DCT in particular needs both bit-inversion *and Gray-Coding*
offsets (a complexity that often "justifies" full assembler
loop-unrolling). DCT/FFT REMAP **automatically** performs the required
offset adjustment to get data loaded and stored in the required order.
Matrix REMAP can likewise perform up to 3 Dimensions of reordering (on
both Immediate and Indexed), and when combined with vec2/3/4 the
reordering can even go as far as four dimensions (four nested fixed-size
loops).

Twin Predication is worth a special mention. Many Vector ISAs have special
LD/ST `VCOMPRESS` and `VREDUCE` instructions, which sequentially skip
elements based on predicate mask bits. They also add special `VINSERT` and
`VEXTRACT` Register-based instructions to compensate for lack of
single-element LD/ST (where in Simple-V you just use Scalar LD/ST). Also
Broadcasting (`VSPLAT`) is either added to LDST or as Register-based.
*All of the above modes are covered by Twin-Predication*. In particular, a
special predicate mode `1<<r3` selects the single element whose index is
given by GPR `r3`.
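The addressing "cheat" described above can be captured in a short,
non-normative Python sketch (invented function name, illustrative only):
when the immediate D is non-zero it is scaled by the element index
(element-strided); when D is zero the operation width is substituted
instead (unit-strided).

```
# Illustrative model of the Effective Address "cheat" for Vectorised
# LD/ST-immediate.  No new Scalar instruction is needed: only the
# interpretation of D changes per element.
def effective_addrs(ra_base, d_imm, op_width, vl):
    addrs = []
    for i in range(vl):
        stride = d_imm if d_imm != 0 else op_width
        addrs.append(ra_base + stride * i)
    return addrs

# element-strided (D=16) vs unit-strided (D=0, ld is 8 bytes wide)
print(effective_addrs(0x1000, 16, 8, 4))   # [4096, 4112, 4128, 4144]
print(effective_addrs(0x1000,  0, 8, 4))   # [4096, 4104, 4112, 4120]
```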
## LD/ST-Multi

Context-switching saving and restoring of registers on the stack often
requires explicit loop-unrolling to achieve effectively. In SVP64 it is
possible to use a Predicate Mask to "compact" or "expand" a swathe of
desired registers, dynamically. Known as "VCOMPRESS" and "VEXPAND",
runtime-configurable LD/ST-Multi is achievable with 2 instructions.

```
# load 64 registers off the stack, in-order, skipping unneeded ones
# by using CR0-CR63's "EQ" bits to select only those needed.
setvli 64
sv.ld/sm=EQ *rt,0(ra)
```

## Twin-Predication, re-entrant

This example demonstrates two key concepts: firstly Twin-Predication
(separate source predicate mask from destination predicate mask) and
secondly that sufficient state is stored within the Vector Context SPR,
SVSTATE, for full re-entrancy on a Context Switch or function call *even
if in the middle of executing a loop*. It also demonstrates that it is
permissible for a programmer to write **directly** to the SVSTATE SPR, and
still expect Deterministic Behaviour. It's not exactly recommended
(performance may be impacted by direct SVSTATE access), but it is not
prohibited either.

```
292 # checks that we are able to resume in the middle of a VL loop,
293 # after an interrupt, or after the user has updated src/dst step
294 # let's assume the user has prepared src/dst step before running this
295 # vector instruction
296 # test_intpred_reentrant
297 # reg num        0  1  2  3  4  5  6  7  8  9 10 11 12
298 #                                  srcstep=1    v
299 # src  r3=0b0101                             Y  N  Y  N
300 #                                            :        |
301 #                         + - - - - - - - - +         |
302 #                         :     +----------------------+
303 #                         :     |
304 # dest ~r3=0b1010      N  Y  N  Y
305 #                               ^ dststep=2
306
307 sv.extsb/sm=r3/dm=~r3 *5, *9
```

## Matrix Multiply

Matrix Multiply of any size (non-power-2) up to a total of 127 operations
is achievable with only three instructions. Normally in any other SIMD ISA
at least one source requires Transposition and often massive rolling
repetition of data is required. These 3 instructions may be used as the
"inner triple-loop kernel" of the usual 6-loop Massive Matrix Multiply.

```
28 # test_sv_remap1      5x4 by 4x3 matrix multiply
29 svshape 5, 4, 3, 0, 0
30 svremap 31, 1, 2, 3, 0, 0, 0
31 sv.fmadds *0, *8, *16, *0
```

## Parallel Reduction

Parallel (Horizontal) Reduction is often deeply problematic in SIMD and
Vector ISAs. Parallel Reduction is Fully Deterministic in Simple-V and
thus may even usefully be deployed on non-associative and non-commutative
operations.

```
75 # test_sv_remap2
76 svshape 7, 0, 0, 7, 0
77 svremap 31, 1, 0, 0, 0, 0, 0 # different order
78 sv.subf *0, *8, *16
```

\newpage{}

## DCT

DCT has dozens of uses in Audio-Visual processing and CODECs. A full
8-wide in-place triple-loop Inverse DCT may be achieved in 8 instructions.
Expanding this to 16-wide is a matter of setting `svshape 16` **and the
same instructions used**. Lee Composition may be deployed to construct
non-power-two DCTs. The cosine table may be computed (once) with 18 Vector
instructions (one of them `fcos`).

```
1014 # test_sv_remap_fpmadds_ldbrev_idct_8_mode_4
1015 # LOAD bit-reversed with half-swap
1016 svshape 8, 1, 1, 14, 0
1017 svremap 1, 0, 0, 0, 0, 0, 0
1018 sv.lfs/els *0, 4(1)
1019 # Outer butterfly, iterative sum
1020 svremap 31, 0, 1, 2, 1, 0, 1
1021 svshape 8, 1, 1, 11, 0
1022 sv.fadds *0, *0, *0
1023 # Inner butterfly, twin +/- MUL-ADD-SUB
1024 svshape 8, 1, 1, 10, 0
1025 sv.ffmadds *0, *0, *0, *8
```
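For clarity, the "bit-inversion" used by DCT/FFT REMAP is plain
bit-reversal of the element index. The following non-normative Python
sketch (illustrative only: the actual REMAP schedules additionally handle
half-swaps and Gray-coding) shows pure bit-reversal of an 8-element index.

```
# Illustrative only: bit-reverse a 3-bit element index, as used for
# FFT/DCT data reordering.  REMAP generates such schedules in hardware
# so that no explicit shuffling instructions are needed.
def bit_reverse(i, nbits):
    r = 0
    for _ in range(nbits):
        r = (r << 1) | (i & 1)
        i >>= 1
    return r

order = [bit_reverse(i, 3) for i in range(8)]
print(order)   # [0, 4, 2, 6, 1, 5, 3, 7]
```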
## 3D GPU style "Branch Conditional"

(*Note: Specification is ready, Simulator still under development of full
specification capabilities*)

This example demonstrates a 2-long Vector Branch-Conditional only
succeeding if *all* elements in the Vector are successful. This avoids the
need for additional instructions that would need to perform a Parallel
Reduction of a Vector of Condition Register tests down to a single value,
on which a Scalar Branch-Conditional could then be performed.

Full Rationale at

```
80 # test_sv_branch_cond_all
81 for i in [7, 8, 9]:
83    addi 1, 0, i+1  # set r1 to i+1
84    addi 2, 0, i    # set r2 to i
85    cmpi cr0, 1, 1, 8   # compare r1 with 8 and store to cr0
86    cmpi cr1, 1, 2, 8   # compare r2 with 8 and store to cr1
87    sv.bc/all 12, *1, 0xc  # bgt 0xc - branch if BOTH
88                           # r1 AND r2 greater 8 to the nop below
89    addi 3, 0, 0x1234   # if tests fail this shouldn't execute
90    or 0, 0, 0          # branch target
```

## Big-Integer Math

Remarkably, `sv.adde` is inherently a big-integer Vector Add, using `CA`
chaining between **Scalar** operations. Using Vector LD/ST and recalling
that the first and last `CA` may be chained in and out of an entire
**Vector**, unlimited-length arithmetic is possible.

```
26 # test_sv_bigint_add
32
33    r3/r2: 0x0000_0000_0000_0001 0xffff_ffff_ffff_ffff +
34    r5/r4: 0x8000_0000_0000_0000 0x0000_0000_0000_0001 =
35    r1/r0: 0x8000_0000_0000_0002 0x0000_0000_0000_0000
36
37 sv.adde *0, *2, *4
```

A 128/64-bit shift may be used as a Vector shift by a Scalar amount, by
merging two 64-bit consecutive registers in succession.

```
62 # test_sv_bigint_scalar_shiftright(self):
64
65       r3                    r2                    r1                  r4
66    0x0000_0000_0000_0002 0x8000_8000_8000_8001 0xffff_ffff_ffff_ffff >> 4
67    0x0000_0000_0000_0002 0x2800_0800_0800_0800 0x1fff_ffff_ffff_ffff
68
69 sv.dsrd *0,*1,4,1
```

Additional 128/64 Mul and Div/Mod instructions may similarly be exploited
to perform roll-over in arbitrary-length arithmetic: effectively they use
one of the two 64-bit output registers as a form of "64-bit Carry In-Out".

All of these big-integer instructions are Scalar instructions standing on
their own merit and may be utilised even in a Scalar environment to
improve performance. When used with Simple-V they may also be used to
improve performance and also greatly simplify unlimited-length biginteger
algorithms.
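The carry-chaining principle behind `sv.adde` can be modelled in a few
lines of non-normative Python (illustrative only; `sv_adde` is an invented
model, not the instruction's formal pseudocode): each 64-bit limb add
takes CA in and produces CA out, exactly as Scalar `adde` does, and the
Vector loop simply walks the limbs.

```
# Illustrative model of sv.adde: a Vector of Scalar adde operations,
# with CA chained from one 64-bit limb to the next.
MASK64 = (1 << 64) - 1

def sv_adde(a_limbs, b_limbs, ca=0):
    """Add two big integers held as lists of 64-bit limbs, LSB first."""
    out = []
    for a, b in zip(a_limbs, b_limbs):
        s = a + b + ca
        out.append(s & MASK64)
        ca = s >> 64          # CA out, chained into the next element
    return out, ca

# matches the test_sv_bigint_add example above (r3/r2 + r5/r4 -> r1/r0)
a = [0xffff_ffff_ffff_ffff, 0x0000_0000_0000_0001]   # LSB first
b = [0x0000_0000_0000_0001, 0x8000_0000_0000_0000]
total, ca_out = sv_adde(a, b)
assert total == [0x0000_0000_0000_0000, 0x8000_0000_0000_0002]
assert ca_out == 0
```

The final `ca_out` corresponds to the "last CA" that may be chained
onwards into a subsequent Vector, giving unlimited-length arithmetic.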
[[!tag opf_rfc]]

[^zolc]: first introduced in DSPs, Zero-Overhead Loops are astoundingly
effective in reducing the total number of instructions executed or needed.
[ZOLC](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.301.4646&rep=rep1&type=pdf)
reduces instructions by **25 to 80 percent**.

[^msr]: an MSR bit or bits, conceptually equivalent to `MSR.SF` and added
for the same reasons, would suffice perfectly.

[^extend]: Prefix opcode space (or MSR bits) **must** be reserved in
advance to do so, in order to avoid the catastrophic
binary-incompatibility mistake made by RISC-V RVV and ARM SVE/2

[^likeext001]: SVP64-Single is remarkably similar to the "bit 1" of EXT001
being set to indicate that the 64 bits are to be allocated in full to a
new encoding, but in fact SVP64-Single still embeds v3.0 Scalar
operations.

[^pseudorewrite]: elwidth overrides do however mean that all SFS / SFFS
pseudocode will need rewriting to be in terms of XLEN. This has the
indirect side-effect of automatically making a 32-bit Scalar Power ISA
Specification possible, as well as a future 128-bit one (cross-reference:
RISC-V RV32 and RV128)

[^only2]: reminder that this proposal only needs 75% of two POs for Scalar
instructions. The rest of EXT200-263 is for general use.

[^ext001]: Recall that EXT100 to EXT163 is for Public v3.1 64-bit-augmented
Operations prefixed by EXT001, for which, from Section 1.6.3, bit 6 is set
to 1. This concept is where the above scheme originated. Section 1.6.3
uses the term "defined word" to refer to pre-existing EXT000-EXT063 32-bit
instructions so prefixed to create the new numbering EXT100-EXT163,
respectively

[^futurevsx]: A future version or other Stakeholder *may* wish to drop
Simple-V onto VSX: this would be a separate RFC

[^vsx256]: imagine a hypothetical future VSX-256 using the exact same
instructions as VSX. The binary incompatibility introduced would
catastrophically **and retroactively** damage existing IBM POWER8,9,10
hardware's reputation and that of Power ISA overall.

[^autovec]: Compiler auto-vectorisation for best exploitation of SIMD and
Vector ISAs on Scalar programming languages (c, c++) is an Industry-wide
known-hard decades-long problem. Cross-reference the number of
hand-optimised assembler algorithms.

[^hphint]: intended for use when the compiler has determined the extent of
Memory or register aliases in loops: `a[i] += a[i+4]` would necessitate a
Vertical-First hphint of 4

[^svshape]: although SVSHAPE0-3 should, realistically, be regarded as high
a priority as SVSTATE, and given corresponding SVSRR and SVLR equivalents,
it was felt that having to context-switch **five** SPRs on Interrupts and
function calls was too much.

[^whoops]: two efforts were made to mix non-uniform encodings into
Simple-V space: one deliberate, to see how it would go, and one
accidental. They both went extremely badly, the deliberate one costing
over two months to add and then remove.

[^mul]: Setting this "multiplier" to 1 clearly leaves pre-existing Scalar
behaviour completely intact as a degenerate case.

[^ldstcisc]: At least the CISC "auto-increment" modes of the CDC 6600 and
Motorola 68000 are not present! Although these would be fun to introduce,
they unfortunately make for 3-in 3-out register profiles, all 64-bit,
which explains why the 6600 and 68000 had separate special dedicated
address regfiles.