# [ieee754fpu.git] / src / add / singlepipe.py
""" Pipeline and BufferedPipeline implementation, conforming to the same API.
    For multi-input and multi-output variants, see multipipe.

    eq:
    --

    a strategically very important function that is identical in function
    to nmigen's Signal.eq function, except it may take objects, or a list
    of objects, or a tuple of objects, and where objects may also be
    Records.

    Stage API:
    ---------

    stage requires compliance with a strict API that may be
    implemented in several ways, including as a static class.
    the methods of a stage instance must be as follows:

    * ispec() - Input data format specification
                returns an object or a list or tuple of objects, or
                a Record, each object having an "eq" function which
                takes responsibility for copying by assignment all
                sub-objects
    * ospec() - Output data format specification
                requirements as for ispec
    * process(i) - Processes an ispec-formatted object
                returns a combinatorial block of a result that
                may be assigned to the output, by way of the "eq"
                function
    * setup(m, i) - Optional function for setting up submodules
                may be used for more complex stages, to link
                the input (i) to submodules. must take responsibility
                for adding those submodules to the module (m).
                the submodules must be combinatorial blocks and
                must have their inputs and outputs linked combinatorially.

    Both StageCls (for use with non-static classes) and Stage (for use
    by static classes) are abstract classes from which, for convenience
    and as a courtesy to other developers, anything conforming to the
    Stage API may *choose* to derive.

    StageChain:
    ----------

    A useful combinatorial wrapper around stages that chains them together
    and then presents a Stage-API-conformant interface. By presenting
    the same API as the stages it wraps, it can clearly be used recursively.

    RecordBasedStage:
    ----------------

    A convenience class that takes an input shape, output shape, a
    "processing" function and an optional "setup" function. Honestly
    though, it's not much more effort to just... create a class
    that returns a couple of Records (see ExampleAddRecordStage in
    examples).

    PassThroughStage:
    ----------------

    A convenience class that takes a single specification function as a
    parameter, which is called to create identical input and output specs.
    It has a process() function that simply returns its input.

    Instances of this class are completely redundant if handed to
    StageChain, however when passed to UnbufferedPipeline they
    can be used to introduce a single clock delay.

    ControlBase:
    -----------

    The base class for pipelines. Contains previous and next ready/valid/data.
    Also has an extremely useful "connect" function that can be used to
    connect a chain of pipelines and present the exact same prev/next
    ready/valid/data API.

    UnbufferedPipeline:
    ------------------

    A simple stalling clock-synchronised pipeline that has no buffering
    (unlike BufferedPipeline). Data flows on *every* clock cycle when
    the conditions are right (this is nominally when the input is valid
    and the output is ready).

    A stall anywhere along the line will result in a stall back-propagating
    down the entire chain. The BufferedPipeline by contrast will buffer
    incoming data, allowing previous stages one clock cycle's grace before
    also having to stall.

    An advantage of the UnbufferedPipeline over the Buffered one is
    that the amount of logic needed (number of gates) is greatly
    reduced (no second set of buffers, basically).

    The disadvantage of the UnbufferedPipeline is that the valid/ready
    logic, if chained together, is *combinatorial*, resulting in
    progressively larger gate delay.

    RegisterPipeline:
    ----------------

    A convenience class: because UnbufferedPipeline introduces a single
    clock delay, using it with a PassThroughStage results in a pipeline
    stage that simply delays its (unmodified) input by one clock cycle.

    BufferedPipeline:
    ----------------

    nmigen implementation of buffered pipeline stage, based on zipcpu:
    https://zipcpu.com/blog/2017/08/14/strategies-for-pipelining.html

    this module requires quite a bit of thought to understand how it works
    (and why it is needed in the first place). reading the above is
    *strongly* recommended.

    unlike john dawson's IEEE754 FPU STB/ACK signalling, which requires
    the STB / ACK signals to raise and lower (on separate clocks) before
    data may proceed (thus only allowing one piece of data to proceed
    on *ALTERNATE* cycles), the signalling here is a true pipeline
    where data will flow on *every* clock when the conditions are right.

    input acceptance conditions are when:
        * incoming previous-stage strobe (p.i_valid) is HIGH
        * outgoing previous-stage ready (p.o_ready) is HIGH

    output transmission conditions are when:
        * outgoing next-stage strobe (n.o_valid) is HIGH
        * incoming next-stage ready (n.i_ready) is HIGH

    the tricky bit is when the input has valid data and the output is not
    ready to accept it. if it wasn't for the clock synchronisation, it
    would be possible to tell the input "hey don't send that data, we're
    not ready". unfortunately, it's not possible to "change the past":
    the previous stage *has no choice* but to pass on its data.

    therefore, the incoming data *must* be accepted - and stored: that
    is the responsibility / contract that this stage *must* accept.
    on the same clock, it's possible to tell the input that it must
    not send any more data. this is the "stall" condition.

    we now effectively have *two* possible pieces of data to "choose" from:
    the buffered data, and the incoming data. the decision as to which
    to process and output is based on whether we are in "stall" or not.
    i.e. when the next stage is no longer ready, the output comes from
    the buffer if a stall had previously occurred, otherwise it comes
    direct from processing the input.

    this allows us to respect a synchronous "travelling STB" with what
    dan calls a "buffered handshake".

    it's quite a complex state machine!
"""

from nmigen import Signal, Cat, Const, Mux, Module, Value
from nmigen.cli import verilog, rtlil
from nmigen.hdl.ast import ArrayProxy
from nmigen.hdl.rec import Record, Layout

from abc import ABCMeta, abstractmethod
from collections.abc import Sequence


class PrevControl:
    """ contains signals that come *from* the previous stage (both in and out)
        * i_valid: previous stage indicating all incoming data is valid.
                   may be a multi-bit signal, where all bits are required
                   to be asserted to indicate "valid".
        * o_ready: output to previous stage indicating readiness to accept data
        * i_data : an input - added by the user of this class
    """

    def __init__(self, i_width=1):
        self.i_valid = Signal(i_width, name="p_i_valid") # prev >>in  self
        self.o_ready = Signal(name="p_o_ready")          # prev <<out self
        self.i_data = None # XXX MUST BE ADDED BY USER

    def _connect_in(self, prev):
        """ internal helper function to connect stage to an input source.
            do not use to connect stage-to-stage!
        """
        return [self.i_valid.eq(prev.i_valid),
                prev.o_ready.eq(self.o_ready),
                eq(self.i_data, prev.i_data),
                ]

    def i_valid_logic(self):
        vlen = len(self.i_valid)
        if vlen > 1: # multi-bit case: valid only when i_valid is all 1s
            all1s = Const(-1, (vlen, False))
            return self.i_valid == all1s
        # single-bit i_valid case
        return self.i_valid

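The multi-bit "all 1s" rule above can be modelled in plain Python (no nmigen; `i_valid_logic_model` is a name invented for this sketch, operating on ints rather than Signals):

```python
def i_valid_logic_model(i_valid, width):
    # mirror of PrevControl.i_valid_logic(): with a multi-bit i_valid,
    # "valid" means every one of the `width` bits is asserted.
    all1s = (1 << width) - 1
    return (i_valid & all1s) == all1s
```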

class NextControl:
    """ contains the signals that go *to* the next stage (both in and out)
        * o_valid: output indicating to next stage that data is valid
        * i_ready: input from next stage indicating that it can accept data
        * o_data : an output - added by the user of this class
    """
    def __init__(self):
        self.o_valid = Signal(name="n_o_valid") # self out>> next
        self.i_ready = Signal(name="n_i_ready") # self <<in  next
        self.o_data = None # XXX MUST BE ADDED BY USER

    def connect_to_next(self, nxt):
        """ helper function to connect to the next stage data/valid/ready.
            data/valid is passed *TO* nxt, and ready comes *IN* from nxt.
            use this when connecting stage-to-stage
        """
        return [nxt.i_valid.eq(self.o_valid),
                self.i_ready.eq(nxt.o_ready),
                eq(nxt.i_data, self.o_data),
                ]

    def _connect_out(self, nxt):
        """ internal helper function to connect stage to an output source.
            do not use to connect stage-to-stage!
        """
        return [nxt.o_valid.eq(self.o_valid),
                self.i_ready.eq(nxt.i_ready),
                eq(nxt.o_data, self.o_data),
                ]


def eq(o, i):
    """ makes signals equal: a helper routine which identifies if it is being
        passed a list (or tuple) of objects, or signals, or Records, and calls
        the objects' eq function.

        complex objects (classes) can be used: they must follow the
        convention of having an eq member function, which takes the
        responsibility of further calling eq and returning a list of
        eq assignments

        Record is a special (unusual, recursive) case, where the input may be
        specified as a dictionary (which may contain further dictionaries,
        recursively), where the field names of the dictionary must match
        the Record's field spec. Alternatively, an object with the same
        member names as the Record may be assigned: it does not have to
        *be* a Record.
    """
    if not isinstance(o, Sequence):
        o, i = [o], [i]
    res = []
    for (ao, ai) in zip(o, i):
        #print ("eq", ao, ai)
        if isinstance(ao, Record):
            for idx, (field_name, field_shape, _) in enumerate(ao.layout):
                if isinstance(field_shape, Layout):
                    val = ai.fields
                else:
                    val = ai
                if hasattr(val, field_name): # check for attribute
                    val = getattr(val, field_name)
                else:
                    val = val[field_name] # dictionary-style specification
                rres = eq(ao.fields[field_name], val)
                res += rres
        elif isinstance(ao, ArrayProxy) and not isinstance(ai, Value):
            # ArrayProxy workaround: assign to each of ai's ports individually
            for p in ai.ports():
                op = getattr(ao, p.name)
                #print (op, p, p.name)
                rres = op.eq(p)
                if not isinstance(rres, Sequence):
                    rres = [rres]
                res += rres
        else:
            rres = ao.eq(ai)
            if not isinstance(rres, Sequence):
                rres = [rres]
            res += rres
    return res

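The recursive Record behaviour described in the docstring can be sketched in plain Python (dicts stand in for Records and Layouts; `eq_model` and its flat list of (name, value) "assignments" are inventions of this sketch, not the nmigen API):

```python
def eq_model(o, i):
    # sketch of eq()'s Record handling: walk the target's fields; for
    # each one take the source's matching attribute if present, else
    # index it dictionary-style, recursing into nested "Records".
    res = []
    for name, field in o.items():
        val = getattr(i, name) if hasattr(i, name) else i[name]
        if isinstance(field, dict):  # nested Record: recurse
            res += eq_model(field, val)
        else:
            res.append((name, val))  # leaf "assignment"
    return res
```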

class StageCls(metaclass=ABCMeta):
    """ Class-based "Stage" API. requires instantiation (after derivation)

        see "Stage API" above. Note: python does *not* require derivation
        from this class. All that is required is that the pipelines *have*
        the functions listed in this class. Derivation from this class
        is therefore merely a "courtesy" to maintainers.
    """
    @abstractmethod
    def ispec(self): pass       # REQUIRED
    @abstractmethod
    def ospec(self): pass       # REQUIRED
    #@abstractmethod
    #def setup(self, m, i): pass # OPTIONAL
    @abstractmethod
    def process(self, i): pass  # REQUIRED


class Stage(metaclass=ABCMeta):
    """ Static "Stage" API. does not require instantiation (after derivation)

        see "Stage API" above. Note: python does *not* require derivation
        from this class. All that is required is that the pipelines *have*
        the functions listed in this class. Derivation from this class
        is therefore merely a "courtesy" to maintainers.
    """
    @staticmethod
    @abstractmethod
    def ispec(): pass

    @staticmethod
    @abstractmethod
    def ospec(): pass

    #@staticmethod
    #@abstractmethod
    #def setup(m, i): pass

    @staticmethod
    @abstractmethod
    def process(i): pass


class RecordBasedStage(Stage):
    """ convenience class which provides a Records-based layout.
        honestly it's a lot easier just to create a direct Records-based
        class (see ExampleAddRecordStage)
    """
    def __init__(self, in_shape, out_shape, processfn, setupfn=None):
        self.in_shape = in_shape
        self.out_shape = out_shape
        self.__process = processfn
        self.__setup = setupfn
    def ispec(self): return Record(self.in_shape)
    def ospec(self): return Record(self.out_shape)
    def process(self, i): return self.__process(i)
    def setup(self, m, i): return self.__setup(m, i)


class StageChain(StageCls):
    """ pass in a list of stages, and they will automatically be
        chained together via their input and output specs into a
        combinatorial chain.

        the end result basically conforms to the exact same Stage API.

        * input to this class will be the input of the first stage
        * output of first stage goes into input of second
        * output of second goes into input into third (etc. etc.)
        * the output of this class will be the output of the last stage
    """
    def __init__(self, chain):
        self.chain = chain

    def ispec(self):
        return self.chain[0].ispec()

    def ospec(self):
        return self.chain[-1].ospec()

    def setup(self, m, i):
        for (idx, c) in enumerate(self.chain):
            if hasattr(c, "setup"):
                c.setup(m, i)               # stage may have some module stuff
            o = self.chain[idx].ospec()     # only the last assignment survives
            m.d.comb += eq(o, c.process(i)) # process input into "o"
            if idx != len(self.chain)-1:
                ni = self.chain[idx+1].ispec() # becomes new input on next loop
                m.d.comb += eq(ni, o)          # assign output to next input
                i = ni
        self.o = o # last loop is the output

    def process(self, i):
        return self.o # conform to Stage API: return last-loop output

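Reduced to plain functions (no Modules, no specs), the chaining performed by setup() amounts to straightforward composition; `stage_chain_model` is a hypothetical name for this sketch:

```python
def stage_chain_model(chain, i):
    # each stage's process() output becomes the next stage's input;
    # the chain's output is whatever the final stage produced.
    for process in chain:
        i = process(i)
    return i
```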

class ControlBase:
    """ Common functions for Pipeline API
    """
    def __init__(self, in_multi=None):
        """ Base class containing ready/valid/data to previous and next stages

            * p: contains ready/valid to the previous stage
            * n: contains ready/valid to the next stage

            User must also:
            * add i_data member to PrevControl (p) and
            * add o_data member to NextControl (n)
        """

        # set up input and output IO ACK (prev/next ready/valid)
        self.p = PrevControl(in_multi)
        self.n = NextControl()

    def connect_to_next(self, nxt):
        """ helper function to connect to the next stage data/valid/ready.
        """
        return self.n.connect_to_next(nxt.p)

    def _connect_in(self, prev):
        """ internal helper function to connect stage to an input source.
            do not use to connect stage-to-stage!
        """
        return self.p._connect_in(prev.p)

    def _connect_out(self, nxt):
        """ internal helper function to connect stage to an output source.
            do not use to connect stage-to-stage!
        """
        return self.n._connect_out(nxt.n)

    def connect(self, m, pipechain):
        """ connects a chain (list) of Pipeline instances together and
            links them to this ControlBase instance:

                      in <----> self <---> out
                               |    ^
                               v    |
                        [pipe1, pipe2, pipe3, pipe4]
                         |  ^  |  ^  |     ^
                         v  |  v  |  v     |
                       out---in out--in out---in

            Also takes care of allocating i_data/o_data, by looking up
            the data spec for each end of the pipechain. i.e. It is NOT
            necessary to allocate self.p.i_data or self.n.o_data manually:
            this is handled AUTOMATICALLY, here.

            Basically this function is the direct equivalent of StageChain,
            except that unlike StageChain, the Pipeline logic is followed.

            Just as StageChain presents an object that conforms to the
            Stage API from a list of objects that also conform to the
            Stage API, an object that calls this Pipeline connect function
            has the exact same pipeline API as the list of pipeline objects
            it is called with.

            Thus it becomes possible to build up larger chains recursively.
            More complex chains (multi-input, multi-output) will have to be
            done manually.
        """
        eqs = [] # collated list of assignment statements

        # connect inter-chain
        for i in range(len(pipechain)-1):
            pipe1 = pipechain[i]
            pipe2 = pipechain[i+1]
            eqs += pipe1.connect_to_next(pipe2)

        # connect front of chain to ourselves
        front = pipechain[0]
        self.p.i_data = front.stage.ispec()
        eqs += front._connect_in(self)

        # connect end of chain to ourselves
        end = pipechain[-1]
        self.n.o_data = end.stage.ospec()
        eqs += end._connect_out(self)

        # activate the assignments
        m.d.comb += eqs

    def set_input(self, i):
        """ helper function to set the input data
        """
        return eq(self.p.i_data, i)

    def ports(self):
        return [self.p.i_valid, self.n.i_ready,
                self.n.o_valid, self.p.o_ready,
                self.p.i_data, self.n.o_data # XXX need flattening!
               ]

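The wiring order performed by connect() can be sketched in plain Python, with pipes reduced to labels (`connect_model` and the tuples it returns are inventions of this sketch, standing in for the lists of nmigen assignments):

```python
def connect_model(pipechain):
    # mirror of ControlBase.connect(): inter-chain links are made
    # pairwise, then the enclosing object ("self") is wired to the
    # front and the end of the chain.
    eqs = []
    for k in range(len(pipechain) - 1):
        eqs.append(("next", pipechain[k], pipechain[k + 1]))
    eqs.append(("in", "self", pipechain[0]))    # front._connect_in(self)
    eqs.append(("out", pipechain[-1], "self"))  # end._connect_out(self)
    return eqs
```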

class BufferedPipeline(ControlBase):
    """ buffered pipeline stage. data and strobe signals travel in sync.
        if ever the input is valid and the output is not ready, processed
        data is shunted into a temporary register.

        Argument: stage. see Stage API above

        stage-1   p.i_valid >>in   stage   n.o_valid out>>   stage+1
        stage-1   p.o_ready <<out  stage   n.i_ready <<in    stage+1
        stage-1   p.i_data  >>in   stage   n.o_data  out>>   stage+1
                              |             |
                            process --->----^
                              |             |
                              +-- r_data ->-+

        input data p.i_data is read (only), is processed and goes into an
        intermediate result store [process()]. this is updated combinatorially.

        in a non-stall condition, the intermediate result will go into the
        output (update_output). however if ever there is a stall, it goes
        into r_data instead [update_buffer()].

        when the non-stall condition is released, r_data is the first
        to be transferred to the output [flush_buffer()], and the stall
        condition cleared.

        on the next cycle (as long as stall is not raised again) the
        input may begin to be processed and transferred directly to output.
    """
    def __init__(self, stage):
        ControlBase.__init__(self)
        self.stage = stage

        # set up the input and output data
        self.p.i_data = stage.ispec() # input type
        self.n.o_data = stage.ospec() # output type

    def elaborate(self, platform):
        m = Module()

        result = self.stage.ospec()
        r_data = self.stage.ospec()
        if hasattr(self.stage, "setup"):
            self.stage.setup(m, self.p.i_data)

        # establish some combinatorial temporaries
        o_n_validn = Signal(reset_less=True)
        i_p_valid_o_p_ready = Signal(reset_less=True)
        p_i_valid = Signal(reset_less=True)
        m.d.comb += [p_i_valid.eq(self.p.i_valid_logic()),
                     o_n_validn.eq(~self.n.o_valid),
                     i_p_valid_o_p_ready.eq(p_i_valid & self.p.o_ready),
                    ]

        # store result of processing in combinatorial temporary
        m.d.comb += eq(result, self.stage.process(self.p.i_data))

        # if not in stall condition, update the temporary register
        with m.If(self.p.o_ready): # not stalled
            m.d.sync += eq(r_data, result) # update buffer

        with m.If(self.n.i_ready): # next stage is ready
            with m.If(self.p.o_ready): # not stalled
                # nothing in buffer: send (processed) input direct to output
                m.d.sync += [self.n.o_valid.eq(p_i_valid),
                             eq(self.n.o_data, result), # update output
                            ]
            with m.Else(): # p.o_ready is false, and something is in buffer.
                # Flush the [already processed] buffer to the output port.
                m.d.sync += [self.n.o_valid.eq(1),      # output now valid
                             eq(self.n.o_data, r_data), # flush buffer
                             self.p.o_ready.eq(1),      # clear stall condition
                            ]
                # ignore input, since p.o_ready is also false.

        # (n.i_ready) is false here: next stage is *not* ready
        with m.Elif(o_n_validn): # output register is empty
            m.d.sync += [self.n.o_valid.eq(p_i_valid),
                         self.p.o_ready.eq(1), # Keep the buffer empty
                         eq(self.n.o_data, result), # set output data
                        ]

        # (n.i_ready) false and (n.o_valid) true:
        with m.Elif(i_p_valid_o_p_ready):
            # input is valid and we are not yet stalled: accept the input
            # (into the buffer) and stall if the output register is full
            m.d.sync += self.p.o_ready.eq(~(p_i_valid & self.n.o_valid))

        return m

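The state machine above can be modelled cycle-by-cycle in plain Python (no nmigen; `BufferedPipeModel` is an invention of this sketch, with process() as an ordinary function and registers as attributes). It shows data flowing on every clock, and the one-cycle grace a stall gives the previous stage:

```python
class BufferedPipeModel:
    # cycle-accurate sketch of BufferedPipeline.elaborate(); each
    # clock() call is one clock edge, computed from the *old* register
    # values exactly as m.d.sync assignments are.
    def __init__(self, process):
        self.process = process
        self.o_valid = False  # n.o_valid register
        self.o_data = None    # n.o_data register
        self.r_data = None    # buffer register
        self.o_ready = True   # p.o_ready register (True = not stalled)

    def clock(self, i_valid, i_data, n_i_ready):
        result = self.process(i_data)  # combinatorial process()
        nxt = dict(o_valid=self.o_valid, o_data=self.o_data,
                   r_data=self.r_data, o_ready=self.o_ready)
        if self.o_ready:                     # not stalled: keep buffer fresh
            nxt["r_data"] = result
        if n_i_ready:                        # next stage accepts
            if self.o_ready:                 # processed input goes straight out
                nxt["o_valid"], nxt["o_data"] = i_valid, result
            else:                            # flush buffer, clear stall
                nxt["o_valid"], nxt["o_data"] = True, self.r_data
                nxt["o_ready"] = True
        elif not self.o_valid:               # output register is empty
            nxt["o_valid"], nxt["o_data"] = i_valid, result
            nxt["o_ready"] = True
        elif i_valid and self.o_ready:       # output blocked: accept, then stall
            nxt["o_ready"] = not self.o_valid
        self.__dict__.update(nxt)
        return self.o_valid, self.o_data, self.o_ready
```

Running it shows the buffered handshake: a stall on the output holds the result, the next input lands in the buffer, and the buffer is flushed first when the stall clears.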

class UnbufferedPipeline(ControlBase):
    """ A simple pipeline stage with single-clock synchronisation
        and two-way valid/ready synchronised signalling.

        Note that a stall in one stage will result in the entire pipeline
        chain stalling.

        Also that unlike BufferedPipeline, the valid/ready signalling does NOT
        travel synchronously with the data: the valid/ready signalling
        combines in a *combinatorial* fashion. Therefore, a long pipeline
        chain will lengthen propagation delays.

        Argument: stage. see Stage API, above

        stage-1   p.i_valid >>in   stage   n.o_valid out>>   stage+1
        stage-1   p.o_ready <<out  stage   n.i_ready <<in    stage+1
        stage-1   p.i_data  >>in   stage   n.o_data  out>>   stage+1
                              |             |
                            r_data        result
                              |             |
                              +--process ->-+

        Attributes:
        -----------
        p.i_data : StageInput, shaped according to ispec
            The pipeline input
        n.o_data : StageOutput, shaped according to ospec
            The pipeline output
        r_data : input_shape according to ispec
            A temporary (buffered) copy of a prior (valid) input.
            This is HELD if the output is not ready. It is updated
            SYNCHRONOUSLY.
        result : output_shape according to ospec
            The output of the combinatorial logic. it is updated
            COMBINATORIALLY (no clock dependence).
    """

    def __init__(self, stage):
        ControlBase.__init__(self)
        self.stage = stage

        # set up the input and output data
        self.p.i_data = stage.ispec() # input type
        self.n.o_data = stage.ospec() # output type

    def elaborate(self, platform):
        m = Module()

        data_valid = Signal() # is data valid or not
        r_data = self.stage.ispec() # input type
        if hasattr(self.stage, "setup"):
            self.stage.setup(m, r_data)

        p_i_valid = Signal(reset_less=True)
        m.d.comb += p_i_valid.eq(self.p.i_valid_logic())
        m.d.comb += self.n.o_valid.eq(data_valid)
        m.d.comb += self.p.o_ready.eq(~data_valid | self.n.i_ready)
        m.d.sync += data_valid.eq(p_i_valid | \
                                  (~self.n.i_ready & data_valid))
        with m.If(p_i_valid & self.p.o_ready):
            m.d.sync += eq(r_data, self.p.i_data)
        m.d.comb += eq(self.n.o_data, self.stage.process(r_data))
        return m

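The same kind of plain-Python cycle model (again, `UnbufferedPipeModel` is a name invented for this sketch) shows why there is no grace period here: the moment n.i_ready drops while the register is full, p.o_ready drops combinatorially on the very same cycle and the input is rejected:

```python
class UnbufferedPipeModel:
    # cycle-level sketch of UnbufferedPipeline.elaborate(): one data
    # register (r_data) plus one valid bit, nothing else.
    def __init__(self, process):
        self.process = process
        self.data_valid = False
        self.r_data = None

    def clock(self, i_valid, i_data, n_i_ready):
        # combinatorial: ready to accept iff register empty or being drained
        o_ready = (not self.data_valid) or n_i_ready
        if i_valid and o_ready:   # capture a new input
            self.r_data = i_data
        # register stays valid if the next stage did not take the data
        self.data_valid = i_valid or ((not n_i_ready) and self.data_valid)
        return self.data_valid, self.process(self.r_data), o_ready
```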

class PassThroughStage(StageCls):
    """ a pass-through stage which has its input data spec equal to its output,
        and "passes through" its data from input to output.
    """
    def __init__(self, iospecfn):
        self.iospecfn = iospecfn
    def ispec(self): return self.iospecfn()
    def ospec(self): return self.iospecfn()
    def process(self, i): return i


class RegisterPipeline(UnbufferedPipeline):
    """ A pipeline stage that delays by one clock cycle, creating a
        sync'd latch out of o_data and o_valid as an indirect byproduct
        of using PassThroughStage
    """
    def __init__(self, iospecfn):
        UnbufferedPipeline.__init__(self, PassThroughStage(iospecfn))
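Under the assumption that the input is always valid and the next stage always ready, the net effect of RegisterPipeline reduces to a plain one-clock delay register; `register_pipe_model` is a name invented for this sketch:

```python
def register_pipe_model(samples):
    # PassThroughStage makes process() the identity, so all that remains
    # of UnbufferedPipeline is the r_data capture register: each output
    # is the input that was presented one clock earlier.
    reg, out = None, []
    for s in samples:
        out.append(reg)  # value captured on the previous clock
        reg = s          # capture this cycle's input
    return out
```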
643