1 """ Pipeline and BufferedPipeline implementation, conforming to the same API.
2 For multi-input and multi-output variants, see multipipe.
7 a strategically very important function that is identical in function
8 to nmigen's Signal.eq function, except it may take objects, or a list
9 of objects, or a tuple of objects, and where objects may also be
15 stage requires compliance with a strict API that may be
16 implemented in several means, including as a static class.
17 the methods of a stage instance must be as follows:
19 * ispec() - Input data format specification
20 returns an object or a list or tuple of objects, or
21 a Record, each object having an "eq" function which
22 takes responsibility for copying by assignment all
24 * ospec() - Output data format specification
25 requirements as for ospec
26 * process(m, i) - Processes an ispec-formatted object
27 returns a combinatorial block of a result that
28 may be assigned to the output, by way of the "eq"
30 * setup(m, i) - Optional function for setting up submodules
31 may be used for more complex stages, to link
32 the input (i) to submodules. must take responsibility
33 for adding those submodules to the module (m).
34 the submodules must be combinatorial blocks and
35 must have their inputs and output linked combinatorially.
37 Both StageCls (for use with non-static classes) and Stage (for use
38 by static classes) are abstract classes from which, for convenience
39 and as a courtesy to other developers, anything conforming to the
40 Stage API may *choose* to derive.
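    To make the contract concrete, here is a minimal sketch of a conforming
    object.  Reg and AddOneStage are illustrative stand-ins (not part of
    this module): python only requires that the methods exist, and a toy
    value class stands in for an nmigen Signal:

    ```python
    class Reg:
        """ toy stand-in for an nmigen Signal: the pipeline classes only
            rely on specs exposing an "eq" method """
        def __init__(self, value=0):
            self.value = value
        def eq(self, other):
            # real nmigen returns an assignment statement; here we copy
            # immediately and return a list, mimicking the convention
            self.value = other.value
            return [self]

    class AddOneStage:
        """ satisfies the Stage API purely by having the right methods """
        def ispec(self): return Reg()     # input format
        def ospec(self): return Reg()     # output format
        def process(self, i):             # combinatorial transform
            return Reg(i.value + 1)

    stage = AddOneStage()
    i = stage.ispec()
    i.eq(Reg(41))
    o = stage.process(i)                  # o.value == 42
    ```
    
    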
    StageChain:
    ----------

    A useful combinatorial wrapper around stages that chains them together
    and then presents a Stage-API-conformant interface.  By presenting
    the same API as the stages it wraps, it can clearly be used recursively.

    RecordBasedStage:
    ----------------

    A convenience class that takes an input shape, output shape, a
    "processing" function and an optional "setup" function.  Honestly
    though, it's not much more effort to simply create a class
    that returns a couple of Records (see ExampleAddRecordStage in
    examples).

    PassThroughStage:
    ----------------

    A convenience class that takes a single function as a parameter,
    which is chain-called to create the exact same input and output spec.
    It has a process() function that simply returns its input.

    Instances of this class are completely redundant if handed to
    StageChain, however when passed to UnbufferedPipeline they
    can be used to introduce a single clock delay.

    ControlBase:
    -----------

    The base class for pipelines.  Contains previous and next ready/valid/data.
    Also has an extremely useful "connect" function that can be used to
    connect a chain of pipelines and present the exact same prev/next
    ready/valid/data API.

    UnbufferedPipeline:
    ------------------

    A simple stalling clock-synchronised pipeline that has no buffering
    (unlike BufferedPipeline).  Data flows on *every* clock cycle when
    the conditions are right (this is nominally when the input is valid
    and the output is ready).

    A stall anywhere along the line will result in a stall back-propagating
    down the entire chain.  The BufferedPipeline by contrast will buffer
    incoming data, allowing previous stages one clock cycle's grace before
    also having to stall.

    An advantage of the UnbufferedPipeline over the Buffered one is
    that the amount of logic needed (number of gates) is greatly
    reduced (no second set of buffers, basically).

    The disadvantage of the UnbufferedPipeline is that the valid/ready
    logic, if chained together, is *combinatorial*, resulting in
    progressively larger gate delay.

    RegisterPipeline:
    ----------------

    A convenience class: because UnbufferedPipeline introduces a single
    clock delay, when its stage is a PassThroughStage the result is a
    pipeline stage that simply delays its (unmodified) input by one
    clock cycle.

    BufferedPipeline:
    ----------------

    nmigen implementation of a buffered pipeline stage, based on zipcpu:
    https://zipcpu.com/blog/2017/08/14/strategies-for-pipelining.html

    this module requires quite a bit of thought to understand how it works
    (and why it is needed in the first place).  reading the above is
    *strongly* recommended.

    unlike john dawson's IEEE754 FPU STB/ACK signalling, which requires
    the STB / ACK signals to raise and lower (on separate clocks) before
    data may proceed (thus only allowing one piece of data to proceed
    on *ALTERNATE* cycles), the signalling here is a true pipeline
    where data will flow on *every* clock when the conditions are right.

    input acceptance conditions are when:
        * incoming previous-stage strobe (p.i_valid) is HIGH
        * outgoing previous-stage ready (p.o_ready) is LOW

    output transmission conditions are when:
        * outgoing next-stage strobe (n.o_valid) is HIGH
        * outgoing next-stage ready (n.i_ready) is LOW

    the tricky bit is when the input has valid data and the output is not
    ready to accept it.  if it wasn't for the clock synchronisation, it
    would be possible to tell the input "hey don't send that data, we're
    not ready".  unfortunately, it's not possible to "change the past":
    the previous stage *has no choice* but to pass on its data.

    therefore, the incoming data *must* be accepted - and stored: that
    is the responsibility / contract that this stage *must* accept.
    on the same clock, it's possible to tell the input that it must
    not send any more data.  this is the "stall" condition.

    we now effectively have *two* possible pieces of data to "choose" from:
    the buffered data, and the incoming data.  the decision as to which
    to process and output is based on whether we are in "stall" or not.
    i.e. when the next stage is no longer ready, the output comes from
    the buffer if a stall had previously occurred, otherwise it comes
    direct from processing the input.

    this allows us to respect a synchronous "travelling STB" with what
    dan calls a "buffered handshake".

    it's quite a complex state machine!
"""
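The buffered-handshake behaviour described above can be modelled cycle-by-cycle in plain Python.  This is a behavioural sketch only (the class name SkidBuffer and its structure are ours, not the RTL below): when the consumer stalls, one in-flight item is parked in a buffer register and o_ready drops, giving the producer exactly one clock's grace before it must hold its data.

```python
class SkidBuffer:
    """ behavioural sketch of the one-entry 'buffered handshake':
        an item arriving during a stall *must* be accepted, so it is
        parked in a buffer register and o_ready is dropped. """
    def __init__(self):
        self.out = None      # models n.o_data / n.o_valid
        self.buf = None      # models r_data (the skid register)
        self.o_ready = True  # models p.o_ready back to the producer

    def cycle(self, i_valid, i_data, n_i_ready):
        accept = i_valid and self.o_ready    # producer saw "ready" last cycle
        if n_i_ready:                        # consumer can take the output
            if self.buf is not None:
                self.out, self.buf = self.buf, None  # flush buffer first
                self.o_ready = True                  # clear the stall
            else:
                self.out = i_data if accept else None
        elif self.out is not None and accept:
            self.buf = i_data      # must accept: park it ("stall")
            self.o_ready = False   # tell producer: no more, please
        elif self.out is None:
            self.out = i_data if accept else None
        return self.out

sb = SkidBuffer()
r1 = sb.cycle(True, "A", True)    # "A" flows straight through
r2 = sb.cycle(True, "B", False)   # consumer stalls: "B" is parked
stalled = sb.o_ready              # False: back-pressure to producer
r3 = sb.cycle(True, "C", True)    # buffer flushes first ("C" not accepted)
r4 = sb.cycle(True, "C", True)    # producer re-offers "C": full speed again
```

Note how the buffered item ("B") always drains before new input, exactly as the "flush_buffer" description in BufferedPipeline below.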
from nmigen import Signal, Cat, Const, Mux, Module, Value
from nmigen.cli import verilog, rtlil
from nmigen.hdl.ast import ArrayProxy
from nmigen.hdl.rec import Record, Layout

from abc import ABCMeta, abstractmethod
from collections.abc import Sequence
163 """ contains signals that come *from* the previous stage (both in and out)
164 * i_valid: previous stage indicating all incoming data is valid.
165 may be a multi-bit signal, where all bits are required
166 to be asserted to indicate "valid".
167 * o_ready: output to next stage indicating readiness to accept data
168 * i_data : an input - added by the user of this class
171 def __init__(self
, i_width
=1, stage_ctl
=False):
172 self
.stage_ctl
= stage_ctl
173 self
.i_valid
= Signal(i_width
, name
="p_i_valid") # prev >>in self
174 self
._o
_ready
= Signal(name
="p_o_ready") # prev <<out self
175 self
.i_data
= None # XXX MUST BE ADDED BY USER
177 self
.s_o_ready
= Signal(name
="p_s_o_rdy") # prev <<out self
181 """ public-facing API: indicates (externally) that stage is ready
184 return self
.s_o_ready
# set dynamically by stage
185 return self
._o
_ready
# return this when not under dynamic control
    def _connect_in(self, prev):
        """ internal helper function to connect stage to an input source.
            do not use to connect stage-to-stage!
        """
        return [self.i_valid.eq(prev.i_valid),
                prev.o_ready.eq(self.o_ready),
                eq(self.i_data, prev.i_data),
               ]
    def i_valid_logic(self):
        vlen = len(self.i_valid)
        if vlen > 1: # multi-bit case: valid only when i_valid is all 1s
            all1s = Const(-1, (len(self.i_valid), False))
            if self.stage_ctl:
                # note: parenthesised, because "&" binds tighter than "=="
                return (self.i_valid == all1s) & self.s_o_ready
            return self.i_valid == all1s

        # single-bit i_valid case
        if self.stage_ctl:
            return self.i_valid & self.s_o_ready
        return self.i_valid
210 """ contains the signals that go *to* the next stage (both in and out)
211 * o_valid: output indicating to next stage that data is valid
212 * i_ready: input from next stage indicating that it can accept data
213 * o_data : an output - added by the user of this class
215 def __init__(self
, stage_ctl
=False):
216 self
.stage_ctl
= stage_ctl
217 self
._o
_valid
= Signal(name
="n_o_valid") # self out>> next
218 self
.i_ready
= Signal(name
="n_i_ready") # self <<in next
219 self
.o_data
= None # XXX MUST BE ADDED BY USER
221 self
.s_o_valid
= Signal(name
="n_s_o_vld") # self out>> next
225 """ public-facing API: indicates (externally) that data is valid
228 return self
.s_o_valid
    def connect_to_next(self, nxt):
        """ helper function to connect to the next stage data/valid/ready.
            data/valid is passed *TO* nxt, and ready comes *IN* from nxt.
            use this when connecting stage-to-stage
        """
        return [nxt.i_valid.eq(self.o_valid),
                self.i_ready.eq(nxt.o_ready),
                eq(nxt.i_data, self.o_data),
               ]

    def _connect_out(self, nxt):
        """ internal helper function to connect stage to an output source.
            do not use to connect stage-to-stage!
        """
        return [nxt.o_valid.eq(self.o_valid),
                self.i_ready.eq(nxt.i_ready),
                eq(nxt.o_data, self.o_data),
               ]
252 """ makes signals equal: a helper routine which identifies if it is being
253 passed a list (or tuple) of objects, or signals, or Records, and calls
254 the objects' eq function.
256 complex objects (classes) can be used: they must follow the
257 convention of having an eq member function, which takes the
258 responsibility of further calling eq and returning a list of
261 Record is a special (unusual, recursive) case, where the input may be
262 specified as a dictionary (which may contain further dictionaries,
263 recursively), where the field names of the dictionary must match
264 the Record's field spec. Alternatively, an object with the same
265 member names as the Record may be assigned: it does not have to
268 ArrayProxy is also special-cased, it's a bit messy: whilst ArrayProxy
269 has an eq function, the object being assigned to it (e.g. a python
270 object) might not. despite the *input* having an eq function,
271 that doesn't help us, because it's the *ArrayProxy* that's being
272 assigned to. so.... we cheat. use the ports() function of the
273 python object, enumerate them, find out the list of Signals that way,
277 if isinstance(o
, dict):
278 for (k
, v
) in o
.items():
279 print ("d-eq", v
, i
[k
])
280 res
.append(v
.eq(i
[k
]))
283 if not isinstance(o
, Sequence
):
285 for (ao
, ai
) in zip(o
, i
):
286 #print ("eq", ao, ai)
287 if isinstance(ao
, Record
):
288 for idx
, (field_name
, field_shape
, _
) in enumerate(ao
.layout
):
289 if isinstance(field_shape
, Layout
):
293 if hasattr(val
, field_name
): # check for attribute
294 val
= getattr(val
, field_name
)
296 val
= val
[field_name
] # dictionary-style specification
297 rres
= eq(ao
.fields
[field_name
], val
)
299 elif isinstance(ao
, ArrayProxy
) and not isinstance(ai
, Value
):
301 op
= getattr(ao
, p
.name
)
302 #print (op, p, p.name)
304 if not isinstance(rres
, Sequence
):
309 if not isinstance(rres
, Sequence
):
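The calling convention that eq() relies on can be illustrated without nmigen: any object whose eq method returns an assignment (or a list of them) can participate, and lists/tuples are zipped pairwise.  A stdlib-only mirror of the list-handling dispatch (Sig and mini_eq are hypothetical names, for illustration only):

```python
from collections.abc import Sequence

class Sig:
    """ stands in for a Signal: eq() returns one "assignment" """
    def __init__(self, v=0):
        self.v = v
    def eq(self, other):
        self.v = other.v
        return self  # a single assignment, not a list

def mini_eq(o, i):
    """ mirrors the list/tuple handling of the eq() helper above """
    if not isinstance(o, Sequence):
        o, i = [o], [i]   # promote single objects to one-element lists
    res = []
    for (ao, ai) in zip(o, i):
        rres = ao.eq(ai)
        if not isinstance(rres, Sequence):
            rres = [rres]  # normalise single assignments to a list
        res += rres
    return res

a, b = Sig(), Sig()
stmts = mini_eq([a, b], [Sig(3), Sig(4)])  # two collated "assignments"
```

The normalise-to-list step is the same trick the real eq() uses, so callers can always write `m.d.comb += eq(...)`.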
class StageCls(metaclass=ABCMeta):
    """ Class-based "Stage" API.  requires instantiation (after derivation)

        see "Stage API" above.  Note: python does *not* require derivation
        from this class.  All that is required is that the pipelines *have*
        the functions listed in this class.  Derivation from this class
        is therefore merely a "courtesy" to maintainers.
    """
    @abstractmethod
    def ispec(self): pass        # REQUIRED
    @abstractmethod
    def ospec(self): pass        # REQUIRED
    #@abstractmethod
    #def setup(self, m, i): pass # OPTIONAL
    @abstractmethod
    def process(self, i): pass   # REQUIRED
class Stage(metaclass=ABCMeta):
    """ Static "Stage" API.  does not require instantiation (after derivation)

        see "Stage API" above.  Note: python does *not* require derivation
        from this class.  All that is required is that the pipelines *have*
        the functions listed in this class.  Derivation from this class
        is therefore merely a "courtesy" to maintainers.
    """
    @staticmethod
    @abstractmethod
    def ispec(): pass            # REQUIRED

    @staticmethod
    @abstractmethod
    def ospec(): pass            # REQUIRED

    #@staticmethod
    #@abstractmethod
    #def setup(m, i): pass       # OPTIONAL

    @staticmethod
    @abstractmethod
    def process(i): pass         # REQUIRED
class RecordBasedStage(Stage):
    """ convenience class which provides a Records-based layout.
        honestly it's a lot easier just to create a direct Records-based
        class (see ExampleAddRecordStage)
    """
    def __init__(self, in_shape, out_shape, processfn, setupfn=None):
        self.in_shape = in_shape
        self.out_shape = out_shape
        self.__process = processfn
        self.__setup = setupfn
    def ispec(self): return Record(self.in_shape)
    def ospec(self): return Record(self.out_shape)
    def process(self, i): return self.__process(i)
    def setup(self, m, i): return self.__setup(m, i)
class StageChain(StageCls):
    """ pass in a list of stages, and they will automatically be
        chained together via their input and output specs into a
        combinatorial chain.

        the end result basically conforms to the exact same Stage API.

        * input to this class will be the input of the first stage
        * output of first stage goes into input of second
        * output of second goes into input of third (etc. etc.)
        * the output of this class will be the output of the last stage
    """
    def __init__(self, chain, specallocate=False):
        self.chain = chain
        self.specallocate = specallocate

    def ispec(self):
        return self.chain[0].ispec()

    def ospec(self):
        return self.chain[-1].ospec()

    def setup(self, m, i):
        for (idx, c) in enumerate(self.chain):
            if hasattr(c, "setup"):
                c.setup(m, i)           # stage may have some module stuff
            if self.specallocate:
                o = self.chain[idx].ospec()     # last assignment survives
                m.d.comb += eq(o, c.process(i)) # process input into "o"
            else:
                o = c.process(i)        # store input into "o"
            if idx != len(self.chain)-1:
                if self.specallocate:
                    ni = self.chain[idx+1].ispec() # new input on next loop
                    m.d.comb += eq(ni, o)          # assign to next input
                    i = ni
                else:
                    i = o
        self.o = o # last loop is the output

    def process(self, i):
        return self.o # conform to Stage API: return last-loop output
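The chaining rule above (output of stage N becomes input of stage N+1) reduces, in the specallocate=False path, to plain nested process() calls.  A stdlib-only sketch of that path (MiniChain and Plus are illustrative names; no Module is involved, so m may be None):

```python
class Plus:
    """ toy stage: adds a constant """
    def __init__(self, n): self.n = n
    def ispec(self): return 0
    def ospec(self): return 0
    def process(self, i): return i + self.n

class MiniChain:
    """ mirrors StageChain.setup()'s specallocate=False path:
        chained process() calls, no intermediate spec allocation """
    def __init__(self, chain): self.chain = chain
    def ispec(self): return self.chain[0].ispec()
    def ospec(self): return self.chain[-1].ospec()
    def setup(self, m, i):
        for c in self.chain:
            if hasattr(c, "setup"):
                c.setup(m, i)    # stage may have some module stuff
            i = c.process(i)     # this stage's output feeds the next
        self.o = i               # last loop is the output
    def process(self, i):
        return self.o            # Stage API: return last-loop output

mc = MiniChain([Plus(1), Plus(2), Plus(3)])
mc.setup(None, 10)
result = mc.process(10)   # 10 + 1 + 2 + 3 == 16
```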
419 """ Common functions for Pipeline API
421 def __init__(self
, in_multi
=None, stage_ctl
=False):
422 """ Base class containing ready/valid/data to previous and next stages
424 * p: contains ready/valid to the previous stage
425 * n: contains ready/valid to the next stage
427 Except when calling Controlbase.connect(), user must also:
428 * add i_data member to PrevControl (p) and
429 * add o_data member to NextControl (n)
431 # set up input and output IO ACK (prev/next ready/valid)
432 self
.p
= PrevControl(in_multi
, stage_ctl
)
433 self
.n
= NextControl(stage_ctl
)
435 def connect_to_next(self
, nxt
):
436 """ helper function to connect to the next stage data/valid/ready.
438 return self
.n
.connect_to_next(nxt
.p
)
440 def _connect_in(self
, prev
):
441 """ internal helper function to connect stage to an input source.
442 do not use to connect stage-to-stage!
444 return self
.p
._connect
_in
(prev
.p
)
446 def _connect_out(self
, nxt
):
447 """ internal helper function to connect stage to an output source.
448 do not use to connect stage-to-stage!
450 return self
.n
._connect
_out
(nxt
.n
)
    def connect(self, pipechain):
        """ connects a chain (list) of Pipeline instances together and
            links them to this ControlBase instance:

                      in <----> self <---> out
                               |    ^
                               v    |
                            [pipe1, pipe2, pipe3, pipe4]
                               |    ^  |    ^  |     ^
                               v    |  v    |  v     |
                             out---in out--in out---in

            Also takes care of allocating i_data/o_data, by looking up
            the data spec for each end of the pipechain.  i.e. it is NOT
            necessary to allocate self.p.i_data or self.n.o_data manually:
            this is handled AUTOMATICALLY, here.

            Basically this function is the direct equivalent of StageChain,
            except that unlike StageChain, the Pipeline logic is followed.

            Just as StageChain presents an object that conforms to the
            Stage API from a list of objects that also conform to the
            Stage API, an object that calls this Pipeline connect function
            has the exact same pipeline API as the list of pipeline objects
            it is connecting together.

            Thus it becomes possible to build up larger chains recursively.
            More complex chains (multi-input, multi-output) will have to be
            done manually.
        """
        eqs = [] # collated list of assignment statements

        # connect inter-chain
        for i in range(len(pipechain)-1):
            pipe1 = pipechain[i]
            pipe2 = pipechain[i+1]
            eqs += pipe1.connect_to_next(pipe2)

        # connect front of chain to ourselves
        front = pipechain[0]
        self.p.i_data = front.stage.ispec()
        eqs += front._connect_in(self)

        # connect end of chain to ourselves
        end = pipechain[-1]
        self.n.o_data = end.stage.ospec()
        eqs += end._connect_out(self)

        return eqs
    def set_input(self, i):
        """ helper function to set the input data
        """
        return eq(self.p.i_data, i)

    def ports(self):
        res = [self.p.i_valid, self.n.i_ready,
               self.n.o_valid, self.p.o_ready,
              ]
        if hasattr(self.p.i_data, "ports"):
            res += self.p.i_data.ports()
        else:
            res += self.p.i_data
        if hasattr(self.n.o_data, "ports"):
            res += self.n.o_data.ports()
        else:
            res += self.n.o_data
        return res
    def _elaborate(self, platform):
        """ handles case where stage has dynamic ready/valid functions
        """
        m = Module()
        if not self.n.stage_ctl:
            return m

        # when the pipeline (buffered or otherwise) says "ready",
        # test the *stage* "ready".
        with m.If(self.p._o_ready):
            m.d.comb += self.p.s_o_ready.eq(self.stage.p_o_ready)
        with m.Else():
            m.d.comb += self.p.s_o_ready.eq(0)

        # when the pipeline (buffered or otherwise) says "valid",
        # test the *stage* "valid".
        with m.If(self.n._o_valid):
            m.d.comb += self.n.s_o_valid.eq(self.stage.n_o_valid)
        with m.Else():
            m.d.comb += self.n.s_o_valid.eq(0)

        return m
class BufferedPipeline(ControlBase):
    """ buffered pipeline stage.  data and strobe signals travel in sync.
        if ever the input is ready and the output is not, processed data
        is shunted into a temporary register.

        Argument: stage.  see Stage API above

        stage-1   p.i_valid >>in   stage   n.o_valid out>>   stage+1
        stage-1   p.o_ready <<out  stage   n.i_ready <<in    stage+1
        stage-1   p.i_data  >>in   stage   n.o_data  out>>   stage+1
                              |             |
                            process --->----^
                              |             |
                              +-- r_data ->-+

        input data p.i_data is read (only), is processed and goes into an
        intermediate result store [process()].  this is updated combinatorially.

        in a non-stall condition, the intermediate result will go into the
        output (update_output).  however if ever there is a stall, it goes
        into r_data instead [update_buffer()].

        when the non-stall condition is released, r_data is the first
        to be transferred to the output [flush_buffer()], and the stall
        condition is cleared.

        on the next cycle (as long as stall is not raised again) the
        input may begin to be processed and transferred directly to output.
    """
    def __init__(self, stage, stage_ctl=False):
        ControlBase.__init__(self, stage_ctl=stage_ctl)
        self.stage = stage

        # set up the input and output data
        self.p.i_data = stage.ispec() # input type
        self.n.o_data = stage.ospec() # output type

    def elaborate(self, platform):
        self.m = ControlBase._elaborate(self, platform)

        result = self.stage.ospec()
        r_data = self.stage.ospec()
        if hasattr(self.stage, "setup"):
            self.stage.setup(self.m, self.p.i_data)

        # establish some combinatorial temporaries
        o_n_validn = Signal(reset_less=True)
        i_p_valid_o_p_ready = Signal(reset_less=True)
        p_i_valid = Signal(reset_less=True)
        self.m.d.comb += [p_i_valid.eq(self.p.i_valid_logic()),
                          o_n_validn.eq(~self.n.o_valid),
                          i_p_valid_o_p_ready.eq(p_i_valid & self.p.o_ready),
                         ]

        # store result of processing in combinatorial temporary
        self.m.d.comb += eq(result, self.stage.process(self.p.i_data))

        # if not in stall condition, update the temporary register
        with self.m.If(self.p.o_ready): # not stalled
            self.m.d.sync += eq(r_data, result) # update buffer

        with self.m.If(self.n.i_ready): # next stage is ready
            with self.m.If(self.p._o_ready): # not stalled
                # nothing in buffer: send (processed) input direct to output
                self.m.d.sync += [self.n._o_valid.eq(p_i_valid),
                                  eq(self.n.o_data, result), # update output
                                 ]
            with self.m.Else(): # p.o_ready is false, and something is in buffer
                # flush the [already processed] buffer to the output port.
                self.m.d.sync += [self.n._o_valid.eq(1), # reg empty
                                  eq(self.n.o_data, r_data), # flush buffer
                                  self.p._o_ready.eq(1), # clear stall
                                 ]
                # ignore input, since p.o_ready is also false.

        # (n.i_ready) is false here: next stage is *not* ready
        with self.m.Elif(o_n_validn): # next stage being told "ready"
            self.m.d.sync += [self.n._o_valid.eq(p_i_valid),
                              self.p._o_ready.eq(1), # keep the buffer empty
                              eq(self.n.o_data, result), # set output data
                             ]

        # (n.i_ready) false and (n.o_valid) true:
        with self.m.Elif(i_p_valid_o_p_ready):
            # if next stage *is* ready, and not stalled yet, accept input
            self.m.d.sync += self.p._o_ready.eq(~(p_i_valid & self.n.o_valid))

        return self.m
class UnbufferedPipeline(ControlBase):
    """ A simple pipeline stage with single-clock synchronisation
        and two-way valid/ready synchronised signalling.

        Note that a stall in one stage will result in the entire pipeline
        chain stalling.

        Also, unlike BufferedPipeline, the valid/ready signalling does NOT
        travel synchronously with the data: the valid/ready signalling
        combines in a *combinatorial* fashion.  Therefore, a long pipeline
        chain will lengthen propagation delays.

        Argument: stage.  see Stage API, above

        stage-1   p.i_valid >>in   stage   n.o_valid out>>   stage+1
        stage-1   p.o_ready <<out  stage   n.i_ready <<in    stage+1
        stage-1   p.i_data  >>in   stage   n.o_data  out>>   stage+1
                              |             |
                              +--process ->-+

        Attributes:
        -----------
        p.i_data : StageInput, shaped according to ispec
            The pipeline input
        n.o_data : StageOutput, shaped according to ospec
            The pipeline output
        r_data : input_shape according to ispec
            A temporary (buffered) copy of a prior (valid) input.
            This is HELD if the output is not ready.  It is updated
            SYNCHRONOUSLY.
        result : output_shape according to ospec
            The output of the combinatorial logic.  It is updated
            COMBINATORIALLY (no clock dependence).
    """
    def __init__(self, stage, stage_ctl=False):
        ControlBase.__init__(self, stage_ctl=stage_ctl)
        self.stage = stage

        # set up the input and output data
        self.p.i_data = stage.ispec() # input type
        self.n.o_data = stage.ospec() # output type

    def elaborate(self, platform):
        self.m = ControlBase._elaborate(self, platform)

        data_valid = Signal() # is data valid or not
        r_data = self.stage.ispec() # input type
        if hasattr(self.stage, "setup"):
            self.stage.setup(self.m, r_data)

        # establish some combinatorial temporaries
        p_i_valid = Signal(reset_less=True)
        pv = Signal(reset_less=True)
        self.m.d.comb += p_i_valid.eq(self.p.i_valid_logic())
        self.m.d.comb += pv.eq(self.p.i_valid & self.p.o_ready)

        self.m.d.comb += self.n._o_valid.eq(data_valid)
        self.m.d.comb += self.p._o_ready.eq(~data_valid | self.n.i_ready)
        self.m.d.sync += data_valid.eq(p_i_valid | \
                                       (~self.n.i_ready & data_valid))
        with self.m.If(pv):
            self.m.d.sync += eq(r_data, self.p.i_data)
        self.m.d.comb += eq(self.n.o_data, self.stage.process(r_data))
        return self.m
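The heart of the unbuffered synchronisation is the data_valid recurrence above: data_valid' = p_i_valid | (~n.i_ready & data_valid), with o_ready = ~data_valid | n.i_ready computed combinatorially.  That pair can be checked cycle-by-cycle in plain Python (step is an illustrative name; booleans stand in for the 1-bit signals):

```python
def step(data_valid, p_i_valid, n_i_ready):
    """ one clock of the UnbufferedPipeline control logic:
        o_ready is combinatorial, data_valid is the registered update """
    o_ready = (not data_valid) or n_i_ready  # ~data_valid | n.i_ready
    data_valid_next = p_i_valid or ((not n_i_ready) and data_valid)
    return data_valid_next, o_ready

dv = False                        # pipeline starts empty
dv, rdy = step(dv, True, True)    # accept an item (was empty, so rdy True)
dv, rdy = step(dv, False, False)  # consumer stalls: item held, rdy drops
dv, rdy = step(dv, False, True)   # consumer drains: data_valid clears
```

Note how o_ready going False is purely combinatorial on data_valid and n.i_ready, which is exactly why chained stages accumulate gate delay.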
class PassThroughStage(StageCls):
    """ a pass-through stage which has its input data spec equal to its output,
        and "passes through" its data from input to output.
    """
    def __init__(self, iospecfn):
        self.iospecfn = iospecfn
    def ispec(self): return self.iospecfn()
    def ospec(self): return self.iospecfn()
    def process(self, i): return i


class RegisterPipeline(UnbufferedPipeline):
    """ A pipeline stage that delays by one clock cycle, creating a
        sync'd latch out of o_data and o_valid as an indirect byproduct
        of using PassThroughStage
    """
    def __init__(self, iospecfn):
        UnbufferedPipeline.__init__(self, PassThroughStage(iospecfn))