1 """ Pipeline and BufferedHandshake implementation, conforming to the same API.
2 For multi-input and multi-output variants, see multipipe.
4 Associated development bugs:
5 * http://bugs.libre-riscv.org/show_bug.cgi?id=64
6 * http://bugs.libre-riscv.org/show_bug.cgi?id=57
11 a strategically very important function that is identical in function
12 to nmigen's Signal.eq function, except it may take objects, or a list
13 of objects, or a tuple of objects, and where objects may also be
19 stage requires compliance with a strict API that may be
20 implemented in several means, including as a static class.
21 the methods of a stage instance must be as follows:
23 * ispec() - Input data format specification
24 returns an object or a list or tuple of objects, or
25 a Record, each object having an "eq" function which
26 takes responsibility for copying by assignment all
28 * ospec() - Output data format specification
29 requirements as for ospec
30 * process(m, i) - Processes an ispec-formatted object
31 returns a combinatorial block of a result that
32 may be assigned to the output, by way of the "eq"
34 * setup(m, i) - Optional function for setting up submodules
35 may be used for more complex stages, to link
36 the input (i) to submodules. must take responsibility
37 for adding those submodules to the module (m).
38 the submodules must be combinatorial blocks and
39 must have their inputs and output linked combinatorially.
41 Both StageCls (for use with non-static classes) and Stage (for use
42 by static classes) are abstract classes from which, for convenience
43 and as a courtesy to other developers, anything conforming to the
44 Stage API may *choose* to derive.
49 A useful combinatorial wrapper around stages that chains them together
50 and then presents a Stage-API-conformant interface. By presenting
51 the same API as the stages it wraps, it can clearly be used recursively.
56 A convenience class that takes an input shape, output shape, a
57 "processing" function and an optional "setup" function. Honestly
58 though, there's not much more effort to just... create a class
59 that returns a couple of Records (see ExampleAddRecordStage in
65 A convenience class that takes a single function as a parameter,
66 that is chain-called to create the exact same input and output spec.
67 It has a process() function that simply returns its input.
69 Instances of this class are completely redundant if handed to
70 StageChain, however when passed to UnbufferedPipeline they
71 can be used to introduce a single clock delay.
76 The base class for pipelines. Contains previous and next ready/valid/data.
77 Also has an extremely useful "connect" function that can be used to
78 connect a chain of pipelines and present the exact same prev/next
84 A simple stalling clock-synchronised pipeline that has no buffering
85 (unlike BufferedHandshake). Data flows on *every* clock cycle when
86 the conditions are right (this is nominally when the input is valid
87 and the output is ready).
89 A stall anywhere along the line will result in a stall back-propagating
90 down the entire chain. The BufferedHandshake by contrast will buffer
91 incoming data, allowing previous stages one clock cycle's grace before
94 An advantage of the UnbufferedPipeline over the Buffered one is
95 that the amount of logic needed (number of gates) is greatly
96 reduced (no second set of buffers basically)
98 The disadvantage of the UnbufferedPipeline is that the valid/ready
99 logic, if chained together, is *combinatorial*, resulting in
100 progressively larger gate delay.
102 PassThroughHandshake:
105 A Control class that introduces a single clock delay, passing its
106 data through unaltered. Unlike RegisterPipeline (which relies
107 on UnbufferedPipeline and PassThroughStage) it handles ready/valid
113 A convenience class that, because UnbufferedPipeline introduces a single
114 clock delay, when its stage is a PassThroughStage, it results in a Pipeline
115 stage that, duh, delays its (unmodified) input by one clock cycle.
120 nmigen implementation of buffered pipeline stage, based on zipcpu:
121 https://zipcpu.com/blog/2017/08/14/strategies-for-pipelining.html
123 this module requires quite a bit of thought to understand how it works
124 (and why it is needed in the first place). reading the above is
125 *strongly* recommended.
127 unlike john dawson's IEEE754 FPU STB/ACK signalling, which requires
128 the STB / ACK signals to raise and lower (on separate clocks) before
129 data may proceeed (thus only allowing one piece of data to proceed
130 on *ALTERNATE* cycles), the signalling here is a true pipeline
131 where data will flow on *every* clock when the conditions are right.
133 input acceptance conditions are when:
134 * incoming previous-stage strobe (p.i_valid) is HIGH
135 * outgoing previous-stage ready (p.o_ready) is LOW
137 output transmission conditions are when:
138 * outgoing next-stage strobe (n.o_valid) is HIGH
139 * outgoing next-stage ready (n.i_ready) is LOW
141 the tricky bit is when the input has valid data and the output is not
142 ready to accept it. if it wasn't for the clock synchronisation, it
143 would be possible to tell the input "hey don't send that data, we're
144 not ready". unfortunately, it's not possible to "change the past":
145 the previous stage *has no choice* but to pass on its data.
147 therefore, the incoming data *must* be accepted - and stored: that
148 is the responsibility / contract that this stage *must* accept.
149 on the same clock, it's possible to tell the input that it must
150 not send any more data. this is the "stall" condition.
152 we now effectively have *two* possible pieces of data to "choose" from:
153 the buffered data, and the incoming data. the decision as to which
154 to process and output is based on whether we are in "stall" or not.
155 i.e. when the next stage is no longer ready, the output comes from
156 the buffer if a stall had previously occurred, otherwise it comes
157 direct from processing the input.
159 this allows us to respect a synchronous "travelling STB" with what
160 dan calls a "buffered handshake".
162 it's quite a complex state machine!
167 Synchronised pipeline, Based on:
168 https://github.com/ZipCPU/dbgbus/blob/master/hexbus/rtl/hbdeword.v
171 from nmigen
import Signal
, Cat
, Const
, Mux
, Module
, Value
172 from nmigen
.cli
import verilog
, rtlil
173 from nmigen
.lib
.fifo
import SyncFIFO
, SyncFIFOBuffered
174 from nmigen
.hdl
.ast
import ArrayProxy
175 from nmigen
.hdl
.rec
import Record
, Layout
177 from abc
import ABCMeta
, abstractmethod
178 from collections
.abc
import Sequence
179 from queue
import Queue
182 class RecordObject(Record
):
183 def __init__(self
, layout
=None, name
=None):
184 Record
.__init
__(self
, layout
=layout
or [], name
=None)
186 def __setattr__(self
, k
, v
):
187 if k
in dir(Record
) or "fields" not in self
.__dict
__:
188 return object.__setattr
__(self
, k
, v
)
190 if isinstance(v
, Record
):
191 newlayout
= {k
: (k
, v
.layout
)}
193 newlayout
= {k
: (k
, v
.shape())}
194 self
.layout
.fields
.update(newlayout
)
197 for x
in self
.fields
.values():
202 """ contains signals that come *from* the previous stage (both in and out)
203 * i_valid: previous stage indicating all incoming data is valid.
204 may be a multi-bit signal, where all bits are required
205 to be asserted to indicate "valid".
206 * o_ready: output to next stage indicating readiness to accept data
207 * i_data : an input - added by the user of this class
210 def __init__(self
, i_width
=1, stage_ctl
=False):
211 self
.stage_ctl
= stage_ctl
212 self
.i_valid
= Signal(i_width
, name
="p_i_valid") # prev >>in self
213 self
._o
_ready
= Signal(name
="p_o_ready") # prev <<out self
214 self
.i_data
= None # XXX MUST BE ADDED BY USER
216 self
.s_o_ready
= Signal(name
="p_s_o_rdy") # prev <<out self
220 """ public-facing API: indicates (externally) that stage is ready
223 return self
.s_o_ready
# set dynamically by stage
224 return self
._o
_ready
# return this when not under dynamic control
226 def _connect_in(self
, prev
, direct
=False, fn
=None):
227 """ internal helper function to connect stage to an input source.
228 do not use to connect stage-to-stage!
230 i_valid
= prev
.i_valid
if direct
else prev
.i_valid_test
231 i_data
= fn(prev
.i_data
) if fn
is not None else prev
.i_data
232 return [self
.i_valid
.eq(i_valid
),
233 prev
.o_ready
.eq(self
.o_ready
),
234 eq(self
.i_data
, i_data
),
238 def i_valid_test(self
):
239 vlen
= len(self
.i_valid
)
241 # multi-bit case: valid only when i_valid is all 1s
242 all1s
= Const(-1, (len(self
.i_valid
), False))
243 i_valid
= (self
.i_valid
== all1s
)
245 # single-bit i_valid case
246 i_valid
= self
.i_valid
248 # when stage indicates not ready, incoming data
249 # must "appear" to be not ready too
251 i_valid
= i_valid
& self
.s_o_ready
257 """ contains the signals that go *to* the next stage (both in and out)
258 * o_valid: output indicating to next stage that data is valid
259 * i_ready: input from next stage indicating that it can accept data
260 * o_data : an output - added by the user of this class
262 def __init__(self
, stage_ctl
=False):
263 self
.stage_ctl
= stage_ctl
264 self
.o_valid
= Signal(name
="n_o_valid") # self out>> next
265 self
.i_ready
= Signal(name
="n_i_ready") # self <<in next
266 self
.o_data
= None # XXX MUST BE ADDED BY USER
268 self
.d_valid
= Signal(reset
=1) # INTERNAL (data valid)
271 def i_ready_test(self
):
273 return self
.i_ready
& self
.d_valid
276 def connect_to_next(self
, nxt
):
277 """ helper function to connect to the next stage data/valid/ready.
278 data/valid is passed *TO* nxt, and ready comes *IN* from nxt.
279 use this when connecting stage-to-stage
281 return [nxt
.i_valid
.eq(self
.o_valid
),
282 self
.i_ready
.eq(nxt
.o_ready
),
283 eq(nxt
.i_data
, self
.o_data
),
286 def _connect_out(self
, nxt
, direct
=False, fn
=None):
287 """ internal helper function to connect stage to an output source.
288 do not use to connect stage-to-stage!
290 i_ready
= nxt
.i_ready
if direct
else nxt
.i_ready_test
291 o_data
= fn(nxt
.o_data
) if fn
is not None else nxt
.o_data
292 return [nxt
.o_valid
.eq(self
.o_valid
),
293 self
.i_ready
.eq(i_ready
),
294 eq(o_data
, self
.o_data
),
299 """ a helper class for iterating twin-argument compound data structures.
301 Record is a special (unusual, recursive) case, where the input may be
302 specified as a dictionary (which may contain further dictionaries,
303 recursively), where the field names of the dictionary must match
304 the Record's field spec. Alternatively, an object with the same
305 member names as the Record may be assigned: it does not have to
308 ArrayProxy is also special-cased, it's a bit messy: whilst ArrayProxy
309 has an eq function, the object being assigned to it (e.g. a python
310 object) might not. despite the *input* having an eq function,
311 that doesn't help us, because it's the *ArrayProxy* that's being
312 assigned to. so.... we cheat. use the ports() function of the
313 python object, enumerate them, find out the list of Signals that way,
316 def iterator2(self
, o
, i
):
317 if isinstance(o
, dict):
318 yield from self
.dict_iter2(o
, i
)
320 if not isinstance(o
, Sequence
):
322 for (ao
, ai
) in zip(o
, i
):
323 #print ("visit", fn, ao, ai)
324 if isinstance(ao
, Record
):
325 yield from self
.record_iter2(ao
, ai
)
326 elif isinstance(ao
, ArrayProxy
) and not isinstance(ai
, Value
):
327 yield from self
.arrayproxy_iter2(ao
, ai
)
331 def dict_iter2(self
, o
, i
):
332 for (k
, v
) in o
.items():
333 print ("d-iter", v
, i
[k
])
337 def _not_quite_working_with_all_unit_tests_record_iter2(self
, ao
, ai
):
338 print ("record_iter2", ao
, ai
, type(ao
), type(ai
))
339 if isinstance(ai
, Value
):
340 if isinstance(ao
, Sequence
):
342 for o
, i
in zip(ao
, ai
):
345 for idx
, (field_name
, field_shape
, _
) in enumerate(ao
.layout
):
346 if isinstance(field_shape
, Layout
):
350 if hasattr(val
, field_name
): # check for attribute
351 val
= getattr(val
, field_name
)
353 val
= val
[field_name
] # dictionary-style specification
354 yield from self
.iterator2(ao
.fields
[field_name
], val
)
356 def record_iter2(self
, ao
, ai
):
357 for idx
, (field_name
, field_shape
, _
) in enumerate(ao
.layout
):
358 if isinstance(field_shape
, Layout
):
362 if hasattr(val
, field_name
): # check for attribute
363 val
= getattr(val
, field_name
)
365 val
= val
[field_name
] # dictionary-style specification
366 yield from self
.iterator2(ao
.fields
[field_name
], val
)
368 def arrayproxy_iter2(self
, ao
, ai
):
370 op
= getattr(ao
, p
.name
)
371 print ("arrayproxy - p", p
, p
.name
)
372 yield from self
.iterator2(op
, p
)
376 """ a helper class for iterating single-argument compound data structures.
379 def iterate(self
, i
):
380 """ iterate a compound structure recursively using yield
382 if not isinstance(i
, Sequence
):
385 print ("iterate", ai
)
386 if isinstance(ai
, Record
):
387 print ("record", list(ai
.layout
))
388 yield from self
.record_iter(ai
)
389 elif isinstance(ai
, ArrayProxy
) and not isinstance(ai
, Value
):
390 yield from self
.array_iter(ai
)
394 def record_iter(self
, ai
):
395 for idx
, (field_name
, field_shape
, _
) in enumerate(ai
.layout
):
396 if isinstance(field_shape
, Layout
):
400 if hasattr(val
, field_name
): # check for attribute
401 val
= getattr(val
, field_name
)
403 val
= val
[field_name
] # dictionary-style specification
404 print ("recidx", idx
, field_name
, field_shape
, val
)
405 yield from self
.iterate(val
)
407 def array_iter(self
, ai
):
409 yield from self
.iterate(p
)
413 """ makes signals equal: a helper routine which identifies if it is being
414 passed a list (or tuple) of objects, or signals, or Records, and calls
415 the objects' eq function.
418 for (ao
, ai
) in Visitor2().iterator2(o
, i
):
420 if not isinstance(rres
, Sequence
):
427 """ flattens a compound structure recursively using Cat
429 from nmigen
.tools
import flatten
430 #res = list(flatten(i)) # works (as of nmigen commit f22106e5) HOWEVER...
431 res
= list(Visitor().iterate(i
)) # needed because input may be a sequence
435 class StageCls(metaclass
=ABCMeta
):
436 """ Class-based "Stage" API. requires instantiation (after derivation)
438 see "Stage API" above.. Note: python does *not* require derivation
439 from this class. All that is required is that the pipelines *have*
440 the functions listed in this class. Derivation from this class
441 is therefore merely a "courtesy" to maintainers.
444 def ispec(self
): pass # REQUIRED
446 def ospec(self
): pass # REQUIRED
448 #def setup(self, m, i): pass # OPTIONAL
450 def process(self
, i
): pass # REQUIRED
453 class Stage(metaclass
=ABCMeta
):
454 """ Static "Stage" API. does not require instantiation (after derivation)
456 see "Stage API" above. Note: python does *not* require derivation
457 from this class. All that is required is that the pipelines *have*
458 the functions listed in this class. Derivation from this class
459 is therefore merely a "courtesy" to maintainers.
471 #def setup(m, i): pass
478 class RecordBasedStage(Stage
):
479 """ convenience class which provides a Records-based layout.
480 honestly it's a lot easier just to create a direct Records-based
481 class (see ExampleAddRecordStage)
483 def __init__(self
, in_shape
, out_shape
, processfn
, setupfn
=None):
484 self
.in_shape
= in_shape
485 self
.out_shape
= out_shape
486 self
.__process
= processfn
487 self
.__setup
= setupfn
488 def ispec(self
): return Record(self
.in_shape
)
489 def ospec(self
): return Record(self
.out_shape
)
490 def process(seif
, i
): return self
.__process
(i
)
491 def setup(seif
, m
, i
): return self
.__setup
(m
, i
)
494 class StageChain(StageCls
):
495 """ pass in a list of stages, and they will automatically be
496 chained together via their input and output specs into a
499 the end result basically conforms to the exact same Stage API.
501 * input to this class will be the input of the first stage
502 * output of first stage goes into input of second
503 * output of second goes into input into third (etc. etc.)
504 * the output of this class will be the output of the last stage
506 def __init__(self
, chain
, specallocate
=False):
508 self
.specallocate
= specallocate
511 return self
.chain
[0].ispec()
514 return self
.chain
[-1].ospec()
516 def _specallocate_setup(self
, m
, i
):
517 for (idx
, c
) in enumerate(self
.chain
):
518 if hasattr(c
, "setup"):
519 c
.setup(m
, i
) # stage may have some module stuff
520 o
= self
.chain
[idx
].ospec() # last assignment survives
521 m
.d
.comb
+= eq(o
, c
.process(i
)) # process input into "o"
522 if idx
== len(self
.chain
)-1:
524 i
= self
.chain
[idx
+1].ispec() # new input on next loop
525 m
.d
.comb
+= eq(i
, o
) # assign to next input
526 return o
# last loop is the output
528 def _noallocate_setup(self
, m
, i
):
529 for (idx
, c
) in enumerate(self
.chain
):
530 if hasattr(c
, "setup"):
531 c
.setup(m
, i
) # stage may have some module stuff
532 i
= o
= c
.process(i
) # store input into "o"
533 return o
# last loop is the output
535 def setup(self
, m
, i
):
536 if self
.specallocate
:
537 self
.o
= self
._specallocate
_setup
(m
, i
)
539 self
.o
= self
._noallocate
_setup
(m
, i
)
541 def process(self
, i
):
542 return self
.o
# conform to Stage API: return last-loop output
546 """ Common functions for Pipeline API
548 def __init__(self
, stage
=None, in_multi
=None, stage_ctl
=False):
549 """ Base class containing ready/valid/data to previous and next stages
551 * p: contains ready/valid to the previous stage
552 * n: contains ready/valid to the next stage
554 Except when calling Controlbase.connect(), user must also:
555 * add i_data member to PrevControl (p) and
556 * add o_data member to NextControl (n)
560 # set up input and output IO ACK (prev/next ready/valid)
561 self
.p
= PrevControl(in_multi
, stage_ctl
)
562 self
.n
= NextControl(stage_ctl
)
564 # set up the input and output data
565 if stage
is not None:
566 self
.p
.i_data
= stage
.ispec() # input type
567 self
.n
.o_data
= stage
.ospec()
569 def connect_to_next(self
, nxt
):
570 """ helper function to connect to the next stage data/valid/ready.
572 return self
.n
.connect_to_next(nxt
.p
)
574 def _connect_in(self
, prev
):
575 """ internal helper function to connect stage to an input source.
576 do not use to connect stage-to-stage!
578 return self
.p
._connect
_in
(prev
.p
)
580 def _connect_out(self
, nxt
):
581 """ internal helper function to connect stage to an output source.
582 do not use to connect stage-to-stage!
584 return self
.n
._connect
_out
(nxt
.n
)
586 def connect(self
, pipechain
):
587 """ connects a chain (list) of Pipeline instances together and
588 links them to this ControlBase instance:
590 in <----> self <---> out
593 [pipe1, pipe2, pipe3, pipe4]
596 out---in out--in out---in
598 Also takes care of allocating i_data/o_data, by looking up
599 the data spec for each end of the pipechain. i.e It is NOT
600 necessary to allocate self.p.i_data or self.n.o_data manually:
601 this is handled AUTOMATICALLY, here.
603 Basically this function is the direct equivalent of StageChain,
604 except that unlike StageChain, the Pipeline logic is followed.
606 Just as StageChain presents an object that conforms to the
607 Stage API from a list of objects that also conform to the
608 Stage API, an object that calls this Pipeline connect function
609 has the exact same pipeline API as the list of pipline objects
612 Thus it becomes possible to build up larger chains recursively.
613 More complex chains (multi-input, multi-output) will have to be
616 eqs
= [] # collated list of assignment statements
618 # connect inter-chain
619 for i
in range(len(pipechain
)-1):
621 pipe2
= pipechain
[i
+1]
622 eqs
+= pipe1
.connect_to_next(pipe2
)
624 # connect front of chain to ourselves
626 self
.p
.i_data
= front
.stage
.ispec()
627 eqs
+= front
._connect
_in
(self
)
629 # connect end of chain to ourselves
631 self
.n
.o_data
= end
.stage
.ospec()
632 eqs
+= end
._connect
_out
(self
)
636 def _postprocess(self
, i
): # XXX DISABLED
637 return i
# RETURNS INPUT
638 if hasattr(self
.stage
, "postprocess"):
639 return self
.stage
.postprocess(i
)
642 def set_input(self
, i
):
643 """ helper function to set the input data
645 return eq(self
.p
.i_data
, i
)
648 res
= [self
.p
.i_valid
, self
.n
.i_ready
,
649 self
.n
.o_valid
, self
.p
.o_ready
,
651 if hasattr(self
.p
.i_data
, "ports"):
652 res
+= self
.p
.i_data
.ports()
655 if hasattr(self
.n
.o_data
, "ports"):
656 res
+= self
.n
.o_data
.ports()
661 def _elaborate(self
, platform
):
662 """ handles case where stage has dynamic ready/valid functions
666 if self
.stage
is not None and hasattr(self
.stage
, "setup"):
667 self
.stage
.setup(m
, self
.p
.i_data
)
669 if not self
.p
.stage_ctl
:
672 # intercept the previous (outgoing) "ready", combine with stage ready
673 m
.d
.comb
+= self
.p
.s_o_ready
.eq(self
.p
._o
_ready
& self
.stage
.d_ready
)
675 # intercept the next (incoming) "ready" and combine it with data valid
676 sdv
= self
.stage
.d_valid(self
.n
.i_ready
)
677 m
.d
.comb
+= self
.n
.d_valid
.eq(self
.n
.i_ready
& sdv
)
682 class BufferedHandshake(ControlBase
):
683 """ buffered pipeline stage. data and strobe signals travel in sync.
684 if ever the input is ready and the output is not, processed data
685 is shunted in a temporary register.
687 Argument: stage. see Stage API above
689 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
690 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
691 stage-1 p.i_data >>in stage n.o_data out>> stage+1
697 input data p.i_data is read (only), is processed and goes into an
698 intermediate result store [process()]. this is updated combinatorially.
700 in a non-stall condition, the intermediate result will go into the
701 output (update_output). however if ever there is a stall, it goes
702 into r_data instead [update_buffer()].
704 when the non-stall condition is released, r_data is the first
705 to be transferred to the output [flush_buffer()], and the stall
708 on the next cycle (as long as stall is not raised again) the
709 input may begin to be processed and transferred directly to output.
712 def elaborate(self
, platform
):
713 self
.m
= ControlBase
._elaborate
(self
, platform
)
715 result
= self
.stage
.ospec()
716 r_data
= self
.stage
.ospec()
718 # establish some combinatorial temporaries
719 o_n_validn
= Signal(reset_less
=True)
720 n_i_ready
= Signal(reset_less
=True, name
="n_i_rdy_data")
721 nir_por
= Signal(reset_less
=True)
722 nir_por_n
= Signal(reset_less
=True)
723 p_i_valid
= Signal(reset_less
=True)
724 nir_novn
= Signal(reset_less
=True)
725 nirn_novn
= Signal(reset_less
=True)
726 por_pivn
= Signal(reset_less
=True)
727 npnn
= Signal(reset_less
=True)
728 self
.m
.d
.comb
+= [p_i_valid
.eq(self
.p
.i_valid_test
),
729 o_n_validn
.eq(~self
.n
.o_valid
),
730 n_i_ready
.eq(self
.n
.i_ready_test
),
731 nir_por
.eq(n_i_ready
& self
.p
._o
_ready
),
732 nir_por_n
.eq(n_i_ready
& ~self
.p
._o
_ready
),
733 nir_novn
.eq(n_i_ready | o_n_validn
),
734 nirn_novn
.eq(~n_i_ready
& o_n_validn
),
735 npnn
.eq(nir_por | nirn_novn
),
736 por_pivn
.eq(self
.p
._o
_ready
& ~p_i_valid
)
739 # store result of processing in combinatorial temporary
740 self
.m
.d
.comb
+= eq(result
, self
.stage
.process(self
.p
.i_data
))
742 # if not in stall condition, update the temporary register
743 with self
.m
.If(self
.p
.o_ready
): # not stalled
744 self
.m
.d
.sync
+= eq(r_data
, result
) # update buffer
746 # data pass-through conditions
747 with self
.m
.If(npnn
):
748 o_data
= self
._postprocess
(result
)
749 self
.m
.d
.sync
+= [self
.n
.o_valid
.eq(p_i_valid
), # valid if p_valid
750 eq(self
.n
.o_data
, o_data
), # update output
752 # buffer flush conditions (NOTE: can override data passthru conditions)
753 with self
.m
.If(nir_por_n
): # not stalled
754 # Flush the [already processed] buffer to the output port.
755 o_data
= self
._postprocess
(r_data
)
756 self
.m
.d
.sync
+= [self
.n
.o_valid
.eq(1), # reg empty
757 eq(self
.n
.o_data
, o_data
), # flush buffer
759 # output ready conditions
760 self
.m
.d
.sync
+= self
.p
._o
_ready
.eq(nir_novn | por_pivn
)
765 class SimpleHandshake(ControlBase
):
766 """ simple handshake control. data and strobe signals travel in sync.
767 implements the protocol used by Wishbone and AXI4.
769 Argument: stage. see Stage API above
771 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
772 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
773 stage-1 p.i_data >>in stage n.o_data out>> stage+1
778 Inputs Temporary Output Data
779 ------- ---------- ----- ----
780 P P N N PiV& ~NiR& N P
787 0 0 1 0 0 0 0 1 process(i_data)
788 0 0 1 1 0 0 0 1 process(i_data)
792 0 1 1 0 0 0 0 1 process(i_data)
793 0 1 1 1 0 0 0 1 process(i_data)
797 1 0 1 0 0 0 0 1 process(i_data)
798 1 0 1 1 0 0 0 1 process(i_data)
800 1 1 0 0 1 0 1 0 process(i_data)
801 1 1 0 1 1 1 1 0 process(i_data)
802 1 1 1 0 1 0 1 1 process(i_data)
803 1 1 1 1 1 0 1 1 process(i_data)
807 def elaborate(self
, platform
):
808 self
.m
= m
= ControlBase
._elaborate
(self
, platform
)
811 result
= self
.stage
.ospec()
813 # establish some combinatorial temporaries
814 n_i_ready
= Signal(reset_less
=True, name
="n_i_rdy_data")
815 p_i_valid_p_o_ready
= Signal(reset_less
=True)
816 p_i_valid
= Signal(reset_less
=True)
817 m
.d
.comb
+= [p_i_valid
.eq(self
.p
.i_valid_test
),
818 n_i_ready
.eq(self
.n
.i_ready_test
),
819 p_i_valid_p_o_ready
.eq(p_i_valid
& self
.p
.o_ready
),
822 # store result of processing in combinatorial temporary
823 m
.d
.comb
+= eq(result
, self
.stage
.process(self
.p
.i_data
))
825 # previous valid and ready
826 with m
.If(p_i_valid_p_o_ready
):
827 o_data
= self
._postprocess
(result
)
828 m
.d
.sync
+= [r_busy
.eq(1), # output valid
829 eq(self
.n
.o_data
, o_data
), # update output
831 # previous invalid or not ready, however next is accepting
832 with m
.Elif(n_i_ready
):
833 o_data
= self
._postprocess
(result
)
834 m
.d
.sync
+= [eq(self
.n
.o_data
, o_data
)]
835 # TODO: could still send data here (if there was any)
836 #m.d.sync += self.n.o_valid.eq(0) # ...so set output invalid
837 m
.d
.sync
+= r_busy
.eq(0) # ...so set output invalid
839 m
.d
.comb
+= self
.n
.o_valid
.eq(r_busy
)
840 # if next is ready, so is previous
841 m
.d
.comb
+= self
.p
._o
_ready
.eq(n_i_ready
)
846 class UnbufferedPipeline(ControlBase
):
847 """ A simple pipeline stage with single-clock synchronisation
848 and two-way valid/ready synchronised signalling.
850 Note that a stall in one stage will result in the entire pipeline
853 Also that unlike BufferedHandshake, the valid/ready signalling does NOT
854 travel synchronously with the data: the valid/ready signalling
855 combines in a *combinatorial* fashion. Therefore, a long pipeline
856 chain will lengthen propagation delays.
858 Argument: stage. see Stage API, above
860 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
861 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
862 stage-1 p.i_data >>in stage n.o_data out>> stage+1
870 p.i_data : StageInput, shaped according to ispec
872 p.o_data : StageOutput, shaped according to ospec
874 r_data : input_shape according to ispec
875 A temporary (buffered) copy of a prior (valid) input.
876 This is HELD if the output is not ready. It is updated
878 result: output_shape according to ospec
879 The output of the combinatorial logic. it is updated
880 COMBINATORIALLY (no clock dependence).
884 Inputs Temp Output Data
906 1 1 0 0 0 1 1 process(i_data)
907 1 1 0 1 1 1 0 process(i_data)
908 1 1 1 0 0 1 1 process(i_data)
909 1 1 1 1 0 1 1 process(i_data)
912 Note: PoR is *NOT* involved in the above decision-making.
915 def elaborate(self
, platform
):
916 self
.m
= m
= ControlBase
._elaborate
(self
, platform
)
918 data_valid
= Signal() # is data valid or not
919 r_data
= self
.stage
.ospec() # output type
922 p_i_valid
= Signal(reset_less
=True)
923 pv
= Signal(reset_less
=True)
924 buf_full
= Signal(reset_less
=True)
925 m
.d
.comb
+= p_i_valid
.eq(self
.p
.i_valid_test
)
926 m
.d
.comb
+= pv
.eq(self
.p
.i_valid
& self
.p
.o_ready
)
927 m
.d
.comb
+= buf_full
.eq(~self
.n
.i_ready_test
& data_valid
)
929 m
.d
.comb
+= self
.n
.o_valid
.eq(data_valid
)
930 m
.d
.comb
+= self
.p
._o
_ready
.eq(~data_valid | self
.n
.i_ready_test
)
931 m
.d
.sync
+= data_valid
.eq(p_i_valid | buf_full
)
934 m
.d
.sync
+= eq(r_data
, self
.stage
.process(self
.p
.i_data
))
935 o_data
= self
._postprocess
(r_data
)
936 m
.d
.comb
+= eq(self
.n
.o_data
, o_data
)
940 class UnbufferedPipeline2(ControlBase
):
941 """ A simple pipeline stage with single-clock synchronisation
942 and two-way valid/ready synchronised signalling.
944 Note that a stall in one stage will result in the entire pipeline
947 Also that unlike BufferedHandshake, the valid/ready signalling does NOT
948 travel synchronously with the data: the valid/ready signalling
949 combines in a *combinatorial* fashion. Therefore, a long pipeline
950 chain will lengthen propagation delays.
952 Argument: stage. see Stage API, above
954 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
955 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
956 stage-1 p.i_data >>in stage n.o_data out>> stage+1
961 p.i_data : StageInput, shaped according to ispec
963 p.o_data : StageOutput, shaped according to ospec
965 buf : output_shape according to ospec
966 A temporary (buffered) copy of a valid output
967 This is HELD if the output is not ready. It is updated
970 Inputs Temp Output Data
972 P P N N ~NiR& N P (buf_full)
977 0 0 0 0 0 0 1 process(i_data)
978 0 0 0 1 1 1 0 reg (odata, unchanged)
979 0 0 1 0 0 0 1 process(i_data)
980 0 0 1 1 0 0 1 process(i_data)
982 0 1 0 0 0 0 1 process(i_data)
983 0 1 0 1 1 1 0 reg (odata, unchanged)
984 0 1 1 0 0 0 1 process(i_data)
985 0 1 1 1 0 0 1 process(i_data)
987 1 0 0 0 0 1 1 process(i_data)
988 1 0 0 1 1 1 0 reg (odata, unchanged)
989 1 0 1 0 0 1 1 process(i_data)
990 1 0 1 1 0 1 1 process(i_data)
992 1 1 0 0 0 1 1 process(i_data)
993 1 1 0 1 1 1 0 reg (odata, unchanged)
994 1 1 1 0 0 1 1 process(i_data)
995 1 1 1 1 0 1 1 process(i_data)
998 Note: PoR is *NOT* involved in the above decision-making.
1001 def elaborate(self
, platform
):
1002 self
.m
= m
= ControlBase
._elaborate
(self
, platform
)
1004 buf_full
= Signal() # is data valid or not
1005 buf
= self
.stage
.ospec() # output type
1008 p_i_valid
= Signal(reset_less
=True)
1009 m
.d
.comb
+= p_i_valid
.eq(self
.p
.i_valid_test
)
1011 m
.d
.comb
+= self
.n
.o_valid
.eq(buf_full | p_i_valid
)
1012 m
.d
.comb
+= self
.p
._o
_ready
.eq(~buf_full
)
1013 m
.d
.sync
+= buf_full
.eq(~self
.n
.i_ready_test
& self
.n
.o_valid
)
1015 o_data
= Mux(buf_full
, buf
, self
.stage
.process(self
.p
.i_data
))
1016 o_data
= self
._postprocess
(o_data
)
1017 m
.d
.comb
+= eq(self
.n
.o_data
, o_data
)
1018 m
.d
.sync
+= eq(buf
, self
.n
.o_data
)
1023 class PassThroughStage(StageCls
):
1024 """ a pass-through stage which has its input data spec equal to its output,
1025 and "passes through" its data from input to output.
1027 def __init__(self
, iospecfn
):
1028 self
.iospecfn
= iospecfn
1029 def ispec(self
): return self
.iospecfn()
1030 def ospec(self
): return self
.iospecfn()
1031 def process(self
, i
): return i
1034 class PassThroughHandshake(ControlBase
):
1035 """ A control block that delays by one clock cycle.
1037 Inputs Temporary Output Data
1038 ------- ------------------ ----- ----
1039 P P N N PiV& PiV| NiR| pvr N P (pvr)
1040 i o i o PoR ~PoR ~NoV o o
1044 0 0 0 0 0 1 1 0 1 1 odata (unchanged)
1045 0 0 0 1 0 1 0 0 1 0 odata (unchanged)
1046 0 0 1 0 0 1 1 0 1 1 odata (unchanged)
1047 0 0 1 1 0 1 1 0 1 1 odata (unchanged)
1049 0 1 0 0 0 0 1 0 0 1 odata (unchanged)
1050 0 1 0 1 0 0 0 0 0 0 odata (unchanged)
1051 0 1 1 0 0 0 1 0 0 1 odata (unchanged)
1052 0 1 1 1 0 0 1 0 0 1 odata (unchanged)
1054 1 0 0 0 0 1 1 1 1 1 process(in)
1055 1 0 0 1 0 1 0 0 1 0 odata (unchanged)
1056 1 0 1 0 0 1 1 1 1 1 process(in)
1057 1 0 1 1 0 1 1 1 1 1 process(in)
1059 1 1 0 0 1 1 1 1 1 1 process(in)
1060 1 1 0 1 1 1 0 0 1 0 odata (unchanged)
1061 1 1 1 0 1 1 1 1 1 1 process(in)
1062 1 1 1 1 1 1 1 1 1 1 process(in)
1067 def elaborate(self
, platform
):
1068 self
.m
= m
= ControlBase
._elaborate
(self
, platform
)
1070 r_data
= self
.stage
.ospec() # output type
1073 p_i_valid
= Signal(reset_less
=True)
1074 pvr
= Signal(reset_less
=True)
1075 m
.d
.comb
+= p_i_valid
.eq(self
.p
.i_valid_test
)
1076 m
.d
.comb
+= pvr
.eq(p_i_valid
& self
.p
.o_ready
)
1078 m
.d
.comb
+= self
.p
.o_ready
.eq(~self
.n
.o_valid | self
.n
.i_ready_test
)
1079 m
.d
.sync
+= self
.n
.o_valid
.eq(p_i_valid | ~self
.p
.o_ready
)
1081 odata
= Mux(pvr
, self
.stage
.process(self
.p
.i_data
), r_data
)
1082 m
.d
.sync
+= eq(r_data
, odata
)
1083 r_data
= self
._postprocess
(r_data
)
1084 m
.d
.comb
+= eq(self
.n
.o_data
, r_data
)
1089 class RegisterPipeline(UnbufferedPipeline
):
1090 """ A pipeline stage that delays by one clock cycle, creating a
1091 sync'd latch out of o_data and o_valid as an indirect byproduct
1092 of using PassThroughStage
1094 def __init__(self
, iospecfn
):
1095 UnbufferedPipeline
.__init
__(self
, PassThroughStage(iospecfn
))
1098 class FIFOControl(ControlBase
):
1099 """ FIFO Control. Uses SyncFIFO to store data, coincidentally
1100 happens to have same valid/ready signalling as Stage API.
1102 i_data -> fifo.din -> FIFO -> fifo.dout -> o_data
1105 def __init__(self
, depth
, stage
, in_multi
=None, stage_ctl
=False,
1106 fwft
=True, buffered
=False, pipe
=False):
1109 * depth: number of entries in the FIFO
1110 * stage: data processing block
1111 * fwft : first word fall-thru mode (non-fwft introduces delay)
1112 * buffered: use buffered FIFO (introduces extra cycle delay)
1114 NOTE 1: FPGAs may have trouble with the defaults for SyncFIFO
1115 (fwft=True, buffered=False)
1117 NOTE 2: i_data *must* have a shape function. it can therefore
1118 be a Signal, or a Record, or a RecordObject.
1120 data is processed (and located) as follows:
1122 self.p self.stage temp fn temp fn temp fp self.n
1123 i_data->process()->result->cat->din.FIFO.dout->cat(o_data)
1125 yes, really: cat produces a Cat() which can be assigned to.
1126 this is how the FIFO gets de-catted without needing a de-cat
1130 assert not (fwft
and buffered
), "buffered cannot do fwft"
1134 self
.buffered
= buffered
1137 ControlBase
.__init
__(self
, stage
, in_multi
, stage_ctl
)
1139 def elaborate(self
, platform
):
1140 self
.m
= m
= ControlBase
._elaborate
(self
, platform
)
1142 # make a FIFO with a signal of equal width to the o_data.
1143 (fwidth
, _
) = self
.n
.o_data
.shape()
1145 fifo
= SyncFIFOBuffered(fwidth
, self
.fdepth
)
1147 fifo
= Queue(fwidth
, self
.fdepth
, fwft
=self
.fwft
, pipe
=self
.pipe
)
1148 m
.submodules
.fifo
= fifo
1150 # store result of processing in combinatorial temporary
1151 result
= self
.stage
.ospec()
1152 m
.d
.comb
+= eq(result
, self
.stage
.process(self
.p
.i_data
))
1154 # connect previous rdy/valid/data - do cat on i_data
1155 # NOTE: cannot do the PrevControl-looking trick because
1156 # of need to process the data. shaaaame....
1157 m
.d
.comb
+= [fifo
.we
.eq(self
.p
.i_valid_test
),
1158 self
.p
.o_ready
.eq(fifo
.writable
),
1159 eq(fifo
.din
, cat(result
)),
1162 # connect next rdy/valid/data - do cat on o_data
1163 connections
= [self
.n
.o_valid
.eq(fifo
.readable
),
1164 fifo
.re
.eq(self
.n
.i_ready_test
),
1166 if self
.fwft
or self
.buffered
:
1167 m
.d
.comb
+= connections
1169 m
.d
.sync
+= connections
# unbuffered fwft mode needs sync
1170 o_data
= cat(self
.n
.o_data
).eq(fifo
.dout
)
1171 o_data
= self
._postprocess
(o_data
)
1178 class UnbufferedPipeline(FIFOControl
):
1179 def __init__(self
, stage
, in_multi
=None, stage_ctl
=False):
1180 FIFOControl
.__init
__(self
, 1, stage
, in_multi
, stage_ctl
,
1181 fwft
=True, pipe
=False)
1183 # aka "BreakReadyStage" XXX had to set fwft=True to get it to work
1184 class PassThroughHandshake(FIFOControl
):
1185 def __init__(self
, stage
, in_multi
=None, stage_ctl
=False):
1186 FIFOControl
.__init
__(self
, 1, stage
, in_multi
, stage_ctl
,
1187 fwft
=True, pipe
=True)
1189 # this is *probably* BufferedHandshake, although test #997 now succeeds.
1190 class BufferedHandshake(FIFOControl
):
1191 def __init__(self
, stage
, in_multi
=None, stage_ctl
=False):
1192 FIFOControl
.__init
__(self
, 2, stage
, in_multi
, stage_ctl
,
1193 fwft
=True, pipe
=False)
1197 # this is *probably* SimpleHandshake (note: memory cell size=0)
1198 class SimpleHandshake(FIFOControl):
1199 def __init__(self, stage, in_multi=None, stage_ctl=False):
1200 FIFOControl.__init__(self, 0, stage, in_multi, stage_ctl,
1201 fwft=True, pipe=False)