""" Pipeline and BufferedPipeline implementation, conforming to the same API.
    For multi-input and multi-output variants, see multipipe.

    eq:
    --

    a strategically very important function that is identical in function
    to nmigen's Signal.eq function, except it may take objects, or a list
    of objects, or a tuple of objects, and where objects may also be
    Records.

    Stage API:
    ---------

    a stage requires compliance with a strict API that may be
    implemented in several ways, including as a static class.
    the methods of a stage instance must be as follows:

    * ispec() - Input data format specification
                returns an object or a list or tuple of objects, or
                a Record, each object having an "eq" function which
                takes responsibility for copying by assignment all
                sub-objects
    * ospec() - Output data format specification
                requirements as for ispec
    * process(i) - Processes an ispec-formatted object
                returns a combinatorial block of a result that
                may be assigned to the output, by way of the "eq"
                function
    * setup(m, i) - Optional function for setting up submodules
                may be used for more complex stages, to link
                the input (i) to submodules. must take responsibility
                for adding those submodules to the module (m).
                the submodules must be combinatorial blocks and
                must have their inputs and output linked combinatorially.

    Both StageCls (for use with non-static classes) and Stage (for use
    by static classes) are abstract classes from which, for convenience
    and as a courtesy to other developers, anything conforming to the
    Stage API may *choose* to derive.
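
    For example (purely illustrative, not part of this file), a minimal
    stage conforming to the static variant of the API might look like
    this; the names and the 16-bit width are arbitrary assumptions:

        class ExampleAddStage:
            @staticmethod
            def ispec(): return (Signal(16), Signal(16))
            @staticmethod
            def ospec(): return Signal(16)
            @staticmethod
            def process(i):
                a, b = i
                return a + b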

    StageChain:
    ----------

    A useful combinatorial wrapper around stages that chains them together
    and then presents a Stage-API-conformant interface. By presenting
    the same API as the stages it wraps, it can clearly be used recursively.

    RecordBasedStage:
    ----------------

    A convenience class that takes an input shape, output shape, a
    "processing" function and an optional "setup" function. Honestly
    though, there's not much more effort to just... create a class
    that returns a couple of Records (see ExampleAddRecordStage in
    examples).

    PassThroughStage:
    ----------------

    A convenience class that takes a single function as a parameter,
    which is called to create the exact same input and output spec.
    It has a process() function that simply returns its input.

    Instances of this class are completely redundant if handed to
    StageChain, however when passed to UnbufferedPipeline they
    can be used to introduce a single clock delay.

    ControlBase:
    -----------

    The base class for pipelines. Contains previous and next ready/valid/data.
    Also has an extremely useful "connect" function that can be used to
    connect a chain of pipelines and present the exact same prev/next
    ready/valid/data API.

    UnbufferedPipeline:
    ------------------

    A simple stalling clock-synchronised pipeline that has no buffering
    (unlike BufferedPipeline). Data flows on *every* clock cycle when
    the conditions are right (this is nominally when the input is valid
    and the output is ready).

    A stall anywhere along the line will result in a stall back-propagating
    down the entire chain. The BufferedPipeline by contrast will buffer
    incoming data, allowing previous stages one clock cycle's grace before
    also having to stall.

    An advantage of the UnbufferedPipeline over the Buffered one is
    that the amount of logic needed (number of gates) is greatly
    reduced (no second set of buffers, basically).

    The disadvantage of the UnbufferedPipeline is that the valid/ready
    logic, if chained together, is *combinatorial*, resulting in
    progressively larger gate delay.

    RegisterPipeline:
    ----------------

    A convenience class: because UnbufferedPipeline introduces a single
    clock delay, using a PassThroughStage as its stage results in a pipeline
    stage that simply delays its (unmodified) input by one clock cycle.

    BufferedPipeline:
    ----------------

    nmigen implementation of a buffered pipeline stage, based on zipcpu:
    https://zipcpu.com/blog/2017/08/14/strategies-for-pipelining.html

    this class requires quite a bit of thought to understand how it works
    (and why it is needed in the first place). reading the above is
    *strongly* recommended.

    unlike john dawson's IEEE754 FPU STB/ACK signalling, which requires
    the STB / ACK signals to raise and lower (on separate clocks) before
    data may proceed (thus only allowing one piece of data to proceed
    on *ALTERNATE* cycles), the signalling here is a true pipeline
    where data will flow on *every* clock when the conditions are right.

    input acceptance conditions are when:
        * incoming previous-stage strobe (p.i_valid) is HIGH
        * outgoing previous-stage ready  (p.o_ready) is HIGH

    output transmission conditions are when:
        * outgoing next-stage strobe (n.o_valid) is HIGH
        * outgoing next-stage ready  (n.i_ready) is HIGH

    the tricky bit is when the input has valid data and the output is not
    ready to accept it. if it wasn't for the clock synchronisation, it
    would be possible to tell the input "hey don't send that data, we're
    not ready". unfortunately, it's not possible to "change the past":
    the previous stage *has no choice* but to pass on its data.

    therefore, the incoming data *must* be accepted - and stored: that
    is the responsibility / contract that this stage *must* accept.
    on the same clock, it's possible to tell the input that it must
    not send any more data. this is the "stall" condition.

    we now effectively have *two* possible pieces of data to "choose" from:
    the buffered data, and the incoming data. the decision as to which
    to process and output is based on whether we are in "stall" or not.
    i.e. when the next stage is no longer ready, the output comes from
    the buffer if a stall had previously occurred, otherwise it comes
    direct from processing the input.

    this allows us to respect a synchronous "travelling STB" with what
    dan calls a "buffered handshake".

    it's quite a complex state machine!
"""

from nmigen import Signal, Cat, Const, Mux, Module, Value
from nmigen.cli import verilog, rtlil
from nmigen.hdl.ast import ArrayProxy
from nmigen.hdl.rec import Record, Layout

from abc import ABCMeta, abstractmethod
from collections.abc import Sequence


class PrevControl:
    """ contains signals that come *from* the previous stage (both in and out)
        * i_valid: previous stage indicating all incoming data is valid.
                   may be a multi-bit signal, where all bits are required
                   to be asserted to indicate "valid".
        * o_ready: output to next stage indicating readiness to accept data
        * i_data : an input - added by the user of this class
    """

    def __init__(self, i_width=1, stage_ctl=False):
        self.stage_ctl = stage_ctl
        self.i_valid = Signal(i_width, name="p_i_valid") # prev   >>in  self
        self._o_ready = Signal(name="p_o_ready")         # prev   <<out self
        self.i_data = None # XXX MUST BE ADDED BY USER
        if stage_ctl:
            self.s_o_ready = Signal(name="p_s_o_rdy")    # prev   <<out self

    @property
    def o_ready(self):
        """ public-facing API: indicates (externally) that stage is ready
        """
        if self.stage_ctl:
            return self.s_o_ready # set dynamically by stage
        return self._o_ready # return this when not under dynamic control

    def _connect_in(self, prev):
        """ internal helper function to connect stage to an input source.
            do not use to connect stage-to-stage!
        """
        return [self.i_valid.eq(prev.i_valid),
                prev.o_ready.eq(self.o_ready),
                eq(self.i_data, prev.i_data),
               ]

    def i_valid_logic(self):
        vlen = len(self.i_valid)
        if vlen > 1: # multi-bit case: valid only when i_valid is all 1s
            all1s = Const(-1, (len(self.i_valid), False))
            return self.i_valid == all1s
        # single-bit i_valid case
        return self.i_valid


class NextControl:
    """ contains the signals that go *to* the next stage (both in and out)
        * o_valid: output indicating to next stage that data is valid
        * i_ready: input from next stage indicating that it can accept data
        * o_data : an output - added by the user of this class
    """
    def __init__(self, stage_ctl=False):
        self.stage_ctl = stage_ctl
        self._o_valid = Signal(name="n_o_valid") # self out>>  next
        self.i_ready = Signal(name="n_i_ready")  # self <<in   next
        self.o_data = None # XXX MUST BE ADDED BY USER
        if stage_ctl:
            self.s_o_valid = Signal(name="n_s_o_vld") # self out>>  next

    @property
    def o_valid(self):
        """ public-facing API: indicates (externally) that data is valid
        """
        if self.stage_ctl:
            return self.s_o_valid
        return self._o_valid

    def connect_to_next(self, nxt):
        """ helper function to connect to the next stage data/valid/ready.
            data/valid is passed *TO* nxt, and ready comes *IN* from nxt.
            use this when connecting stage-to-stage
        """
        return [nxt.i_valid.eq(self.o_valid),
                self.i_ready.eq(nxt.o_ready),
                eq(nxt.i_data, self.o_data),
               ]

    def _connect_out(self, nxt):
        """ internal helper function to connect stage to an output source.
            do not use to connect stage-to-stage!
        """
        return [nxt.o_valid.eq(self.o_valid),
                self.i_ready.eq(nxt.i_ready),
                eq(nxt.o_data, self.o_data),
               ]


def eq(o, i):
    """ makes signals equal: a helper routine which identifies if it is being
        passed a list (or tuple) of objects, or signals, or Records, and calls
        the objects' eq function.

        complex objects (classes) can be used: they must follow the
        convention of having an eq member function, which takes the
        responsibility of further calling eq and returning a list of
        eq assignments

        Record is a special (unusual, recursive) case, where the input may be
        specified as a dictionary (which may contain further dictionaries,
        recursively), where the field names of the dictionary must match
        the Record's field spec. Alternatively, an object with the same
        member names as the Record may be assigned: it does not have to
        *be* a Record.

        ArrayProxy is also special-cased, it's a bit messy: whilst ArrayProxy
        has an eq function, the object being assigned to it (e.g. a python
        object) might not. despite the *input* having an eq function,
        that doesn't help us, because it's the *ArrayProxy* that's being
        assigned to. so.... we cheat. use the ports() function of the
        python object, enumerate them, find out the list of Signals that way,
        and assign them.
    """
    res = []
    if isinstance(o, dict):
        for (k, v) in o.items():
            print ("d-eq", v, i[k])
            res.append(v.eq(i[k]))
        return res

    if not isinstance(o, Sequence):
        o, i = [o], [i]
    for (ao, ai) in zip(o, i):
        #print ("eq", ao, ai)
        if isinstance(ao, Record):
            for idx, (field_name, field_shape, _) in enumerate(ao.layout):
                if isinstance(field_shape, Layout):
                    val = ai.fields
                else:
                    val = ai
                if hasattr(val, field_name): # check for attribute
                    val = getattr(val, field_name)
                else:
                    val = val[field_name] # dictionary-style specification
                rres = eq(ao.fields[field_name], val)
                res += rres
        elif isinstance(ao, ArrayProxy) and not isinstance(ai, Value):
            for p in ai.ports():
                op = getattr(ao, p.name)
                #print (op, p, p.name)
                rres = op.eq(p)
                if not isinstance(rres, Sequence):
                    rres = [rres]
                res += rres
        else:
            rres = ao.eq(ai)
            if not isinstance(rres, Sequence):
                rres = [rres]
            res += rres
    return res
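
# Hedged usage sketch (not part of the original file): eq() returns a list of
# assignments which can be added to a module's combinatorial (or sync) domain.
# The two-element lists and the 16-bit width below are arbitrary assumptions.
def _example_eq_usage():
    m = Module()
    a = [Signal(16, name="a0"), Signal(16, name="a1")]
    b = [Signal(16, name="b0"), Signal(16, name="b1")]
    m.d.comb += eq(a, b)   # same effect as a0.eq(b0), a1.eq(b1)
    return m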


class StageCls(metaclass=ABCMeta):
    """ Class-based "Stage" API. requires instantiation (after derivation)

        see "Stage API" above. Note: python does *not* require derivation
        from this class. All that is required is that the pipelines *have*
        the functions listed in this class. Derivation from this class
        is therefore merely a "courtesy" to maintainers.
    """
    @abstractmethod
    def ispec(self): pass       # REQUIRED
    @abstractmethod
    def ospec(self): pass       # REQUIRED
    #@abstractmethod
    #def setup(self, m, i): pass # OPTIONAL
    @abstractmethod
    def process(self, i): pass  # REQUIRED


class Stage(metaclass=ABCMeta):
    """ Static "Stage" API. does not require instantiation (after derivation)

        see "Stage API" above. Note: python does *not* require derivation
        from this class. All that is required is that the pipelines *have*
        the functions listed in this class. Derivation from this class
        is therefore merely a "courtesy" to maintainers.
    """
    @staticmethod
    @abstractmethod
    def ispec(): pass

    @staticmethod
    @abstractmethod
    def ospec(): pass

    #@staticmethod
    #@abstractmethod
    #def setup(m, i): pass

    @staticmethod
    @abstractmethod
    def process(i): pass


class RecordBasedStage(Stage):
    """ convenience class which provides a Records-based layout.
        honestly it's a lot easier just to create a direct Records-based
        class (see ExampleAddRecordStage)
    """
    def __init__(self, in_shape, out_shape, processfn, setupfn=None):
        self.in_shape = in_shape
        self.out_shape = out_shape
        self.__process = processfn
        self.__setup = setupfn
    def ispec(self): return Record(self.in_shape)
    def ospec(self): return Record(self.out_shape)
    def process(self, i): return self.__process(i)
    def setup(self, m, i): return self.__setup(m, i)
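
# Hedged usage sketch (illustrative only, not part of the original file):
# wrap a simple "add two fields" computation in a RecordBasedStage. The
# layouts, field names and widths are arbitrary assumptions.
def _example_record_based_stage():
    in_shape = [("op_a", 16), ("op_b", 16)]
    out_shape = [("sum", 16)]
    # process() receives a Record(in_shape); returning a dictionary keyed by
    # field name is one of the forms that eq() can assign to the Record
    # created by ospec().
    def _add(i):
        return {"sum": i.op_a + i.op_b}
    return RecordBasedStage(in_shape, out_shape, _add)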


class StageChain(StageCls):
    """ pass in a list of stages, and they will automatically be
        chained together via their input and output specs into a
        combinatorial chain.

        the end result basically conforms to the exact same Stage API.

        * input to this class will be the input of the first stage
        * output of first stage goes into input of second
        * output of second goes into input of third (etc. etc.)
        * the output of this class will be the output of the last stage
    """
    def __init__(self, chain, specallocate=False):
        self.chain = chain
        self.specallocate = specallocate

    def ispec(self):
        return self.chain[0].ispec()

    def ospec(self):
        return self.chain[-1].ospec()

    def setup(self, m, i):
        for (idx, c) in enumerate(self.chain):
            if hasattr(c, "setup"):
                c.setup(m, i)                   # stage may have some module stuff
            if self.specallocate:
                o = self.chain[idx].ospec()     # last assignment survives
                m.d.comb += eq(o, c.process(i)) # process input into "o"
            else:
                o = c.process(i)                # store processed output in "o"
            if idx != len(self.chain)-1:
                if self.specallocate:
                    ni = self.chain[idx+1].ispec() # new input on next loop
                    m.d.comb += eq(ni, o)          # assign to next input
                    i = ni
                else:
                    i = o
        self.o = o # last loop is the output

    def process(self, i):
        return self.o # conform to Stage API: return last-loop output
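
# Hedged usage sketch (illustrative, not part of the original file): chain
# two Stage-API-conformant objects combinatorially and use the result exactly
# as if it were a single stage. The argument names are assumptions; any
# objects implementing ispec/ospec/process will do.
def _example_stagechain_usage(m, stage1, stage2):
    chain = StageChain([stage1, stage2])
    i = chain.ispec()                    # input spec comes from stage1
    chain.setup(m, i)                    # links the stages combinatorially
    o = chain.ospec()                    # output spec comes from stage2
    m.d.comb += eq(o, chain.process(i))  # process() returns the chained result
    return i, o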


class ControlBase:
    """ Common functions for Pipeline API
    """
    def __init__(self, in_multi=None, stage_ctl=False):
        """ Base class containing ready/valid/data to previous and next stages

            * p: contains ready/valid to the previous stage
            * n: contains ready/valid to the next stage

            Except when calling ControlBase.connect(), the user must also:
            * add an i_data member to PrevControl (p) and
            * add an o_data member to NextControl (n)
        """
        # set up input and output IO ACK (prev/next ready/valid)
        self.p = PrevControl(in_multi, stage_ctl)
        self.n = NextControl(stage_ctl)

    def connect_to_next(self, nxt):
        """ helper function to connect to the next stage data/valid/ready.
        """
        return self.n.connect_to_next(nxt.p)

    def _connect_in(self, prev):
        """ internal helper function to connect stage to an input source.
            do not use to connect stage-to-stage!
        """
        return self.p._connect_in(prev.p)

    def _connect_out(self, nxt):
        """ internal helper function to connect stage to an output source.
            do not use to connect stage-to-stage!
        """
        return self.n._connect_out(nxt.n)

    def connect(self, pipechain):
        """ connects a chain (list) of Pipeline instances together and
            links them to this ControlBase instance:

                        in <----> self <---> out
                            |     ^
                            v     |
                         [pipe1, pipe2, pipe3, pipe4]
                          |  ^  |  ^  |     ^
                          v  |  v  |  v     |
                        out---in out--in out---in

            Also takes care of allocating i_data/o_data, by looking up
            the data spec for each end of the pipechain. i.e. it is NOT
            necessary to allocate self.p.i_data or self.n.o_data manually:
            this is handled AUTOMATICALLY, here.

            Basically this function is the direct equivalent of StageChain,
            except that unlike StageChain, the Pipeline logic is followed.

            Just as StageChain presents an object that conforms to the
            Stage API from a list of objects that also conform to the
            Stage API, an object that calls this Pipeline connect function
            has the exact same pipeline API as the list of pipeline objects
            it is called with.

            Thus it becomes possible to build up larger chains recursively.
            More complex chains (multi-input, multi-output) will have to be
            done manually.
        """
        eqs = [] # collated list of assignment statements

        # connect inter-chain
        for i in range(len(pipechain)-1):
            pipe1 = pipechain[i]
            pipe2 = pipechain[i+1]
            eqs += pipe1.connect_to_next(pipe2)

        # connect front of chain to ourselves
        front = pipechain[0]
        self.p.i_data = front.stage.ispec()
        eqs += front._connect_in(self)

        # connect end of chain to ourselves
        end = pipechain[-1]
        self.n.o_data = end.stage.ospec()
        eqs += end._connect_out(self)

        return eqs

    def set_input(self, i):
        """ helper function to set the input data
        """
        return eq(self.p.i_data, i)

    def ports(self):
        res = [self.p.i_valid, self.n.i_ready,
               self.n.o_valid, self.p.o_ready,
              ]
        if hasattr(self.p.i_data, "ports"):
            res += self.p.i_data.ports()
        else:
            res += self.p.i_data
        if hasattr(self.n.o_data, "ports"):
            res += self.n.o_data.ports()
        else:
            res += self.n.o_data
        return res

    def _elaborate(self, platform):
        """ handles case where stage has dynamic ready/valid functions
        """
        m = Module()
        if not self.n.stage_ctl:
            return m

        # when the pipeline (buffered or otherwise) says "ready",
        # test the *stage* "ready".

        with m.If(self.p._o_ready):
            m.d.comb += self.p.s_o_ready.eq(self.stage.p_o_ready)
        with m.Else():
            m.d.comb += self.p.s_o_ready.eq(0)

        # when the pipeline (buffered or otherwise) says "valid",
        # test the *stage* "valid".
        with m.If(self.n._o_valid):
            m.d.comb += self.n.s_o_valid.eq(self.stage.n_o_valid)
        with m.Else():
            m.d.comb += self.n.s_o_valid.eq(0)
        return m


class BufferedPipeline(ControlBase):
    """ buffered pipeline stage. data and strobe signals travel in sync.
        if ever the input is ready and the output is not, processed data
        is shunted in a temporary register.

        Argument: stage. see Stage API above

        stage-1   p.i_valid >>in   stage   n.o_valid out>>   stage+1
        stage-1   p.o_ready <<out  stage   n.i_ready <<in    stage+1
        stage-1   p.i_data  >>in   stage   n.o_data  out>>   stage+1
                              |             |
                              process --->----^
                              |             |
                              +-- r_data ->-+

        input data p.i_data is read (only), is processed and goes into an
        intermediate result store [process()]. this is updated combinatorially.

        in a non-stall condition, the intermediate result will go into the
        output (update_output). however if ever there is a stall, it goes
        into r_data instead [update_buffer()].

        when the non-stall condition is released, r_data is the first
        to be transferred to the output [flush_buffer()], and the stall
        condition cleared.

        on the next cycle (as long as stall is not raised again) the
        input may begin to be processed and transferred directly to output.

    """
    def __init__(self, stage, stage_ctl=False):
        ControlBase.__init__(self, stage_ctl=stage_ctl)
        self.stage = stage

        # set up the input and output data
        self.p.i_data = stage.ispec() # input type
        self.n.o_data = stage.ospec()

    def elaborate(self, platform):

        self.m = ControlBase._elaborate(self, platform)

        result = self.stage.ospec()
        r_data = self.stage.ospec()
        if hasattr(self.stage, "setup"):
            self.stage.setup(self.m, self.p.i_data)

        # establish some combinatorial temporaries
        o_n_validn = Signal(reset_less=True)
        i_p_valid_o_p_ready = Signal(reset_less=True)
        p_i_valid = Signal(reset_less=True)
        self.m.d.comb += [p_i_valid.eq(self.p.i_valid_logic()),
                          o_n_validn.eq(~self.n.o_valid),
                          i_p_valid_o_p_ready.eq(p_i_valid & self.p.o_ready),
                         ]

        # store result of processing in combinatorial temporary
        self.m.d.comb += eq(result, self.stage.process(self.p.i_data))

        # if not in stall condition, update the temporary register
        with self.m.If(self.p.o_ready): # not stalled
            self.m.d.sync += eq(r_data, result) # update buffer

        with self.m.If(self.n.i_ready): # next stage is ready
            with self.m.If(self.p._o_ready): # not stalled
                # nothing in buffer: send (processed) input direct to output
                self.m.d.sync += [self.n._o_valid.eq(p_i_valid),
                                  eq(self.n.o_data, result), # update output
                                 ]
            with self.m.Else(): # p.o_ready is false, and something in buffer
                # Flush the [already processed] buffer to the output port.
                self.m.d.sync += [self.n._o_valid.eq(1), # reg empty
                                  eq(self.n.o_data, r_data), # flush buffer
                                  self.p._o_ready.eq(1), # clear stall
                                 ]
                # ignore input, since p.o_ready is also false.

        # (n.i_ready) is false here: next stage is *not* ready
        with self.m.Elif(o_n_validn): # next stage being told "ready"
            self.m.d.sync += [self.n._o_valid.eq(p_i_valid),
                              self.p._o_ready.eq(1), # Keep the buffer empty
                              eq(self.n.o_data, result), # set output data
                             ]

        # (n.i_ready) false and (n.o_valid) true:
        with self.m.Elif(i_p_valid_o_p_ready):
            # If next stage *is* ready, and not stalled yet, accept input
            self.m.d.sync += self.p._o_ready.eq(~(p_i_valid & self.n.o_valid))

        return self.m
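
# Hedged usage sketch (illustrative, not part of the original file): wrap any
# Stage-API-conformant object in a BufferedPipeline and elaborate it. The
# function and argument names are assumptions.
def _example_buffered_usage(example_stage):
    dut = BufferedPipeline(example_stage)
    m = dut.elaborate(platform=None)
    # the four control signals are always present; the data ports depend on
    # whatever ispec()/ospec() of the wrapped stage return.
    ctl = [dut.p.i_valid, dut.p.o_ready, dut.n.o_valid, dut.n.i_ready]
    return m, ctl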


class UnbufferedPipeline(ControlBase):
    """ A simple pipeline stage with single-clock synchronisation
        and two-way valid/ready synchronised signalling.

        Note that a stall in one stage will result in the entire pipeline
        chain stalling.

        Also that unlike BufferedPipeline, the valid/ready signalling does NOT
        travel synchronously with the data: the valid/ready signalling
        combines in a *combinatorial* fashion. Therefore, a long pipeline
        chain will lengthen propagation delays.

        Argument: stage. see Stage API, above

        stage-1   p.i_valid >>in   stage   n.o_valid out>>   stage+1
        stage-1   p.o_ready <<out  stage   n.i_ready <<in    stage+1
        stage-1   p.i_data  >>in   stage   n.o_data  out>>   stage+1
                              |             |
                            r_data        result
                              |             |
                              +--process ->-+

        Attributes:
        -----------
        p.i_data : StageInput, shaped according to ispec
            The pipeline input
        n.o_data : StageOutput, shaped according to ospec
            The pipeline output
        r_data : input_shape according to ispec
            A temporary (buffered) copy of a prior (valid) input.
            This is HELD if the output is not ready. It is updated
            SYNCHRONOUSLY.
        result: output_shape according to ospec
            The output of the combinatorial logic. it is updated
            COMBINATORIALLY (no clock dependence).
    """

    def __init__(self, stage, stage_ctl=False):
        ControlBase.__init__(self, stage_ctl=stage_ctl)
        self.stage = stage

        # set up the input and output data
        self.p.i_data = stage.ispec() # input type
        self.n.o_data = stage.ospec() # output type

    def elaborate(self, platform):
        self.m = ControlBase._elaborate(self, platform)

        data_valid = Signal() # is data valid or not
        r_data = self.stage.ispec() # input type
        if hasattr(self.stage, "setup"):
            self.stage.setup(self.m, r_data)

        # some temporaries
        p_i_valid = Signal(reset_less=True)
        pv = Signal(reset_less=True)
        self.m.d.comb += p_i_valid.eq(self.p.i_valid_logic())
        self.m.d.comb += pv.eq(self.p.i_valid & self.p.o_ready)

        self.m.d.comb += self.n._o_valid.eq(data_valid)
        self.m.d.comb += self.p._o_ready.eq(~data_valid | self.n.i_ready)
        self.m.d.sync += data_valid.eq(p_i_valid | \
                                       (~self.n.i_ready & data_valid))
        with self.m.If(pv):
            self.m.d.sync += eq(r_data, self.p.i_data)
        self.m.d.comb += eq(self.n.o_data, self.stage.process(r_data))
        return self.m
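
# Hedged sketch of ControlBase.connect() usage (illustrative, not part of the
# original file): wire two BufferedPipeline instances in series and present
# the combined chain through a single ControlBase-derived class. The class
# and attribute names here are assumptions.
class _ExamplePipelineChain(ControlBase):
    def __init__(self, stage):
        ControlBase.__init__(self)
        self.pipe1 = BufferedPipeline(stage)
        self.pipe2 = BufferedPipeline(stage)

    def elaborate(self, platform):
        m = Module()
        m.submodules.pipe1 = self.pipe1
        m.submodules.pipe2 = self.pipe2
        # connect() chains the pipes together, allocates self.p.i_data and
        # self.n.o_data from the end stages, and returns the assignments.
        m.d.comb += self.connect([self.pipe1, self.pipe2])
        return m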


class PassThroughStage(StageCls):
    """ a pass-through stage which has its input data spec equal to its output,
        and "passes through" its data from input to output.
    """
    def __init__(self, iospecfn):
        self.iospecfn = iospecfn
    def ispec(self): return self.iospecfn()
    def ospec(self): return self.iospecfn()
    def process(self, i): return i


class RegisterPipeline(UnbufferedPipeline):
    """ A pipeline stage that delays by one clock cycle, creating a
        sync'd latch out of o_data and o_valid as an indirect byproduct
        of using PassThroughStage
    """
    def __init__(self, iospecfn):
        UnbufferedPipeline.__init__(self, PassThroughStage(iospecfn))
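
# Hedged end-to-end sketch (not part of the original file): create a one-clock
# delay with RegisterPipeline and convert it using the rtlil backend imported
# at the top of this file. The 16-bit width, the signal name and the explicit
# port list are arbitrary assumptions.
def _example_register_pipeline_rtlil():
    dut = RegisterPipeline(lambda: Signal(16, name="data"))
    return rtlil.convert(dut, ports=[dut.p.i_valid, dut.p.o_ready,
                                     dut.n.o_valid, dut.n.i_ready,
                                     dut.p.i_data, dut.n.o_data])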