[ieee754fpu.git] / src / add / singlepipe.py
1 """ Pipeline and BufferedPipeline implementation, conforming to the same API.
2 For multi-input and multi-output variants, see multipipe.
3
4 eq:
5 --
6
7 a strategically very important function that is identical in function
8 to nmigen's Signal.eq function, except it may take objects, or a list
9 of objects, or a tuple of objects, and where objects may also be
10 Records.
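
    For illustration, a minimal sketch (the signal names here are invented,
    and "m" is assumed to be an nmigen Module):

        x1, x2 = Signal(16), Signal(16)
        y1, y2 = Signal(16), Signal(16)
        m.d.comb += eq([x1, x2], [y1, y2])  # same effect as x1.eq(y1), x2.eq(y2)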
11
12 Stage API:
13 ---------
14
15 stage requires compliance with a strict API that may be
16     implemented in several ways, including as a static class.
17 the methods of a stage instance must be as follows:
18
19 * ispec() - Input data format specification
20 returns an object or a list or tuple of objects, or
21 a Record, each object having an "eq" function which
22 takes responsibility for copying by assignment all
23 sub-objects
24 * ospec() - Output data format specification
25                 requirements as for ispec
26     * process(i) - Processes an ispec-formatted object
27 returns a combinatorial block of a result that
28 may be assigned to the output, by way of the "eq"
29 function
30 * setup(m, i) - Optional function for setting up submodules
31 may be used for more complex stages, to link
32 the input (i) to submodules. must take responsibility
33 for adding those submodules to the module (m).
34 the submodules must be combinatorial blocks and
35                 must have their inputs and outputs linked combinatorially.
36
37 Both StageCls (for use with non-static classes) and Stage (for use
38 by static classes) are abstract classes from which, for convenience
39 and as a courtesy to other developers, anything conforming to the
40 Stage API may *choose* to derive.
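
    By way of illustration, a minimal static stage might look like this
    (a sketch only; compare the classes in the examples module, such as
    ExampleAddRecordStage, for real implementations):

        class ExampleStage:
            # adds 1 to a 16-bit input (hypothetical, for illustration)
            @staticmethod
            def ispec(): return Signal(16, name="example_input")
            @staticmethod
            def ospec(): return Signal(16, name="example_output")
            @staticmethod
            def process(i): return i + 1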
41
42 StageChain:
43 ----------
44
45 A useful combinatorial wrapper around stages that chains them together
46 and then presents a Stage-API-conformant interface. By presenting
47 the same API as the stages it wraps, it can clearly be used recursively.
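
    A usage sketch (StageA and StageB are hypothetical Stage-API classes,
    m is an nmigen Module and i an ispec-shaped input):

        chain = StageChain([StageA(), StageB()])
        chain.setup(m, i)      # links the stages combinatorially inside m
        o = chain.process(i)   # the output of the last stage in the chain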
48
49 RecordBasedStage:
50 ----------------
51
52 A convenience class that takes an input shape, output shape, a
53 "processing" function and an optional "setup" function. Honestly
54 though, there's not much more effort to just... create a class
55 that returns a couple of Records (see ExampleAddRecordStage in
56 examples).
57
58 PassThroughStage:
59 ----------------
60
61     A convenience class that takes a single specification-creating function
62     as a parameter, which is called to create the exact same input and output spec.
63 It has a process() function that simply returns its input.
64
65 Instances of this class are completely redundant if handed to
66 StageChain, however when passed to UnbufferedPipeline they
67 can be used to introduce a single clock delay.
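
    For example (a sketch; the spec function here is invented):

        def iospecfn(): return Signal(16, name="data")
        pt = PassThroughStage(iospecfn)  # ispec() and ospec() both return Signal(16)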
68
69 ControlBase:
70 -----------
71
72 The base class for pipelines. Contains previous and next ready/valid/data.
73 Also has an extremely useful "connect" function that can be used to
74 connect a chain of pipelines and present the exact same prev/next
75 ready/valid/data API.
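
    A usage sketch (pipe1 and pipe2 are hypothetical ControlBase-derived
    pipeline instances, and m is the enclosing Module):

        c = ControlBase()
        m.d.comb += c.connect([pipe1, pipe2])
        # c.p and c.n now present the same ready/valid/data API as the chain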
76
77 UnbufferedPipeline:
78 ------------------
79
80 A simple stalling clock-synchronised pipeline that has no buffering
81 (unlike BufferedPipeline). Data flows on *every* clock cycle when
82 the conditions are right (this is nominally when the input is valid
83 and the output is ready).
84
85 A stall anywhere along the line will result in a stall back-propagating
86 down the entire chain. The BufferedPipeline by contrast will buffer
87 incoming data, allowing previous stages one clock cycle's grace before
88 also having to stall.
89
90 An advantage of the UnbufferedPipeline over the Buffered one is
91 that the amount of logic needed (number of gates) is greatly
92     reduced (no second set of buffers, basically).
93
94 The disadvantage of the UnbufferedPipeline is that the valid/ready
95 logic, if chained together, is *combinatorial*, resulting in
96 progressively larger gate delay.
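
    For example, two such stages can be chained with connect_to_next
    (a sketch; ExampleStage is the hypothetical stage shown earlier):

        pipe1 = UnbufferedPipeline(ExampleStage)
        pipe2 = UnbufferedPipeline(ExampleStage)
        m.d.comb += pipe1.connect_to_next(pipe2)
        # (each pipe must also be added to m.submodules)
        # pipe1's p.o_ready now depends combinatorially on pipe2's p.o_ready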
97
98 RegisterPipeline:
99 ----------------
100
101     A convenience class: because UnbufferedPipeline introduces a single
102     clock delay, giving it a PassThroughStage as its stage results in a
103     pipeline stage that simply delays its (unmodified) input by one clock cycle.
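
    For example (a sketch; the lambda simply supplies a 16-bit spec):

        delay1 = RegisterPipeline(lambda: Signal(16))
        # n.o_data is p.i_data delayed by one clock (when ready/valid permit)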
104
105 BufferedPipeline:
106 ----------------
107
108 nmigen implementation of buffered pipeline stage, based on zipcpu:
109 https://zipcpu.com/blog/2017/08/14/strategies-for-pipelining.html
110
111 this module requires quite a bit of thought to understand how it works
112 (and why it is needed in the first place). reading the above is
113 *strongly* recommended.
114
115 unlike john dawson's IEEE754 FPU STB/ACK signalling, which requires
116 the STB / ACK signals to raise and lower (on separate clocks) before
117     data may proceed (thus only allowing one piece of data to proceed
118 on *ALTERNATE* cycles), the signalling here is a true pipeline
119 where data will flow on *every* clock when the conditions are right.
120
121 input acceptance conditions are when:
122 * incoming previous-stage strobe (p.i_valid) is HIGH
123 * outgoing previous-stage ready (p.o_ready) is LOW
124
125 output transmission conditions are when:
126 * outgoing next-stage strobe (n.o_valid) is HIGH
127 * outgoing next-stage ready (n.i_ready) is LOW
128
129 the tricky bit is when the input has valid data and the output is not
130 ready to accept it. if it wasn't for the clock synchronisation, it
131 would be possible to tell the input "hey don't send that data, we're
132 not ready". unfortunately, it's not possible to "change the past":
133 the previous stage *has no choice* but to pass on its data.
134
135 therefore, the incoming data *must* be accepted - and stored: that
136 is the responsibility / contract that this stage *must* accept.
137 on the same clock, it's possible to tell the input that it must
138 not send any more data. this is the "stall" condition.
139
140 we now effectively have *two* possible pieces of data to "choose" from:
141 the buffered data, and the incoming data. the decision as to which
142 to process and output is based on whether we are in "stall" or not.
143 i.e. when the next stage is no longer ready, the output comes from
144 the buffer if a stall had previously occurred, otherwise it comes
145 direct from processing the input.
146
147 this allows us to respect a synchronous "travelling STB" with what
148 dan calls a "buffered handshake".
149
150 it's quite a complex state machine!
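
    A usage sketch (ExampleStage as above; illustrative only):

        m = Module()
        pipe = BufferedPipeline(ExampleStage)
        m.submodules.pipe = pipe
        # drive pipe.p.i_valid / pipe.p.i_data, monitor pipe.p.o_ready;
        # read pipe.n.o_valid / pipe.n.o_data, drive pipe.n.i_ready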
151 """
152
153 from nmigen import Signal, Cat, Const, Mux, Module, Value
154 from nmigen.cli import verilog, rtlil
155 from nmigen.hdl.ast import ArrayProxy
156 from nmigen.hdl.rec import Record, Layout
157
158 from abc import ABCMeta, abstractmethod
159 from collections.abc import Sequence
160
161
162 class PrevControl:
163 """ contains signals that come *from* the previous stage (both in and out)
164 * i_valid: previous stage indicating all incoming data is valid.
165 may be a multi-bit signal, where all bits are required
166 to be asserted to indicate "valid".
167 * o_ready: output to next stage indicating readiness to accept data
168 * i_data : an input - added by the user of this class
169 """
170
171 def __init__(self, i_width=1, stage_ctl=False):
172 self.stage_ctl = stage_ctl
173 self.i_valid = Signal(i_width, name="p_i_valid") # prev >>in self
174 self._o_ready = Signal(name="p_o_ready") # prev <<out self
175 self.i_data = None # XXX MUST BE ADDED BY USER
176 if stage_ctl:
177 self.s_o_ready = Signal(name="p_s_o_rdy") # prev <<out self
178
179 @property
180 def o_ready(self):
181 """ public-facing API: indicates (externally) that stage is ready
182 """
183 if self.stage_ctl:
184 return self.s_o_ready # set dynamically by stage
185 return self._o_ready # return this when not under dynamic control
186
187 def _connect_in(self, prev):
188 """ internal helper function to connect stage to an input source.
189 do not use to connect stage-to-stage!
190 """
191 return [self.i_valid.eq(prev.i_valid_test),
192 prev.o_ready.eq(self.o_ready),
193 eq(self.i_data, prev.i_data),
194 ]
195
196 @property
197 def i_valid_test(self):
198 vlen = len(self.i_valid)
199 if vlen > 1:
200 # multi-bit case: valid only when i_valid is all 1s
201             all1s = Const(-1, (vlen, False))
202 i_valid = (self.i_valid == all1s)
203 else:
204 # single-bit i_valid case
205 i_valid = self.i_valid
206
207 # when stage indicates not ready, incoming data
208 # must "appear" to be not ready too
209 if self.stage_ctl:
210 i_valid = i_valid & self.s_o_ready
211
212 return i_valid
213
214
215 class NextControl:
216 """ contains the signals that go *to* the next stage (both in and out)
217 * o_valid: output indicating to next stage that data is valid
218 * i_ready: input from next stage indicating that it can accept data
219 * o_data : an output - added by the user of this class
220 """
221 def __init__(self, stage_ctl=False):
222 self.stage_ctl = stage_ctl
223 self.o_valid = Signal(name="n_o_valid") # self out>> next
224 self.i_ready = Signal(name="n_i_ready") # self <<in next
225 self.o_data = None # XXX MUST BE ADDED BY USER
226 self.d_valid = Signal(reset=1) # INTERNAL (data valid)
227
228 @property
229 def i_ready_test(self):
230 if self.stage_ctl:
231 return self.i_ready & self.d_valid
232 return self.i_ready
233
234 def connect_to_next(self, nxt):
235 """ helper function to connect to the next stage data/valid/ready.
236 data/valid is passed *TO* nxt, and ready comes *IN* from nxt.
237 use this when connecting stage-to-stage
238 """
239 return [nxt.i_valid.eq(self.o_valid),
240 self.i_ready.eq(nxt.o_ready),
241 eq(nxt.i_data, self.o_data),
242 ]
243
244 def _connect_out(self, nxt):
245 """ internal helper function to connect stage to an output source.
246 do not use to connect stage-to-stage!
247 """
248 return [nxt.o_valid.eq(self.o_valid),
249 self.i_ready.eq(nxt.i_ready_test),
250 eq(nxt.o_data, self.o_data),
251 ]
252
253
254 def eq(o, i):
255 """ makes signals equal: a helper routine which identifies if it is being
256 passed a list (or tuple) of objects, or signals, or Records, and calls
257 the objects' eq function.
258
259 complex objects (classes) can be used: they must follow the
260 convention of having an eq member function, which takes the
261 responsibility of further calling eq and returning a list of
262 eq assignments
263
264 Record is a special (unusual, recursive) case, where the input may be
265 specified as a dictionary (which may contain further dictionaries,
266 recursively), where the field names of the dictionary must match
267 the Record's field spec. Alternatively, an object with the same
268 member names as the Record may be assigned: it does not have to
269 *be* a Record.
270
271 ArrayProxy is also special-cased, it's a bit messy: whilst ArrayProxy
272 has an eq function, the object being assigned to it (e.g. a python
273 object) might not. despite the *input* having an eq function,
274 that doesn't help us, because it's the *ArrayProxy* that's being
275 assigned to. so.... we cheat. use the ports() function of the
276 python object, enumerate them, find out the list of Signals that way,
277 and assign them.
278 """
279 res = []
280 if isinstance(o, dict):
281 for (k, v) in o.items():
282             #print ("d-eq", v, i[k])
283 res.append(v.eq(i[k]))
284 return res
285
286 if not isinstance(o, Sequence):
287 o, i = [o], [i]
288 for (ao, ai) in zip(o, i):
289 #print ("eq", ao, ai)
290 if isinstance(ao, Record):
291 for idx, (field_name, field_shape, _) in enumerate(ao.layout):
292 if isinstance(field_shape, Layout):
293 val = ai.fields
294 else:
295 val = ai
296 if hasattr(val, field_name): # check for attribute
297 val = getattr(val, field_name)
298 else:
299 val = val[field_name] # dictionary-style specification
300 rres = eq(ao.fields[field_name], val)
301 res += rres
302 elif isinstance(ao, ArrayProxy) and not isinstance(ai, Value):
303 for p in ai.ports():
304 op = getattr(ao, p.name)
305 #print (op, p, p.name)
306 rres = op.eq(p)
307 if not isinstance(rres, Sequence):
308 rres = [rres]
309 res += rres
310 else:
311 rres = ao.eq(ai)
312 if not isinstance(rres, Sequence):
313 rres = [rres]
314 res += rres
315 return res
316
317
318 class StageCls(metaclass=ABCMeta):
319 """ Class-based "Stage" API. requires instantiation (after derivation)
320
321 see "Stage API" above.. Note: python does *not* require derivation
322 from this class. All that is required is that the pipelines *have*
323 the functions listed in this class. Derivation from this class
324 is therefore merely a "courtesy" to maintainers.
325 """
326 @abstractmethod
327 def ispec(self): pass # REQUIRED
328 @abstractmethod
329 def ospec(self): pass # REQUIRED
330 #@abstractmethod
331 #def setup(self, m, i): pass # OPTIONAL
332 @abstractmethod
333 def process(self, i): pass # REQUIRED
334
335
336 class Stage(metaclass=ABCMeta):
337 """ Static "Stage" API. does not require instantiation (after derivation)
338
339 see "Stage API" above. Note: python does *not* require derivation
340 from this class. All that is required is that the pipelines *have*
341 the functions listed in this class. Derivation from this class
342 is therefore merely a "courtesy" to maintainers.
343 """
344 @staticmethod
345 @abstractmethod
346 def ispec(): pass
347
348 @staticmethod
349 @abstractmethod
350 def ospec(): pass
351
352 #@staticmethod
353 #@abstractmethod
354 #def setup(m, i): pass
355
356 @staticmethod
357 @abstractmethod
358 def process(i): pass
359
360
361 class RecordBasedStage(Stage):
362 """ convenience class which provides a Records-based layout.
363 honestly it's a lot easier just to create a direct Records-based
364 class (see ExampleAddRecordStage)
365 """
366 def __init__(self, in_shape, out_shape, processfn, setupfn=None):
367 self.in_shape = in_shape
368 self.out_shape = out_shape
369 self.__process = processfn
370 self.__setup = setupfn
371 def ispec(self): return Record(self.in_shape)
372 def ospec(self): return Record(self.out_shape)
373     def process(self, i): return self.__process(i)
374     def setup(self, m, i): return self.__setup(m, i)
375
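# A usage sketch (hypothetical, for illustration only): a stage whose input
# and output are Records, with the processing supplied as a plain function
# returning a dict that matches the output layout:
#
#   def add_fn(i):
#       return {'o': i.a + i.b}
#   stage = RecordBasedStage([('a', 16), ('b', 16)],   # input shape
#                            [('o', 16)],              # output shape
#                            add_fn)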
376
377 class StageChain(StageCls):
378 """ pass in a list of stages, and they will automatically be
379 chained together via their input and output specs into a
380 combinatorial chain.
381
382 the end result basically conforms to the exact same Stage API.
383
384 * input to this class will be the input of the first stage
385 * output of first stage goes into input of second
386 * output of second goes into input into third (etc. etc.)
387 * the output of this class will be the output of the last stage
388 """
389 def __init__(self, chain, specallocate=False):
390 self.chain = chain
391 self.specallocate = specallocate
392
393 def ispec(self):
394 return self.chain[0].ispec()
395
396 def ospec(self):
397 return self.chain[-1].ospec()
398
399 def setup(self, m, i):
400 for (idx, c) in enumerate(self.chain):
401 if hasattr(c, "setup"):
402 c.setup(m, i) # stage may have some module stuff
403 if self.specallocate:
404 o = self.chain[idx].ospec() # last assignment survives
405 m.d.comb += eq(o, c.process(i)) # process input into "o"
406 else:
407                 o = c.process(i) # store processing result in "o"
408 if idx != len(self.chain)-1:
409 if self.specallocate:
410 ni = self.chain[idx+1].ispec() # new input on next loop
411 m.d.comb += eq(ni, o) # assign to next input
412 i = ni
413 else:
414 i = o
415 self.o = o # last loop is the output
416
417 def process(self, i):
418 return self.o # conform to Stage API: return last-loop output
419
420
421 class ControlBase:
422 """ Common functions for Pipeline API
423 """
424 def __init__(self, in_multi=None, stage_ctl=False):
425 """ Base class containing ready/valid/data to previous and next stages
426
427 * p: contains ready/valid to the previous stage
428 * n: contains ready/valid to the next stage
429
430             Except when calling ControlBase.connect(), the user must also:
431 * add i_data member to PrevControl (p) and
432 * add o_data member to NextControl (n)
433 """
434 # set up input and output IO ACK (prev/next ready/valid)
435 self.p = PrevControl(in_multi, stage_ctl)
436 self.n = NextControl(stage_ctl)
437
438 def connect_to_next(self, nxt):
439 """ helper function to connect to the next stage data/valid/ready.
440 """
441 return self.n.connect_to_next(nxt.p)
442
443 def _connect_in(self, prev):
444 """ internal helper function to connect stage to an input source.
445 do not use to connect stage-to-stage!
446 """
447 return self.p._connect_in(prev.p)
448
449 def _connect_out(self, nxt):
450 """ internal helper function to connect stage to an output source.
451 do not use to connect stage-to-stage!
452 """
453 return self.n._connect_out(nxt.n)
454
455 def connect(self, pipechain):
456 """ connects a chain (list) of Pipeline instances together and
457 links them to this ControlBase instance:
458
459 in <----> self <---> out
460 | ^
461 v |
462 [pipe1, pipe2, pipe3, pipe4]
463 | ^ | ^ | ^
464 v | v | v |
465 out---in out--in out---in
466
467 Also takes care of allocating i_data/o_data, by looking up
468             the data spec for each end of the pipechain, i.e. it is NOT
469 necessary to allocate self.p.i_data or self.n.o_data manually:
470 this is handled AUTOMATICALLY, here.
471
472 Basically this function is the direct equivalent of StageChain,
473 except that unlike StageChain, the Pipeline logic is followed.
474
475 Just as StageChain presents an object that conforms to the
476 Stage API from a list of objects that also conform to the
477 Stage API, an object that calls this Pipeline connect function
478             has the exact same pipeline API as the list of pipeline objects
479 it is called with.
480
481 Thus it becomes possible to build up larger chains recursively.
482 More complex chains (multi-input, multi-output) will have to be
483 done manually.
484 """
485 eqs = [] # collated list of assignment statements
486
487 # connect inter-chain
488 for i in range(len(pipechain)-1):
489 pipe1 = pipechain[i]
490 pipe2 = pipechain[i+1]
491 eqs += pipe1.connect_to_next(pipe2)
492
493 # connect front of chain to ourselves
494 front = pipechain[0]
495 self.p.i_data = front.stage.ispec()
496 eqs += front._connect_in(self)
497
498 # connect end of chain to ourselves
499 end = pipechain[-1]
500 self.n.o_data = end.stage.ospec()
501 eqs += end._connect_out(self)
502
503 return eqs
504
505 def set_input(self, i):
506 """ helper function to set the input data
507 """
508 return eq(self.p.i_data, i)
509
510 def ports(self):
511 res = [self.p.i_valid, self.n.i_ready,
512 self.n.o_valid, self.p.o_ready,
513 ]
514 if hasattr(self.p.i_data, "ports"):
515 res += self.p.i_data.ports()
516 else:
517 res += self.p.i_data
518 if hasattr(self.n.o_data, "ports"):
519 res += self.n.o_data.ports()
520 else:
521 res += self.n.o_data
522 return res
523
524 def _elaborate(self, platform):
525 """ handles case where stage has dynamic ready/valid functions
526 """
527 m = Module()
528 if not self.p.stage_ctl:
529 return m
530
531 # intercept the previous (outgoing) "ready", combine with stage ready
532 m.d.comb += self.p.s_o_ready.eq(self.p._o_ready & self.stage.p_o_ready)
533
534 # intercept the next (incoming) "ready" and combine it with data valid
535 m.d.comb += self.n.d_valid.eq(self.n.i_ready & self.stage.d_valid)
536
537 return m
538
539
540 class BufferedPipeline(ControlBase):
541 """ buffered pipeline stage. data and strobe signals travel in sync.
542 if ever the input is ready and the output is not, processed data
543 is shunted in a temporary register.
544
545 Argument: stage. see Stage API above
546
547 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
548 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
549 stage-1 p.i_data >>in stage n.o_data out>> stage+1
550 | |
551 process --->----^
552 | |
553 +-- r_data ->-+
554
555 input data p.i_data is read (only), is processed and goes into an
556 intermediate result store [process()]. this is updated combinatorially.
557
558 in a non-stall condition, the intermediate result will go into the
559 output (update_output). however if ever there is a stall, it goes
560 into r_data instead [update_buffer()].
561
562 when the non-stall condition is released, r_data is the first
563 to be transferred to the output [flush_buffer()], and the stall
564 condition cleared.
565
566 on the next cycle (as long as stall is not raised again) the
567 input may begin to be processed and transferred directly to output.
568
569 """
570 def __init__(self, stage, stage_ctl=False):
571 ControlBase.__init__(self, stage_ctl=stage_ctl)
572 self.stage = stage
573
574 # set up the input and output data
575 self.p.i_data = stage.ispec() # input type
576 self.n.o_data = stage.ospec()
577
578 def elaborate(self, platform):
579
580 self.m = ControlBase._elaborate(self, platform)
581
582 result = self.stage.ospec()
583 r_data = self.stage.ospec()
584 if hasattr(self.stage, "setup"):
585 self.stage.setup(self.m, self.p.i_data)
586
587 # establish some combinatorial temporaries
588 o_n_validn = Signal(reset_less=True)
589 i_p_valid_o_p_ready = Signal(reset_less=True)
590 p_i_valid = Signal(reset_less=True)
591 self.m.d.comb += [p_i_valid.eq(self.p.i_valid_test),
592 o_n_validn.eq(~self.n.o_valid),
593 i_p_valid_o_p_ready.eq(p_i_valid & self.p.o_ready),
594 ]
595
596 # store result of processing in combinatorial temporary
597 self.m.d.comb += eq(result, self.stage.process(self.p.i_data))
598
599 # if not in stall condition, update the temporary register
600 with self.m.If(self.p.o_ready): # not stalled
601 self.m.d.sync += eq(r_data, result) # update buffer
602
603 with self.m.If(self.n.i_ready_test): # next stage is ready
604 with self.m.If(self.p._o_ready): # not stalled
605 # nothing in buffer: send (processed) input direct to output
606 self.m.d.sync += [self.n.o_valid.eq(p_i_valid),
607 eq(self.n.o_data, result), # update output
608 ]
609 with self.m.Else(): # p.o_ready is false, and something in buffer
610 # Flush the [already processed] buffer to the output port.
611 self.m.d.sync += [self.n.o_valid.eq(1), # reg empty
612 eq(self.n.o_data, r_data), # flush buffer
613 self.p._o_ready.eq(1), # clear stall
614 ]
615 # ignore input, since p.o_ready is also false.
616
617             # (n.i_ready) is false here: next stage is *not* ready
618 with self.m.Elif(o_n_validn): # next stage being told "ready"
619 self.m.d.sync += [self.n.o_valid.eq(p_i_valid),
620 self.p._o_ready.eq(1), # Keep the buffer empty
621 eq(self.n.o_data, result), # set output data
622 ]
623
624 # (n.i_ready) false and (n.o_valid) true:
625 with self.m.Elif(i_p_valid_o_p_ready):
626                 # input valid and not stalled: data accepted into r_data, so raise a stall (o_ready low)
627 self.m.d.sync += self.p._o_ready.eq(~(p_i_valid & self.n.o_valid))
628
629 return self.m
630
631
632 class UnbufferedPipeline(ControlBase):
633 """ A simple pipeline stage with single-clock synchronisation
634 and two-way valid/ready synchronised signalling.
635
636 Note that a stall in one stage will result in the entire pipeline
637 chain stalling.
638
639 Also that unlike BufferedPipeline, the valid/ready signalling does NOT
640 travel synchronously with the data: the valid/ready signalling
641 combines in a *combinatorial* fashion. Therefore, a long pipeline
642 chain will lengthen propagation delays.
643
644 Argument: stage. see Stage API, above
645
646 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
647 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
648 stage-1 p.i_data >>in stage n.o_data out>> stage+1
649 | |
650 r_data result
651 | |
652 +--process ->-+
653
654 Attributes:
655 -----------
656 p.i_data : StageInput, shaped according to ispec
657 The pipeline input
658         n.o_data : StageOutput, shaped according to ospec
659 The pipeline output
660 r_data : input_shape according to ispec
661 A temporary (buffered) copy of a prior (valid) input.
662 This is HELD if the output is not ready. It is updated
663 SYNCHRONOUSLY.
664 result: output_shape according to ospec
665 The output of the combinatorial logic. it is updated
666 COMBINATORIALLY (no clock dependence).
667 """
668
669 def __init__(self, stage, stage_ctl=False):
670 ControlBase.__init__(self, stage_ctl=stage_ctl)
671 self.stage = stage
672
673 # set up the input and output data
674 self.p.i_data = stage.ispec() # input type
675 self.n.o_data = stage.ospec() # output type
676
677 def elaborate(self, platform):
678 self.m = ControlBase._elaborate(self, platform)
679
680 data_valid = Signal() # is data valid or not
681 r_data = self.stage.ispec() # input type
682 if hasattr(self.stage, "setup"):
683 self.stage.setup(self.m, r_data)
684
685 # some temporaries
686 p_i_valid = Signal(reset_less=True)
687 pv = Signal(reset_less=True)
688 self.m.d.comb += p_i_valid.eq(self.p.i_valid_test)
689 self.m.d.comb += pv.eq(self.p.i_valid & self.p.o_ready)
690
691 self.m.d.comb += self.n.o_valid.eq(data_valid)
692 self.m.d.comb += self.p._o_ready.eq(~data_valid | self.n.i_ready_test)
693 self.m.d.sync += data_valid.eq(p_i_valid | \
694 (~self.n.i_ready_test & data_valid))
695 with self.m.If(pv):
696 self.m.d.sync += eq(r_data, self.p.i_data)
697 self.m.d.comb += eq(self.n.o_data, self.stage.process(r_data))
698 return self.m
699
700
701 class PassThroughStage(StageCls):
702 """ a pass-through stage which has its input data spec equal to its output,
703 and "passes through" its data from input to output.
704 """
705 def __init__(self, iospecfn):
706 self.iospecfn = iospecfn
707 def ispec(self): return self.iospecfn()
708 def ospec(self): return self.iospecfn()
709 def process(self, i): return i
710
711
712 class RegisterPipeline(UnbufferedPipeline):
713 """ A pipeline stage that delays by one clock cycle, creating a
714 sync'd latch out of o_data and o_valid as an indirect byproduct
715 of using PassThroughStage
716 """
717 def __init__(self, iospecfn):
718 UnbufferedPipeline.__init__(self, PassThroughStage(iospecfn))
719