1 """ Pipeline and BufferedPipeline implementation, conforming to the same API.
2 For multi-input and multi-output variants, see multipipe.
3
4 eq:
5 --
6
7 a strategically very important function that is identical in function
8 to nmigen's Signal.eq function, except it may take objects, or a list
9 of objects, or a tuple of objects, and where objects may also be
10 Records.
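
    A minimal usage sketch (hypothetical Signals a and b, inside an
    elaborate() method):

        m.d.comb += eq(a, b)                # same as a.eq(b) for plain Signals
        m.d.comb += eq([a1, a2], [b1, b2])  # lists/tuples are zipped pairwise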

    Stage API:
    ---------

    stage requires compliance with a strict API that may be
    implemented in several ways, including as a static class.
    the methods of a stage instance must be as follows:

    * ispec() - Input data format specification
                returns an object or a list or tuple of objects, or
                a Record, each object having an "eq" function which
                takes responsibility for copying by assignment all
                sub-objects
    * ospec() - Output data format specification
                requirements as for ispec
    * process(i) - Processes an ispec-formatted object
                returns a combinatorial block of a result that
                may be assigned to the output, by way of the "eq"
                function
    * setup(m, i) - Optional function for setting up submodules
                may be used for more complex stages, to link
                the input (i) to submodules. must take responsibility
                for adding those submodules to the module (m).
                the submodules must be combinatorial blocks and
                must have their inputs and outputs linked combinatorially.

    Both StageCls (for use with non-static classes) and Stage (for use
    by static classes) are abstract classes from which, for convenience
    and as a courtesy to other developers, anything conforming to the
    Stage API may *choose* to derive.
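
    A minimal sketch of a static-class stage (hypothetical: adds 1 to a
    16-bit value; any object providing these methods will do):

        class ExampleStage:
            def ispec(): return Signal(16, name="example_input")
            def ospec(): return Signal(16, name="example_output")
            def process(i): return i + 1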

    StageChain:
    ----------

    A useful combinatorial wrapper around stages that chains them together
    and then presents a Stage-API-conformant interface. By presenting
    the same API as the stages it wraps, it can clearly be used recursively.
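
    A sketch of chaining (hypothetical stages s1, s2, s3, each conforming
    to the Stage API):

        chain = StageChain([s1, s2, s3])
        # chain.ispec() is s1's ispec, chain.ospec() is s3's ospec.
        # chain.setup(m, i) wires the three process() calls together
        # combinatorially; chain.process(i) then returns the final output.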

    RecordBasedStage:
    ----------------

    A convenience class that takes an input shape, output shape, a
    "processing" function and an optional "setup" function. Honestly
    though, there's not much more effort to just... create a class
    that returns a couple of Records (see ExampleAddRecordStage in
    examples).

    PassThroughStage:
    ----------------

    A convenience class that takes a single function as a parameter,
    that is chain-called to create the exact same input and output spec.
    It has a process() function that simply returns its input.

    Instances of this class are completely redundant if handed to
    StageChain, however when passed to UnbufferedPipeline they
    can be used to introduce a single clock delay.
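
    A sketch (hypothetical 32-bit spec function):

        def iospecfn(): return Signal(32, name="data")
        pt = PassThroughStage(iospecfn)   # ispec == ospec == 32-bit Signal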

    ControlBase:
    -----------

    The base class for pipelines. Contains previous and next ready/valid/data.
    Also has an extremely useful "connect" function that can be used to
    connect a chain of pipelines and present the exact same prev/next
    ready/valid/data API.
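
    A rough sketch of connect() (hypothetical pipe1..pipe3, each an
    UnbufferedPipeline or BufferedPipeline added as a submodule elsewhere):

        cb = ControlBase()
        m.d.comb += cb.connect([pipe1, pipe2, pipe3])
        # cb.p / cb.n now present the same ready/valid/data API as the
        # three-stage chain; i_data / o_data are allocated automatically.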

    UnbufferedPipeline:
    ------------------

    A simple stalling clock-synchronised pipeline that has no buffering
    (unlike BufferedPipeline). Data flows on *every* clock cycle when
    the conditions are right (this is nominally when the input is valid
    and the output is ready).

    A stall anywhere along the line will result in a stall back-propagating
    down the entire chain. The BufferedPipeline by contrast will buffer
    incoming data, allowing previous stages one clock cycle's grace before
    also having to stall.

    An advantage of the UnbufferedPipeline over the Buffered one is
    that the amount of logic needed (number of gates) is greatly
    reduced (no second set of buffers, basically).

    The disadvantage of the UnbufferedPipeline is that the valid/ready
    logic, if chained together, is *combinatorial*, resulting in
    progressively larger gate delay.

    RegisterPipeline:
    ----------------

    A convenience class: because UnbufferedPipeline introduces a single
    clock delay, using it with a PassThroughStage results in a pipeline
    stage that (duh) simply delays its (unmodified) input by one clock cycle.
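
    A sketch (hypothetical 16-bit spec function):

        rp = RegisterPipeline(lambda: Signal(16, name="data"))
        # rp.n.o_data presents rp.p.i_data one clock later, whenever the
        # ready/valid conditions permit data to flow.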

    BufferedPipeline:
    ----------------

    nmigen implementation of a buffered pipeline stage, based on zipcpu:
    https://zipcpu.com/blog/2017/08/14/strategies-for-pipelining.html

    this module requires quite a bit of thought to understand how it works
    (and why it is needed in the first place). reading the above is
    *strongly* recommended.

    unlike john dawson's IEEE754 FPU STB/ACK signalling, which requires
    the STB / ACK signals to raise and lower (on separate clocks) before
    data may proceed (thus only allowing one piece of data to proceed
    on *ALTERNATE* cycles), the signalling here is a true pipeline
    where data will flow on *every* clock when the conditions are right.

    input acceptance conditions are when:
        * incoming previous-stage strobe (p.i_valid) is HIGH
        * outgoing previous-stage ready (p.o_ready) is LOW

    output transmission conditions are when:
        * outgoing next-stage strobe (n.o_valid) is HIGH
        * outgoing next-stage ready (n.i_ready) is LOW

    the tricky bit is when the input has valid data and the output is not
    ready to accept it. if it wasn't for the clock synchronisation, it
    would be possible to tell the input "hey don't send that data, we're
    not ready". unfortunately, it's not possible to "change the past":
    the previous stage *has no choice* but to pass on its data.

    therefore, the incoming data *must* be accepted - and stored: that
    is the responsibility / contract that this stage *must* accept.
    on the same clock, it's possible to tell the input that it must
    not send any more data. this is the "stall" condition.

    we now effectively have *two* possible pieces of data to "choose" from:
    the buffered data, and the incoming data. the decision as to which
    to process and output is based on whether we are in "stall" or not.
    i.e. when the next stage is no longer ready, the output comes from
    the buffer if a stall had previously occurred, otherwise it comes
    direct from processing the input.

    this allows us to respect a synchronous "travelling STB" with what
    dan calls a "buffered handshake".

    it's quite a complex state machine!
"""

from nmigen import Signal, Cat, Const, Mux, Module, Value
from nmigen.cli import verilog, rtlil
from nmigen.hdl.ast import ArrayProxy
from nmigen.hdl.rec import Record, Layout

from abc import ABCMeta, abstractmethod
from collections.abc import Sequence


class PrevControl:
    """ contains signals that come *from* the previous stage (both in and out)
        * i_valid: previous stage indicating all incoming data is valid.
                   may be a multi-bit signal, where all bits are required
                   to be asserted to indicate "valid".
        * o_ready: output back to the previous stage indicating readiness
                   to accept data
        * i_data : an input - added by the user of this class
    """

    def __init__(self, i_width=1, stage_ctl=False):
        self.stage_ctl = stage_ctl
        self.i_valid = Signal(i_width, name="p_i_valid") # prev   >>in  self
        self._o_ready = Signal(name="p_o_ready")         # prev   <<out self
        self.i_data = None # XXX MUST BE ADDED BY USER
        if stage_ctl:
            self.s_o_ready = Signal(name="p_s_o_rdy")    # prev   <<out self

    @property
    def o_ready(self):
        """ public-facing API: indicates (externally) that stage is ready
        """
        if self.stage_ctl:
            return self.s_o_ready # set dynamically by stage
        return self._o_ready # return this when not under dynamic control

    def _connect_in(self, prev):
        """ internal helper function to connect stage to an input source.
            do not use to connect stage-to-stage!
        """
        return [self.i_valid.eq(prev.i_valid),
                prev.o_ready.eq(self.o_ready),
                eq(self.i_data, prev.i_data),
               ]

    def i_valid_logic(self):
        vlen = len(self.i_valid)
        if vlen > 1:
            # multi-bit case: valid only when i_valid is all 1s
            all1s = Const(-1, (len(self.i_valid), False))
            i_valid = (self.i_valid == all1s)
        else:
            # single-bit i_valid case
            i_valid = self.i_valid

        # when stage indicates not ready, incoming data
        # must "appear" to be not ready too
        if self.stage_ctl:
            i_valid = i_valid & self.s_o_ready

        return i_valid


class NextControl:
    """ contains the signals that go *to* the next stage (both in and out)
        * o_valid: output indicating to next stage that data is valid
        * i_ready: input from next stage indicating that it can accept data
        * o_data : an output - added by the user of this class
    """
    def __init__(self, stage_ctl=False):
        self.stage_ctl = stage_ctl
        self._o_valid = Signal(name="n_o_valid") # self out>>  next
        self.i_ready = Signal(name="n_i_ready")  # self <<in   next
        self.o_data = None # XXX MUST BE ADDED BY USER
        if stage_ctl:
            self.s_o_valid = Signal(name="n_s_o_vld") # self out>>  next

    @property
    def o_valid(self):
        """ public-facing API: indicates (externally) that data is valid
        """
        if self.stage_ctl:
            return self.s_o_valid
        return self._o_valid

    def i_ready_logic(self):
        """ public-facing API: receives indication that transmit is possible
        """
        if self.stage_ctl:
            return self.i_ready & self.s_o_valid
        return self.i_ready

    def connect_to_next(self, nxt):
        """ helper function to connect to the next stage data/valid/ready.
            data/valid is passed *TO* nxt, and ready comes *IN* from nxt.
            use this when connecting stage-to-stage
        """
        return [nxt.i_valid.eq(self.o_valid),
                self.i_ready.eq(nxt.o_ready),
                eq(nxt.i_data, self.o_data),
               ]

    def _connect_out(self, nxt):
        """ internal helper function to connect stage to an output source.
            do not use to connect stage-to-stage!
        """
        return [nxt.o_valid.eq(self.o_valid),
                self.i_ready.eq(nxt.i_ready),
                eq(nxt.o_data, self.o_data),
               ]


def eq(o, i):
    """ makes signals equal: a helper routine which identifies if it is being
        passed a list (or tuple) of objects, or signals, or Records, and calls
        the objects' eq function.

        complex objects (classes) can be used: they must follow the
        convention of having an eq member function, which takes the
        responsibility of further calling eq and returning a list of
        eq assignments

        Record is a special (unusual, recursive) case, where the input may be
        specified as a dictionary (which may contain further dictionaries,
        recursively), where the field names of the dictionary must match
        the Record's field spec. Alternatively, an object with the same
        member names as the Record may be assigned: it does not have to
        *be* a Record.

        ArrayProxy is also special-cased, it's a bit messy: whilst ArrayProxy
        has an eq function, the object being assigned to it (e.g. a python
        object) might not. despite the *input* having an eq function,
        that doesn't help us, because it's the *ArrayProxy* that's being
        assigned to. so.... we cheat. use the ports() function of the
        python object, enumerate them, find out the list of Signals that way,
        and assign them.
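
        A sketch of the Record-as-dictionary case (hypothetical two-field
        Record):

            rec = Record([('src1', 16), ('src2', 16)])
            m.d.comb += eq(rec, {'src1': 0x1234, 'src2': 0x5678})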
287 """
288 res = []
289 if isinstance(o, dict):
290 for (k, v) in o.items():
291 print ("d-eq", v, i[k])
292 res.append(v.eq(i[k]))
293 return res
294
295 if not isinstance(o, Sequence):
296 o, i = [o], [i]
297 for (ao, ai) in zip(o, i):
298 #print ("eq", ao, ai)
299 if isinstance(ao, Record):
300 for idx, (field_name, field_shape, _) in enumerate(ao.layout):
301 if isinstance(field_shape, Layout):
302 val = ai.fields
303 else:
304 val = ai
305 if hasattr(val, field_name): # check for attribute
306 val = getattr(val, field_name)
307 else:
308 val = val[field_name] # dictionary-style specification
309 rres = eq(ao.fields[field_name], val)
310 res += rres
311 elif isinstance(ao, ArrayProxy) and not isinstance(ai, Value):
312 for p in ai.ports():
313 op = getattr(ao, p.name)
314 #print (op, p, p.name)
315 rres = op.eq(p)
316 if not isinstance(rres, Sequence):
317 rres = [rres]
318 res += rres
319 else:
320 rres = ao.eq(ai)
321 if not isinstance(rres, Sequence):
322 rres = [rres]
323 res += rres
324 return res
325

class StageCls(metaclass=ABCMeta):
    """ Class-based "Stage" API. requires instantiation (after derivation)

        see "Stage API" above. Note: python does *not* require derivation
        from this class. All that is required is that the pipelines *have*
        the functions listed in this class. Derivation from this class
        is therefore merely a "courtesy" to maintainers.
    """
    @abstractmethod
    def ispec(self): pass        # REQUIRED
    @abstractmethod
    def ospec(self): pass        # REQUIRED
    #@abstractmethod
    #def setup(self, m, i): pass # OPTIONAL
    @abstractmethod
    def process(self, i): pass   # REQUIRED


class Stage(metaclass=ABCMeta):
    """ Static "Stage" API. does not require instantiation (after derivation)

        see "Stage API" above. Note: python does *not* require derivation
        from this class. All that is required is that the pipelines *have*
        the functions listed in this class. Derivation from this class
        is therefore merely a "courtesy" to maintainers.
    """
    @staticmethod
    @abstractmethod
    def ispec(): pass

    @staticmethod
    @abstractmethod
    def ospec(): pass

    #@staticmethod
    #@abstractmethod
    #def setup(m, i): pass

    @staticmethod
    @abstractmethod
    def process(i): pass


class RecordBasedStage(Stage):
    """ convenience class which provides a Records-based layout.
        honestly it's a lot easier just to create a direct Records-based
        class (see ExampleAddRecordStage)
    """
    def __init__(self, in_shape, out_shape, processfn, setupfn=None):
        self.in_shape = in_shape
        self.out_shape = out_shape
        self.__process = processfn
        self.__setup = setupfn
    def ispec(self): return Record(self.in_shape)
    def ospec(self): return Record(self.out_shape)
    def process(self, i): return self.__process(i)
    def setup(self, m, i): return self.__setup(m, i)


class StageChain(StageCls):
    """ pass in a list of stages, and they will automatically be
        chained together via their input and output specs into a
        combinatorial chain.

        the end result basically conforms to the exact same Stage API.

        * input to this class will be the input of the first stage
        * output of first stage goes into input of second
        * output of second goes into input of third (etc. etc.)
        * the output of this class will be the output of the last stage
    """
    def __init__(self, chain, specallocate=False):
        self.chain = chain
        self.specallocate = specallocate

    def ispec(self):
        return self.chain[0].ispec()

    def ospec(self):
        return self.chain[-1].ospec()

    def setup(self, m, i):
        for (idx, c) in enumerate(self.chain):
            if hasattr(c, "setup"):
                c.setup(m, i)           # stage may have some module stuff
            if self.specallocate:
                o = self.chain[idx].ospec()     # last assignment survives
                m.d.comb += eq(o, c.process(i)) # process input into "o"
            else:
                o = c.process(i)        # store the processed output in "o"
            if idx != len(self.chain)-1:
                if self.specallocate:
                    ni = self.chain[idx+1].ispec() # new input on next loop
                    m.d.comb += eq(ni, o)          # assign to next input
                    i = ni
                else:
                    i = o
        self.o = o                      # last loop is the output

    def process(self, i):
        return self.o                   # conform to Stage API: return last-loop output


class ControlBase:
    """ Common functions for Pipeline API
    """
    def __init__(self, in_multi=None, stage_ctl=False):
        """ Base class containing ready/valid/data to previous and next stages

            * p: contains ready/valid to the previous stage
            * n: contains ready/valid to the next stage

            Except when calling ControlBase.connect(), user must also:
            * add i_data member to PrevControl (p) and
            * add o_data member to NextControl (n)
        """
        # set up input and output IO ACK (prev/next ready/valid)
        self.p = PrevControl(in_multi, stage_ctl)
        self.n = NextControl(stage_ctl)

    def connect_to_next(self, nxt):
        """ helper function to connect to the next stage data/valid/ready.
        """
        return self.n.connect_to_next(nxt.p)

    def _connect_in(self, prev):
        """ internal helper function to connect stage to an input source.
            do not use to connect stage-to-stage!
        """
        return self.p._connect_in(prev.p)

    def _connect_out(self, nxt):
        """ internal helper function to connect stage to an output source.
            do not use to connect stage-to-stage!
        """
        return self.n._connect_out(nxt.n)

    def connect(self, pipechain):
        """ connects a chain (list) of Pipeline instances together and
            links them to this ControlBase instance:

                in <----> self <---> out
                          |    ^
                          v    |
                 [pipe1, pipe2, pipe3, pipe4]
                  |    ^  |   ^  |     ^
                  v    |  v   |  v     |
                out---in out--in out---in

            Also takes care of allocating i_data/o_data, by looking up
            the data spec for each end of the pipechain. i.e. it is NOT
            necessary to allocate self.p.i_data or self.n.o_data manually:
            this is handled AUTOMATICALLY, here.

            Basically this function is the direct equivalent of StageChain,
            except that unlike StageChain, the Pipeline logic is followed.

            Just as StageChain presents an object that conforms to the
            Stage API from a list of objects that also conform to the
            Stage API, an object that calls this Pipeline connect function
            has the exact same pipeline API as the list of pipeline objects
            it is called with.

            Thus it becomes possible to build up larger chains recursively.
            More complex chains (multi-input, multi-output) will have to be
            done manually.
        """
        eqs = [] # collated list of assignment statements

        # connect inter-chain
        for i in range(len(pipechain)-1):
            pipe1 = pipechain[i]
            pipe2 = pipechain[i+1]
            eqs += pipe1.connect_to_next(pipe2)

        # connect front of chain to ourselves
        front = pipechain[0]
        self.p.i_data = front.stage.ispec()
        eqs += front._connect_in(self)

        # connect end of chain to ourselves
        end = pipechain[-1]
        self.n.o_data = end.stage.ospec()
        eqs += end._connect_out(self)

        return eqs

    def set_input(self, i):
        """ helper function to set the input data
        """
        return eq(self.p.i_data, i)

    def ports(self):
        res = [self.p.i_valid, self.n.i_ready,
               self.n.o_valid, self.p.o_ready,
              ]
        if hasattr(self.p.i_data, "ports"):
            res += self.p.i_data.ports()
        else:
            res += self.p.i_data
        if hasattr(self.n.o_data, "ports"):
            res += self.n.o_data.ports()
        else:
            res += self.n.o_data
        return res

    def _elaborate(self, platform):
        """ handles case where stage has dynamic ready/valid functions
        """
        m = Module()
        if not self.n.stage_ctl:
            return m

        # when the pipeline (buffered or otherwise) says "ready",
        # test the *stage* "ready".

        with m.If(self.p._o_ready):
            m.d.comb += self.p.s_o_ready.eq(self.stage.p_o_ready)
        with m.Else():
            m.d.comb += self.p.s_o_ready.eq(0)

        # when the pipeline (buffered or otherwise) says "valid",
        # test the *stage* "valid".
        with m.If(self.n._o_valid):
            m.d.comb += self.n.s_o_valid.eq(self.stage.n_o_valid)
        with m.Else():
            m.d.comb += self.n.s_o_valid.eq(0)
        return m


class BufferedPipeline(ControlBase):
    """ buffered pipeline stage. data and strobe signals travel in sync.
        if ever the input is ready and the output is not, processed data
        is shunted into a temporary register.

        Argument: stage. see Stage API above

        stage-1   p.i_valid >>in   stage   n.o_valid out>>   stage+1
        stage-1   p.o_ready <<out  stage   n.i_ready <<in    stage+1
        stage-1   p.i_data  >>in   stage   n.o_data  out>>   stage+1
                              |                |
                            process --->----^
                              |                |
                              +-- r_data ->-+

        input data p.i_data is read (only), is processed and goes into an
        intermediate result store [process()]. this is updated combinatorially.

        in a non-stall condition, the intermediate result will go into the
        output (update_output). however if ever there is a stall, it goes
        into r_data instead [update_buffer()].

        when the non-stall condition is released, r_data is the first
        to be transferred to the output [flush_buffer()], and the stall
        condition cleared.

        on the next cycle (as long as stall is not raised again) the
        input may begin to be processed and transferred directly to output.

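        A usage sketch (hypothetical ExampleStage conforming to the Stage
        API, instantiated inside a parent module's elaborate):

            pipe = BufferedPipeline(ExampleStage)
            m.submodules.pipe = pipe
            # drive pipe.p.i_valid / pipe.p.i_data from the previous stage;
            # observe pipe.n.o_valid / pipe.n.o_data from the next.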
586 """
587 def __init__(self, stage, stage_ctl=False):
588 ControlBase.__init__(self, stage_ctl=stage_ctl)
589 self.stage = stage
590
591 # set up the input and output data
592 self.p.i_data = stage.ispec() # input type
593 self.n.o_data = stage.ospec()
594
595 def elaborate(self, platform):
596
597 self.m = ControlBase._elaborate(self, platform)
598
599 result = self.stage.ospec()
600 r_data = self.stage.ospec()
601 if hasattr(self.stage, "setup"):
602 self.stage.setup(self.m, self.p.i_data)
603
604 # establish some combinatorial temporaries
605 o_n_validn = Signal(reset_less=True)
606 i_p_valid_o_p_ready = Signal(reset_less=True)
607 p_i_valid = Signal(reset_less=True)
608 self.m.d.comb += [p_i_valid.eq(self.p.i_valid_logic()),
609 o_n_validn.eq(~self.n.o_valid),
610 i_p_valid_o_p_ready.eq(p_i_valid & self.p.o_ready),
611 ]
612
613 # store result of processing in combinatorial temporary
614 self.m.d.comb += eq(result, self.stage.process(self.p.i_data))
615
616 # if not in stall condition, update the temporary register
617 with self.m.If(self.p.o_ready): # not stalled
618 self.m.d.sync += eq(r_data, result) # update buffer
619
620 with self.m.If(self.n.i_ready): # next stage is ready
621 with self.m.If(self.p._o_ready): # not stalled
622 # nothing in buffer: send (processed) input direct to output
623 self.m.d.sync += [self.n._o_valid.eq(p_i_valid),
624 eq(self.n.o_data, result), # update output
625 ]
626 with self.m.Else(): # p.o_ready is false, and something in buffer
627 # Flush the [already processed] buffer to the output port.
628 self.m.d.sync += [self.n._o_valid.eq(1), # reg empty
629 eq(self.n.o_data, r_data), # flush buffer
630 self.p._o_ready.eq(1), # clear stall
631 ]
632 # ignore input, since p.o_ready is also false.
633
        # (n.i_ready) is false here: next stage is NOT ready
        with self.m.Elif(o_n_validn): # next stage being told "ready"
            self.m.d.sync += [self.n._o_valid.eq(p_i_valid),
                              self.p._o_ready.eq(1), # Keep the buffer empty
                              eq(self.n.o_data, result), # set output data
                             ]

        # (n.i_ready) false and (n.o_valid) true:
        with self.m.Elif(i_p_valid_o_p_ready):
            # If next stage *is* ready, and not stalled yet, accept input
            self.m.d.sync += self.p._o_ready.eq(~(p_i_valid & self.n.o_valid))

        return self.m


class UnbufferedPipeline(ControlBase):
    """ A simple pipeline stage with single-clock synchronisation
        and two-way valid/ready synchronised signalling.

        Note that a stall in one stage will result in the entire pipeline
        chain stalling.

        Also that unlike BufferedPipeline, the valid/ready signalling does NOT
        travel synchronously with the data: the valid/ready signalling
        combines in a *combinatorial* fashion. Therefore, a long pipeline
        chain will lengthen propagation delays.

        Argument: stage. see Stage API, above

        stage-1   p.i_valid >>in   stage   n.o_valid out>>   stage+1
        stage-1   p.o_ready <<out  stage   n.i_ready <<in    stage+1
        stage-1   p.i_data  >>in   stage   n.o_data  out>>   stage+1
                              |                |
                            r_data          result
                              |                |
                              +--process ->-+

        Attributes:
        -----------
        p.i_data : StageInput, shaped according to ispec
            The pipeline input
        n.o_data : StageOutput, shaped according to ospec
            The pipeline output
        r_data : input_shape according to ispec
            A temporary (buffered) copy of a prior (valid) input.
            This is HELD if the output is not ready. It is updated
            SYNCHRONOUSLY.
        result: output_shape according to ospec
            The output of the combinatorial logic. it is updated
            COMBINATORIALLY (no clock dependence).
    """

    def __init__(self, stage, stage_ctl=False):
        ControlBase.__init__(self, stage_ctl=stage_ctl)
        self.stage = stage

        # set up the input and output data
        self.p.i_data = stage.ispec() # input type
        self.n.o_data = stage.ospec() # output type

    def elaborate(self, platform):
        self.m = ControlBase._elaborate(self, platform)

        data_valid = Signal() # is data valid or not
        r_data = self.stage.ispec() # input type
        if hasattr(self.stage, "setup"):
            self.stage.setup(self.m, r_data)

        # some temporaries
        p_i_valid = Signal(reset_less=True)
        pv = Signal(reset_less=True)
        self.m.d.comb += p_i_valid.eq(self.p.i_valid_logic())
        self.m.d.comb += pv.eq(self.p.i_valid & self.p.o_ready)

        self.m.d.comb += self.n._o_valid.eq(data_valid)
        self.m.d.comb += self.p._o_ready.eq(~data_valid | self.n.i_ready)
        self.m.d.sync += data_valid.eq(p_i_valid | \
                                       (~self.n.i_ready & data_valid))
        with self.m.If(pv):
            self.m.d.sync += eq(r_data, self.p.i_data)
        self.m.d.comb += eq(self.n.o_data, self.stage.process(r_data))
        return self.m


class PassThroughStage(StageCls):
    """ a pass-through stage which has its input data spec equal to its output,
        and "passes through" its data from input to output.
    """
    def __init__(self, iospecfn):
        self.iospecfn = iospecfn
    def ispec(self): return self.iospecfn()
    def ospec(self): return self.iospecfn()
    def process(self, i): return i


class RegisterPipeline(UnbufferedPipeline):
    """ A pipeline stage that delays by one clock cycle, creating a
        sync'd latch out of o_data and o_valid as an indirect byproduct
        of using PassThroughStage
    """
    def __init__(self, iospecfn):
        UnbufferedPipeline.__init__(self, PassThroughStage(iospecfn))