1 """ Pipeline and BufferedHandshake implementation, conforming to the same API.
2 For multi-input and multi-output variants, see multipipe.
3
4 Associated development bugs:
5 * http://bugs.libre-riscv.org/show_bug.cgi?id=64
6 * http://bugs.libre-riscv.org/show_bug.cgi?id=57
7
8 eq:
9 --
10
11     a strategically very important function that performs the same
12     role as nmigen's Signal.eq, except that it may take objects, or a
13     list of objects, or a tuple of objects, and those objects may
14     also be Records.
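
    For example (an illustrative sketch, assuming "a" and "b" are Records
    with identical layouts, "x", "y", "p" and "q" are Signals, and "m" is
    a Module):

        m.d.comb += eq(a, b)            # copies every field of b into a
        m.d.comb += eq([x, y], [p, q])  # pairwise: x.eq(p), y.eq(q)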
15
16 Stage API:
17 ---------
18
19 stage requires compliance with a strict API that may be
20     implemented in several ways, including as a static class.
21 the methods of a stage instance must be as follows:
22
23 * ispec() - Input data format specification
24 returns an object or a list or tuple of objects, or
25 a Record, each object having an "eq" function which
26 takes responsibility for copying by assignment all
27 sub-objects
28 * ospec() - Output data format specification
29            requirements as for ispec
30     * process(i) - Processes an ispec-formatted object
31 returns a combinatorial block of a result that
32 may be assigned to the output, by way of the "eq"
33 function
34 * setup(m, i) - Optional function for setting up submodules
35 may be used for more complex stages, to link
36 the input (i) to submodules. must take responsibility
37 for adding those submodules to the module (m).
38 the submodules must be combinatorial blocks and
39 must have their inputs and output linked combinatorially.
40
41 Both StageCls (for use with non-static classes) and Stage (for use
42 by static classes) are abstract classes from which, for convenience
43 and as a courtesy to other developers, anything conforming to the
44 Stage API may *choose* to derive.
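
    As an illustrative sketch of the API (hypothetical, not part of this
    file), a minimal stage that adds two numbers could look like this:

        class ExampleAddStage(StageCls):
            def ispec(self):
                return (Signal(16, name="a"), Signal(16, name="b"))
            def ospec(self):
                return Signal(16, name="add_out")
            def process(self, i):
                a, b = i
                return a + b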
45
46 StageChain:
47 ----------
48
49 A useful combinatorial wrapper around stages that chains them together
50 and then presents a Stage-API-conformant interface. By presenting
51 the same API as the stages it wraps, it can clearly be used recursively.
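
    A usage sketch (ExampleAddStage and ExampleMulStage are hypothetical
    Stage-API-conformant classes):

        chain = StageChain([ExampleAddStage(), ExampleMulStage()])
        # chain.ispec()/ospec() are those of the first/last stage;
        # chain.setup(m, i) wires the stages together combinatorially,
        # after which chain.process(i) returns the final stage's output.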
52
53 RecordBasedStage:
54 ----------------
55
56 A convenience class that takes an input shape, output shape, a
57 "processing" function and an optional "setup" function. Honestly
58 though, there's not much more effort to just... create a class
59 that returns a couple of Records (see ExampleAddRecordStage in
60 examples).
61
62 PassThroughStage:
63 ----------------
64
65     A convenience class that takes a single spec-creating function as a
66     parameter; it is called to create the (identical) input and output specs.
67     Its process() function simply returns its input.
68
69 Instances of this class are completely redundant if handed to
70 StageChain, however when passed to UnbufferedPipeline they
71 can be used to introduce a single clock delay.
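
    A sketch of the delay idiom (iospecfn is a hypothetical spec function):

        def iospecfn():
            return Signal(16, name="data")

        delay1 = UnbufferedPipeline(PassThroughStage(iospecfn))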
72
73 ControlBase:
74 -----------
75
76 The base class for pipelines. Contains previous and next ready/valid/data.
77 Also has an extremely useful "connect" function that can be used to
78 connect a chain of pipelines and present the exact same prev/next
79 ready/valid/data API.
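
    A connect() sketch (pipe1..pipe3 are assumed to be pre-built
    ControlBase-derived instances, and m is a Module):

        top = ControlBase()   # no stage: connect() allocates i_data/o_data
        m.d.comb += top.connect([pipe1, pipe2, pipe3])
        m.submodules.pipe1 = pipe1
        m.submodules.pipe2 = pipe2
        m.submodules.pipe3 = pipe3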
80
81 UnbufferedPipeline:
82 ------------------
83
84 A simple stalling clock-synchronised pipeline that has no buffering
85 (unlike BufferedHandshake). Data flows on *every* clock cycle when
86 the conditions are right (this is nominally when the input is valid
87 and the output is ready).
88
89 A stall anywhere along the line will result in a stall back-propagating
90 down the entire chain. The BufferedHandshake by contrast will buffer
91 incoming data, allowing previous stages one clock cycle's grace before
92 also having to stall.
93
94     An advantage of the UnbufferedPipeline over the Buffered one is
95     that the amount of logic needed (number of gates) is greatly
96     reduced (there is basically no second set of buffers).
97
98 The disadvantage of the UnbufferedPipeline is that the valid/ready
99 logic, if chained together, is *combinatorial*, resulting in
100 progressively larger gate delay.
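
    An instantiation sketch (reusing the hypothetical ExampleAddStage from
    the Stage API notes above):

        pipe = UnbufferedPipeline(ExampleAddStage())
        m.submodules.pipe = pipe
        # drive pipe.p.i_valid and pipe.p.i_data, honour pipe.p.o_ready;
        # pipe.n.o_data may be taken whenever pipe.n.o_valid is HIGH and
        # pipe.n.i_ready is asserted.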
101
102 PassThroughHandshake:
103 ------------------
104
105 A Control class that introduces a single clock delay, passing its
106 data through unaltered. Unlike RegisterPipeline (which relies
107 on UnbufferedPipeline and PassThroughStage) it handles ready/valid
108 itself.
109
110 RegisterPipeline:
111 ----------------
112
113     A convenience class built on UnbufferedPipeline with a PassThroughStage:
114     since UnbufferedPipeline introduces a single clock delay, the result is a
115     pipeline stage that delays its (unmodified) input by one clock cycle.
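
    In other words (a sketch, reusing the hypothetical iospecfn above),
    RegisterPipeline(iospecfn) is equivalent to:

        UnbufferedPipeline(PassThroughStage(iospecfn))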
116
117 BufferedHandshake:
118 ----------------
119
120 nmigen implementation of buffered pipeline stage, based on zipcpu:
121 https://zipcpu.com/blog/2017/08/14/strategies-for-pipelining.html
122
123 this module requires quite a bit of thought to understand how it works
124 (and why it is needed in the first place). reading the above is
125 *strongly* recommended.
126
127 unlike john dawson's IEEE754 FPU STB/ACK signalling, which requires
128 the STB / ACK signals to raise and lower (on separate clocks) before
129     data may proceed (thus only allowing one piece of data to proceed
130 on *ALTERNATE* cycles), the signalling here is a true pipeline
131 where data will flow on *every* clock when the conditions are right.
132
133 input acceptance conditions are when:
134 * incoming previous-stage strobe (p.i_valid) is HIGH
135     * outgoing previous-stage ready (p.o_ready) is HIGH
136
137 output transmission conditions are when:
138 * outgoing next-stage strobe (n.o_valid) is HIGH
139     * outgoing next-stage ready (n.i_ready) is HIGH
140
141 the tricky bit is when the input has valid data and the output is not
142 ready to accept it. if it wasn't for the clock synchronisation, it
143 would be possible to tell the input "hey don't send that data, we're
144 not ready". unfortunately, it's not possible to "change the past":
145 the previous stage *has no choice* but to pass on its data.
146
147 therefore, the incoming data *must* be accepted - and stored: that
148 is the responsibility / contract that this stage *must* accept.
149 on the same clock, it's possible to tell the input that it must
150 not send any more data. this is the "stall" condition.
151
152 we now effectively have *two* possible pieces of data to "choose" from:
153 the buffered data, and the incoming data. the decision as to which
154 to process and output is based on whether we are in "stall" or not.
155 i.e. when the next stage is no longer ready, the output comes from
156 the buffer if a stall had previously occurred, otherwise it comes
157 direct from processing the input.
158
159 this allows us to respect a synchronous "travelling STB" with what
160 dan calls a "buffered handshake".
161
162 it's quite a complex state machine!
163
164 SimpleHandshake
165 ---------------
166
167     Synchronised pipeline, based on:
168 https://github.com/ZipCPU/dbgbus/blob/master/hexbus/rtl/hbdeword.v
169 """
170
171 from nmigen import Signal, Cat, Const, Mux, Module, Value
172 from nmigen.cli import verilog, rtlil
173 from nmigen.lib.fifo import SyncFIFO, SyncFIFOBuffered
174 from nmigen.hdl.ast import ArrayProxy
175 from nmigen.hdl.rec import Record, Layout
176
177 from abc import ABCMeta, abstractmethod
178 from collections.abc import Sequence
179 from queue import Queue
180
181
182 class RecordObject(Record):
183 def __init__(self, layout=None, name=None):
184         Record.__init__(self, layout=layout or [], name=name)
185
186 def __setattr__(self, k, v):
187 if k in dir(Record) or "fields" not in self.__dict__:
188 return object.__setattr__(self, k, v)
189 self.fields[k] = v
190 if isinstance(v, Record):
191 newlayout = {k: (k, v.layout)}
192 else:
193 newlayout = {k: (k, v.shape())}
194 self.layout.fields.update(newlayout)
195
196 def __iter__(self):
197 for x in self.fields.values():
198 yield x
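
# Illustrative sketch (not part of the original API): RecordObject grows its
# layout dynamically as attributes are assigned, and __iter__ yields the
# resulting fields.  A hypothetical example:
#
#     class OpPair(RecordObject):
#         def __init__(self, name=None):
#             RecordObject.__init__(self, name=name)
#             self.src1 = Signal(16)   # each assignment extends the layout
#             self.src2 = Signal(16)
#
#     pair = OpPair()
#     # list(pair) -> [pair.src1, pair.src2], via __iter__ above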
199
200
201 class PrevControl:
202 """ contains signals that come *from* the previous stage (both in and out)
203 * i_valid: previous stage indicating all incoming data is valid.
204 may be a multi-bit signal, where all bits are required
205 to be asserted to indicate "valid".
206 * o_ready: output to next stage indicating readiness to accept data
207 * i_data : an input - added by the user of this class
208 """
209
210 def __init__(self, i_width=1, stage_ctl=False):
211 self.stage_ctl = stage_ctl
212 self.i_valid = Signal(i_width, name="p_i_valid") # prev >>in self
213 self._o_ready = Signal(name="p_o_ready") # prev <<out self
214 self.i_data = None # XXX MUST BE ADDED BY USER
215 if stage_ctl:
216 self.s_o_ready = Signal(name="p_s_o_rdy") # prev <<out self
217 self.trigger = Signal(reset_less=True)
218
219 @property
220 def o_ready(self):
221 """ public-facing API: indicates (externally) that stage is ready
222 """
223 if self.stage_ctl:
224 return self.s_o_ready # set dynamically by stage
225 return self._o_ready # return this when not under dynamic control
226
227 def _connect_in(self, prev, direct=False, fn=None):
228 """ internal helper function to connect stage to an input source.
229 do not use to connect stage-to-stage!
230 """
231 i_valid = prev.i_valid if direct else prev.i_valid_test
232 i_data = fn(prev.i_data) if fn is not None else prev.i_data
233 return [self.i_valid.eq(i_valid),
234 prev.o_ready.eq(self.o_ready),
235 eq(self.i_data, i_data),
236 ]
237
238 @property
239 def i_valid_test(self):
240 vlen = len(self.i_valid)
241 if vlen > 1:
242 # multi-bit case: valid only when i_valid is all 1s
243 all1s = Const(-1, (len(self.i_valid), False))
244 i_valid = (self.i_valid == all1s)
245 else:
246 # single-bit i_valid case
247 i_valid = self.i_valid
248
249 # when stage indicates not ready, incoming data
250             # must "appear" to be not valid too
251 if self.stage_ctl:
252 i_valid = i_valid & self.s_o_ready
253
254 return i_valid
255
256 def elaborate(self, platform):
257 m = Module()
258 m.d.comb += self.trigger.eq(self.i_valid_test & self.o_ready)
259 return m
260
261 def eq(self, i):
262 return [self.i_data.eq(i.i_data),
263 self.o_ready.eq(i.o_ready),
264 self.i_valid.eq(i.i_valid)]
265
266 def __iter__(self):
267 yield self.i_valid
268 yield self.o_ready
269 if hasattr(self.i_data, "ports"):
270 yield from self.i_data.ports()
271 elif isinstance(self.i_data, Sequence):
272 yield from self.i_data
273 else:
274 yield self.i_data
275
276 def ports(self):
277 return list(self)
278
279
280 class NextControl:
281 """ contains the signals that go *to* the next stage (both in and out)
282 * o_valid: output indicating to next stage that data is valid
283 * i_ready: input from next stage indicating that it can accept data
284 * o_data : an output - added by the user of this class
285 """
286 def __init__(self, stage_ctl=False):
287 self.stage_ctl = stage_ctl
288 self.o_valid = Signal(name="n_o_valid") # self out>> next
289 self.i_ready = Signal(name="n_i_ready") # self <<in next
290 self.o_data = None # XXX MUST BE ADDED BY USER
291 #if self.stage_ctl:
292 self.d_valid = Signal(reset=1) # INTERNAL (data valid)
293 self.trigger = Signal(reset_less=True)
294
295 @property
296 def i_ready_test(self):
297 if self.stage_ctl:
298 return self.i_ready & self.d_valid
299 return self.i_ready
300
301 def connect_to_next(self, nxt):
302 """ helper function to connect to the next stage data/valid/ready.
303 data/valid is passed *TO* nxt, and ready comes *IN* from nxt.
304 use this when connecting stage-to-stage
305 """
306 return [nxt.i_valid.eq(self.o_valid),
307 self.i_ready.eq(nxt.o_ready),
308 eq(nxt.i_data, self.o_data),
309 ]
310
311 def _connect_out(self, nxt, direct=False, fn=None):
312 """ internal helper function to connect stage to an output source.
313 do not use to connect stage-to-stage!
314 """
315 i_ready = nxt.i_ready if direct else nxt.i_ready_test
316 o_data = fn(nxt.o_data) if fn is not None else nxt.o_data
317 return [nxt.o_valid.eq(self.o_valid),
318 self.i_ready.eq(i_ready),
319 eq(o_data, self.o_data),
320 ]
321
322 def elaborate(self, platform):
323 m = Module()
324 m.d.comb += self.trigger.eq(self.i_ready_test & self.o_valid)
325 return m
326
327 def __iter__(self):
328 yield self.i_ready
329 yield self.o_valid
330 if hasattr(self.o_data, "ports"):
331 yield from self.o_data.ports()
332 elif isinstance(self.o_data, Sequence):
333 yield from self.o_data
334 else:
335 yield self.o_data
336
337 def ports(self):
338 return list(self)
339
340
341 class Visitor2:
342 """ a helper class for iterating twin-argument compound data structures.
343
344 Record is a special (unusual, recursive) case, where the input may be
345 specified as a dictionary (which may contain further dictionaries,
346 recursively), where the field names of the dictionary must match
347 the Record's field spec. Alternatively, an object with the same
348 member names as the Record may be assigned: it does not have to
349 *be* a Record.
350
351 ArrayProxy is also special-cased, it's a bit messy: whilst ArrayProxy
352 has an eq function, the object being assigned to it (e.g. a python
353 object) might not. despite the *input* having an eq function,
354 that doesn't help us, because it's the *ArrayProxy* that's being
355 assigned to. so.... we cheat. use the ports() function of the
356 python object, enumerate them, find out the list of Signals that way,
357 and assign them.
358 """
359 def iterator2(self, o, i):
360 if isinstance(o, dict):
361 yield from self.dict_iter2(o, i)
362
363 if not isinstance(o, Sequence):
364 o, i = [o], [i]
365 for (ao, ai) in zip(o, i):
366 #print ("visit", fn, ao, ai)
367 if isinstance(ao, Record):
368 yield from self.record_iter2(ao, ai)
369 elif isinstance(ao, ArrayProxy) and not isinstance(ai, Value):
370 yield from self.arrayproxy_iter2(ao, ai)
371 else:
372 yield (ao, ai)
373
374 def dict_iter2(self, o, i):
375 for (k, v) in o.items():
376 print ("d-iter", v, i[k])
377             yield (v, i[k])
379
380 def _not_quite_working_with_all_unit_tests_record_iter2(self, ao, ai):
381 print ("record_iter2", ao, ai, type(ao), type(ai))
382 if isinstance(ai, Value):
383 if isinstance(ao, Sequence):
384 ao, ai = [ao], [ai]
385 for o, i in zip(ao, ai):
386 yield (o, i)
387 return
388 for idx, (field_name, field_shape, _) in enumerate(ao.layout):
389 if isinstance(field_shape, Layout):
390 val = ai.fields
391 else:
392 val = ai
393 if hasattr(val, field_name): # check for attribute
394 val = getattr(val, field_name)
395 else:
396 val = val[field_name] # dictionary-style specification
397 yield from self.iterator2(ao.fields[field_name], val)
398
399 def record_iter2(self, ao, ai):
400 for idx, (field_name, field_shape, _) in enumerate(ao.layout):
401 if isinstance(field_shape, Layout):
402 val = ai.fields
403 else:
404 val = ai
405 if hasattr(val, field_name): # check for attribute
406 val = getattr(val, field_name)
407 else:
408 val = val[field_name] # dictionary-style specification
409 yield from self.iterator2(ao.fields[field_name], val)
410
411 def arrayproxy_iter2(self, ao, ai):
412 for p in ai.ports():
413 op = getattr(ao, p.name)
414 print ("arrayproxy - p", p, p.name)
415 yield from self.iterator2(op, p)
416
417
418 class Visitor:
419 """ a helper class for iterating single-argument compound data structures.
420 similar to Visitor2.
421 """
422 def iterate(self, i):
423 """ iterate a compound structure recursively using yield
424 """
425 if not isinstance(i, Sequence):
426 i = [i]
427 for ai in i:
428 print ("iterate", ai)
429 if isinstance(ai, Record):
430 print ("record", list(ai.layout))
431 yield from self.record_iter(ai)
432 elif isinstance(ai, ArrayProxy) and not isinstance(ai, Value):
433 yield from self.array_iter(ai)
434 else:
435 yield ai
436
437 def record_iter(self, ai):
438 for idx, (field_name, field_shape, _) in enumerate(ai.layout):
439 if isinstance(field_shape, Layout):
440 val = ai.fields
441 else:
442 val = ai
443 if hasattr(val, field_name): # check for attribute
444 val = getattr(val, field_name)
445 else:
446 val = val[field_name] # dictionary-style specification
447 print ("recidx", idx, field_name, field_shape, val)
448 yield from self.iterate(val)
449
450 def array_iter(self, ai):
451 for p in ai.ports():
452 yield from self.iterate(p)
453
454
455 def eq(o, i):
456 """ makes signals equal: a helper routine which identifies if it is being
457 passed a list (or tuple) of objects, or signals, or Records, and calls
458 the objects' eq function.
459 """
460 res = []
461 for (ao, ai) in Visitor2().iterator2(o, i):
462 rres = ao.eq(ai)
463 if not isinstance(rres, Sequence):
464 rres = [rres]
465 res += rres
466 return res
467
468
469 def shape(i):
470 print ("shape", i)
471 r = 0
472 for part in list(i):
473 print ("shape?", part)
474 s, _ = part.shape()
475 r += s
476 return r, False
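
# Usage sketch for shape(): assumes "i" is iterable and that every part has
# a .shape() method returning an (nbits, signed) tuple (e.g. a RecordObject
# full of Signals):
#
#     (width, signed) = shape(record_obj)   # total width of all fields
#     din = Signal(width)                   # e.g. to size a FIFO data word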
477
478
479 def cat(i):
480 """ flattens a compound structure recursively using Cat
481 """
482 from nmigen.tools import flatten
483 #res = list(flatten(i)) # works (as of nmigen commit f22106e5) HOWEVER...
484 res = list(Visitor().iterate(i)) # needed because input may be a sequence
485 return Cat(*res)
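
# Usage sketch for cat(): the returned Cat() can be read from *and* assigned
# to, which is how FIFOControl (below) packs o_data into fifo.din and unpacks
# fifo.dout again:
#
#     m.d.comb += eq(fifo.din, cat(result))          # pack
#     m.d.comb += cat(self.n.o_data).eq(fifo.dout)   # unpack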
486
487
488 class StageCls(metaclass=ABCMeta):
489 """ Class-based "Stage" API. requires instantiation (after derivation)
490
491 see "Stage API" above.. Note: python does *not* require derivation
492 from this class. All that is required is that the pipelines *have*
493 the functions listed in this class. Derivation from this class
494 is therefore merely a "courtesy" to maintainers.
495 """
496 @abstractmethod
497 def ispec(self): pass # REQUIRED
498 @abstractmethod
499 def ospec(self): pass # REQUIRED
500 #@abstractmethod
501 #def setup(self, m, i): pass # OPTIONAL
502 @abstractmethod
503 def process(self, i): pass # REQUIRED
504
505
506 class Stage(metaclass=ABCMeta):
507 """ Static "Stage" API. does not require instantiation (after derivation)
508
509 see "Stage API" above. Note: python does *not* require derivation
510 from this class. All that is required is that the pipelines *have*
511 the functions listed in this class. Derivation from this class
512 is therefore merely a "courtesy" to maintainers.
513 """
514 @staticmethod
515 @abstractmethod
516 def ispec(): pass
517
518 @staticmethod
519 @abstractmethod
520 def ospec(): pass
521
522 #@staticmethod
523 #@abstractmethod
524 #def setup(m, i): pass
525
526 @staticmethod
527 @abstractmethod
528 def process(i): pass
529
530
531 class RecordBasedStage(Stage):
532 """ convenience class which provides a Records-based layout.
533 honestly it's a lot easier just to create a direct Records-based
534 class (see ExampleAddRecordStage)
535 """
536 def __init__(self, in_shape, out_shape, processfn, setupfn=None):
537 self.in_shape = in_shape
538 self.out_shape = out_shape
539 self.__process = processfn
540 self.__setup = setupfn
541 def ispec(self): return Record(self.in_shape)
542 def ospec(self): return Record(self.out_shape)
543     def process(self, i): return self.__process(i)
544     def setup(self, m, i): return self.__setup(m, i)
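
# Usage sketch (hypothetical shapes and processing function, for illustration
# only):
#
#     in_shape  = [("op1", 16), ("op2", 16)]
#     out_shape = [("sum", 16)]
#     def add_fn(i): return {"sum": i.op1 + i.op2}
#     stage = RecordBasedStage(in_shape, out_shape, add_fn)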
545
546
547 class StageChain(StageCls):
548 """ pass in a list of stages, and they will automatically be
549 chained together via their input and output specs into a
550 combinatorial chain.
551
552 the end result basically conforms to the exact same Stage API.
553
554 * input to this class will be the input of the first stage
555 * output of first stage goes into input of second
556 * output of second goes into input into third (etc. etc.)
557 * the output of this class will be the output of the last stage
558 """
559 def __init__(self, chain, specallocate=False):
560 self.chain = chain
561 self.specallocate = specallocate
562
563 def ispec(self):
564 return self.chain[0].ispec()
565
566 def ospec(self):
567 return self.chain[-1].ospec()
568
569 def _specallocate_setup(self, m, i):
570 for (idx, c) in enumerate(self.chain):
571 if hasattr(c, "setup"):
572 c.setup(m, i) # stage may have some module stuff
573 o = self.chain[idx].ospec() # last assignment survives
574 m.d.comb += eq(o, c.process(i)) # process input into "o"
575 if idx == len(self.chain)-1:
576 break
577 i = self.chain[idx+1].ispec() # new input on next loop
578 m.d.comb += eq(i, o) # assign to next input
579 return o # last loop is the output
580
581 def _noallocate_setup(self, m, i):
582 for (idx, c) in enumerate(self.chain):
583 if hasattr(c, "setup"):
584 c.setup(m, i) # stage may have some module stuff
585             i = o = c.process(i)       # store result of process() in "o"
586 return o # last loop is the output
587
588 def setup(self, m, i):
589 if self.specallocate:
590 self.o = self._specallocate_setup(m, i)
591 else:
592 self.o = self._noallocate_setup(m, i)
593
594 def process(self, i):
595 return self.o # conform to Stage API: return last-loop output
596
597
598 class ControlBase:
599 """ Common functions for Pipeline API
600 """
601 def __init__(self, stage=None, in_multi=None, stage_ctl=False):
602 """ Base class containing ready/valid/data to previous and next stages
603
604 * p: contains ready/valid to the previous stage
605 * n: contains ready/valid to the next stage
606
607         Except when calling ControlBase.connect(), user must also:
608 * add i_data member to PrevControl (p) and
609 * add o_data member to NextControl (n)
610 """
611 self.stage = stage
612
613 # set up input and output IO ACK (prev/next ready/valid)
614 self.p = PrevControl(in_multi, stage_ctl)
615 self.n = NextControl(stage_ctl)
616
617 # set up the input and output data
618 if stage is not None:
619 self.p.i_data = stage.ispec() # input type
620 self.n.o_data = stage.ospec()
621
622 def connect_to_next(self, nxt):
623 """ helper function to connect to the next stage data/valid/ready.
624 """
625 return self.n.connect_to_next(nxt.p)
626
627 def _connect_in(self, prev):
628 """ internal helper function to connect stage to an input source.
629 do not use to connect stage-to-stage!
630 """
631 return self.p._connect_in(prev.p)
632
633 def _connect_out(self, nxt):
634 """ internal helper function to connect stage to an output source.
635 do not use to connect stage-to-stage!
636 """
637 return self.n._connect_out(nxt.n)
638
639 def connect(self, pipechain):
640 """ connects a chain (list) of Pipeline instances together and
641 links them to this ControlBase instance:
642
643 in <----> self <---> out
644 | ^
645 v |
646 [pipe1, pipe2, pipe3, pipe4]
647 | ^ | ^ | ^
648 v | v | v |
649 out---in out--in out---in
650
651 Also takes care of allocating i_data/o_data, by looking up
652 the data spec for each end of the pipechain. i.e It is NOT
653 necessary to allocate self.p.i_data or self.n.o_data manually:
654 this is handled AUTOMATICALLY, here.
655
656 Basically this function is the direct equivalent of StageChain,
657 except that unlike StageChain, the Pipeline logic is followed.
658
659 Just as StageChain presents an object that conforms to the
660 Stage API from a list of objects that also conform to the
661 Stage API, an object that calls this Pipeline connect function
662 has the exact same pipeline API as the list of pipline objects
663 it is called with.
664
665 Thus it becomes possible to build up larger chains recursively.
666 More complex chains (multi-input, multi-output) will have to be
667 done manually.
668 """
669 eqs = [] # collated list of assignment statements
670
671 # connect inter-chain
672 for i in range(len(pipechain)-1):
673 pipe1 = pipechain[i]
674 pipe2 = pipechain[i+1]
675 eqs += pipe1.connect_to_next(pipe2)
676
677 # connect front of chain to ourselves
678 front = pipechain[0]
679 self.p.i_data = front.stage.ispec()
680 eqs += front._connect_in(self)
681
682 # connect end of chain to ourselves
683 end = pipechain[-1]
684 self.n.o_data = end.stage.ospec()
685 eqs += end._connect_out(self)
686
687 return eqs
688
689 def _postprocess(self, i): # XXX DISABLED
690 return i # RETURNS INPUT
691 if hasattr(self.stage, "postprocess"):
692 return self.stage.postprocess(i)
693 return i
694
695 def set_input(self, i):
696 """ helper function to set the input data
697 """
698 return eq(self.p.i_data, i)
699
700 def __iter__(self):
701 yield from self.p
702 yield from self.n
703
704 def ports(self):
705 return list(self)
706
707 def _elaborate(self, platform):
708 """ handles case where stage has dynamic ready/valid functions
709 """
710 m = Module()
711 m.submodules.p = self.p
712 m.submodules.n = self.n
713
714 if self.stage is not None and hasattr(self.stage, "setup"):
715 self.stage.setup(m, self.p.i_data)
716
717 if not self.p.stage_ctl:
718 return m
719
720 # intercept the previous (outgoing) "ready", combine with stage ready
721 m.d.comb += self.p.s_o_ready.eq(self.p._o_ready & self.stage.d_ready)
722
723 # intercept the next (incoming) "ready" and combine it with data valid
724 sdv = self.stage.d_valid(self.n.i_ready)
725 m.d.comb += self.n.d_valid.eq(self.n.i_ready & sdv)
726
727 return m
728
729
730 class BufferedHandshake(ControlBase):
731 """ buffered pipeline stage. data and strobe signals travel in sync.
732 if ever the input is ready and the output is not, processed data
733 is shunted in a temporary register.
734
735 Argument: stage. see Stage API above
736
737 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
738 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
739 stage-1 p.i_data >>in stage n.o_data out>> stage+1
740 | |
741 process --->----^
742 | |
743 +-- r_data ->-+
744
745 input data p.i_data is read (only), is processed and goes into an
746 intermediate result store [process()]. this is updated combinatorially.
747
748 in a non-stall condition, the intermediate result will go into the
749 output (update_output). however if ever there is a stall, it goes
750 into r_data instead [update_buffer()].
751
752 when the non-stall condition is released, r_data is the first
753 to be transferred to the output [flush_buffer()], and the stall
754 condition cleared.
755
756 on the next cycle (as long as stall is not raised again) the
757 input may begin to be processed and transferred directly to output.
758 """
759
760 def elaborate(self, platform):
761 self.m = ControlBase._elaborate(self, platform)
762
763 result = self.stage.ospec()
764 r_data = self.stage.ospec()
765
766 # establish some combinatorial temporaries
767 o_n_validn = Signal(reset_less=True)
768 n_i_ready = Signal(reset_less=True, name="n_i_rdy_data")
769 nir_por = Signal(reset_less=True)
770 nir_por_n = Signal(reset_less=True)
771 p_i_valid = Signal(reset_less=True)
772 nir_novn = Signal(reset_less=True)
773 nirn_novn = Signal(reset_less=True)
774 por_pivn = Signal(reset_less=True)
775 npnn = Signal(reset_less=True)
776 self.m.d.comb += [p_i_valid.eq(self.p.i_valid_test),
777 o_n_validn.eq(~self.n.o_valid),
778 n_i_ready.eq(self.n.i_ready_test),
779 nir_por.eq(n_i_ready & self.p._o_ready),
780 nir_por_n.eq(n_i_ready & ~self.p._o_ready),
781 nir_novn.eq(n_i_ready | o_n_validn),
782 nirn_novn.eq(~n_i_ready & o_n_validn),
783 npnn.eq(nir_por | nirn_novn),
784 por_pivn.eq(self.p._o_ready & ~p_i_valid)
785 ]
786
787 # store result of processing in combinatorial temporary
788 self.m.d.comb += eq(result, self.stage.process(self.p.i_data))
789
790 # if not in stall condition, update the temporary register
791 with self.m.If(self.p.o_ready): # not stalled
792 self.m.d.sync += eq(r_data, result) # update buffer
793
794 # data pass-through conditions
795 with self.m.If(npnn):
796 o_data = self._postprocess(result)
797 self.m.d.sync += [self.n.o_valid.eq(p_i_valid), # valid if p_valid
798 eq(self.n.o_data, o_data), # update output
799 ]
800 # buffer flush conditions (NOTE: can override data passthru conditions)
801 with self.m.If(nir_por_n): # not stalled
802 # Flush the [already processed] buffer to the output port.
803 o_data = self._postprocess(r_data)
804 self.m.d.sync += [self.n.o_valid.eq(1), # reg empty
805 eq(self.n.o_data, o_data), # flush buffer
806 ]
807 # output ready conditions
808 self.m.d.sync += self.p._o_ready.eq(nir_novn | por_pivn)
809
810 return self.m
811
812
813 class SimpleHandshake(ControlBase):
814 """ simple handshake control. data and strobe signals travel in sync.
815 implements the protocol used by Wishbone and AXI4.
816
817 Argument: stage. see Stage API above
818
819 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
820 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
821 stage-1 p.i_data >>in stage n.o_data out>> stage+1
822 | |
823 +--process->--^
824 Truth Table
825
826 Inputs Temporary Output Data
827 ------- ---------- ----- ----
828 P P N N PiV& ~NiR& N P
829 i o i o PoR NoV o o
830 V R R V V R
831
832 ------- - - - -
833 0 0 0 0 0 0 >0 0 reg
834 0 0 0 1 0 1 >1 0 reg
835 0 0 1 0 0 0 0 1 process(i_data)
836 0 0 1 1 0 0 0 1 process(i_data)
837 ------- - - - -
838 0 1 0 0 0 0 >0 0 reg
839 0 1 0 1 0 1 >1 0 reg
840 0 1 1 0 0 0 0 1 process(i_data)
841 0 1 1 1 0 0 0 1 process(i_data)
842 ------- - - - -
843 1 0 0 0 0 0 >0 0 reg
844 1 0 0 1 0 1 >1 0 reg
845 1 0 1 0 0 0 0 1 process(i_data)
846 1 0 1 1 0 0 0 1 process(i_data)
847 ------- - - - -
848 1 1 0 0 1 0 1 0 process(i_data)
849 1 1 0 1 1 1 1 0 process(i_data)
850 1 1 1 0 1 0 1 1 process(i_data)
851 1 1 1 1 1 0 1 1 process(i_data)
852 ------- - - - -
853 """
854
855 def elaborate(self, platform):
856 self.m = m = ControlBase._elaborate(self, platform)
857
858 r_busy = Signal()
859 result = self.stage.ospec()
860
861 # establish some combinatorial temporaries
862 n_i_ready = Signal(reset_less=True, name="n_i_rdy_data")
863 p_i_valid_p_o_ready = Signal(reset_less=True)
864 p_i_valid = Signal(reset_less=True)
865 m.d.comb += [p_i_valid.eq(self.p.i_valid_test),
866 n_i_ready.eq(self.n.i_ready_test),
867 p_i_valid_p_o_ready.eq(p_i_valid & self.p.o_ready),
868 ]
869
870 # store result of processing in combinatorial temporary
871 m.d.comb += eq(result, self.stage.process(self.p.i_data))
872
873 # previous valid and ready
874 with m.If(p_i_valid_p_o_ready):
875 o_data = self._postprocess(result)
876 m.d.sync += [r_busy.eq(1), # output valid
877 eq(self.n.o_data, o_data), # update output
878 ]
879 # previous invalid or not ready, however next is accepting
880 with m.Elif(n_i_ready):
881 o_data = self._postprocess(result)
882 m.d.sync += [eq(self.n.o_data, o_data)]
883 # TODO: could still send data here (if there was any)
884 #m.d.sync += self.n.o_valid.eq(0) # ...so set output invalid
885 m.d.sync += r_busy.eq(0) # ...so set output invalid
886
887 m.d.comb += self.n.o_valid.eq(r_busy)
888 # if next is ready, so is previous
889 m.d.comb += self.p._o_ready.eq(n_i_ready)
890
891 return self.m
892
893
894 class UnbufferedPipeline(ControlBase):
895 """ A simple pipeline stage with single-clock synchronisation
896 and two-way valid/ready synchronised signalling.
897
898 Note that a stall in one stage will result in the entire pipeline
899 chain stalling.
900
901 Also that unlike BufferedHandshake, the valid/ready signalling does NOT
902 travel synchronously with the data: the valid/ready signalling
903 combines in a *combinatorial* fashion. Therefore, a long pipeline
904 chain will lengthen propagation delays.
905
906 Argument: stage. see Stage API, above
907
908 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
909 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
910 stage-1 p.i_data >>in stage n.o_data out>> stage+1
911 | |
912 r_data result
913 | |
914 +--process ->-+
915
916 Attributes:
917 -----------
918 p.i_data : StageInput, shaped according to ispec
919 The pipeline input
920     n.o_data : StageOutput, shaped according to ospec
921 The pipeline output
922     r_data : output_shape according to ospec
923         A temporary (registered) copy of the processed result.
924         This is HELD if the output is not ready.  It is updated
925         SYNCHRONOUSLY, whenever the input is valid and accepted.
926     n.o_data is driven COMBINATORIALLY from r_data: apart from that
927         single register there is no further clock dependence on the
928         output side.
929
930 Truth Table
931
932 Inputs Temp Output Data
933 ------- - ----- ----
934 P P N N ~NiR& N P
935 i o i o NoV o o
936 V R R V V R
937
938 ------- - - -
939 0 0 0 0 0 0 1 reg
940 0 0 0 1 1 1 0 reg
941 0 0 1 0 0 0 1 reg
942 0 0 1 1 0 0 1 reg
943 ------- - - -
944 0 1 0 0 0 0 1 reg
945 0 1 0 1 1 1 0 reg
946 0 1 1 0 0 0 1 reg
947 0 1 1 1 0 0 1 reg
948 ------- - - -
949 1 0 0 0 0 1 1 reg
950 1 0 0 1 1 1 0 reg
951 1 0 1 0 0 1 1 reg
952 1 0 1 1 0 1 1 reg
953 ------- - - -
954 1 1 0 0 0 1 1 process(i_data)
955 1 1 0 1 1 1 0 process(i_data)
956 1 1 1 0 0 1 1 process(i_data)
957 1 1 1 1 0 1 1 process(i_data)
958 ------- - - -
959
960 Note: PoR is *NOT* involved in the above decision-making.
961 """
962
963 def elaborate(self, platform):
964 self.m = m = ControlBase._elaborate(self, platform)
965
966 data_valid = Signal() # is data valid or not
967 r_data = self.stage.ospec() # output type
968
969 # some temporaries
970 p_i_valid = Signal(reset_less=True)
971 pv = Signal(reset_less=True)
972 buf_full = Signal(reset_less=True)
973 m.d.comb += p_i_valid.eq(self.p.i_valid_test)
974 m.d.comb += pv.eq(self.p.i_valid & self.p.o_ready)
975 m.d.comb += buf_full.eq(~self.n.i_ready_test & data_valid)
976
977 m.d.comb += self.n.o_valid.eq(data_valid)
978 m.d.comb += self.p._o_ready.eq(~data_valid | self.n.i_ready_test)
979 m.d.sync += data_valid.eq(p_i_valid | buf_full)
980
981 with m.If(pv):
982 m.d.sync += eq(r_data, self.stage.process(self.p.i_data))
983 o_data = self._postprocess(r_data)
984 m.d.comb += eq(self.n.o_data, o_data)
985
986 return self.m
987
988 class UnbufferedPipeline2(ControlBase):
989 """ A simple pipeline stage with single-clock synchronisation
990 and two-way valid/ready synchronised signalling.
991
992 Note that a stall in one stage will result in the entire pipeline
993 chain stalling.
994
995 Also that unlike BufferedHandshake, the valid/ready signalling does NOT
996 travel synchronously with the data: the valid/ready signalling
997 combines in a *combinatorial* fashion. Therefore, a long pipeline
998 chain will lengthen propagation delays.
999
1000 Argument: stage. see Stage API, above
1001
1002 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
1003 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
1004 stage-1 p.i_data >>in stage n.o_data out>> stage+1
1005 | | |
1006 +- process-> buf <-+
1007 Attributes:
1008 -----------
1009 p.i_data : StageInput, shaped according to ispec
1010 The pipeline input
1011     n.o_data : StageOutput, shaped according to ospec
1012 The pipeline output
1013 buf : output_shape according to ospec
1014 A temporary (buffered) copy of a valid output
1015 This is HELD if the output is not ready. It is updated
1016 SYNCHRONOUSLY.
1017
1018 Inputs Temp Output Data
1019 ------- - -----
1020 P P N N ~NiR& N P (buf_full)
1021 i o i o NoV o o
1022 V R R V V R
1023
1024 ------- - - -
1025 0 0 0 0 0 0 1 process(i_data)
1026 0 0 0 1 1 1 0 reg (odata, unchanged)
1027 0 0 1 0 0 0 1 process(i_data)
1028 0 0 1 1 0 0 1 process(i_data)
1029 ------- - - -
1030 0 1 0 0 0 0 1 process(i_data)
1031 0 1 0 1 1 1 0 reg (odata, unchanged)
1032 0 1 1 0 0 0 1 process(i_data)
1033 0 1 1 1 0 0 1 process(i_data)
1034 ------- - - -
1035 1 0 0 0 0 1 1 process(i_data)
1036 1 0 0 1 1 1 0 reg (odata, unchanged)
1037 1 0 1 0 0 1 1 process(i_data)
1038 1 0 1 1 0 1 1 process(i_data)
1039 ------- - - -
1040 1 1 0 0 0 1 1 process(i_data)
1041 1 1 0 1 1 1 0 reg (odata, unchanged)
1042 1 1 1 0 0 1 1 process(i_data)
1043 1 1 1 1 0 1 1 process(i_data)
1044 ------- - - -
1045
1046 Note: PoR is *NOT* involved in the above decision-making.
1047 """
1048
1049 def elaborate(self, platform):
1050 self.m = m = ControlBase._elaborate(self, platform)
1051
1052 buf_full = Signal() # is data valid or not
1053 buf = self.stage.ospec() # output type
1054
1055 # some temporaries
1056 p_i_valid = Signal(reset_less=True)
1057 m.d.comb += p_i_valid.eq(self.p.i_valid_test)
1058
1059 m.d.comb += self.n.o_valid.eq(buf_full | p_i_valid)
1060 m.d.comb += self.p._o_ready.eq(~buf_full)
1061 m.d.sync += buf_full.eq(~self.n.i_ready_test & self.n.o_valid)
1062
1063 o_data = Mux(buf_full, buf, self.stage.process(self.p.i_data))
1064 o_data = self._postprocess(o_data)
1065 m.d.comb += eq(self.n.o_data, o_data)
1066 m.d.sync += eq(buf, self.n.o_data)
1067
1068 return self.m
1069
1070
1071 class PassThroughStage(StageCls):
1072 """ a pass-through stage which has its input data spec equal to its output,
1073 and "passes through" its data from input to output.
1074 """
1075 def __init__(self, iospecfn):
1076 self.iospecfn = iospecfn
1077 def ispec(self): return self.iospecfn()
1078 def ospec(self): return self.iospecfn()
1079 def process(self, i): return i
1080
1081
1082 class PassThroughHandshake(ControlBase):
1083 """ A control block that delays by one clock cycle.
1084
1085 Inputs Temporary Output Data
1086 ------- ------------------ ----- ----
1087 P P N N PiV& PiV| NiR| pvr N P (pvr)
1088 i o i o PoR ~PoR ~NoV o o
1089 V R R V V R
1090
1091 ------- - - - - - -
1092 0 0 0 0 0 1 1 0 1 1 odata (unchanged)
1093 0 0 0 1 0 1 0 0 1 0 odata (unchanged)
1094 0 0 1 0 0 1 1 0 1 1 odata (unchanged)
1095 0 0 1 1 0 1 1 0 1 1 odata (unchanged)
1096 ------- - - - - - -
1097 0 1 0 0 0 0 1 0 0 1 odata (unchanged)
1098 0 1 0 1 0 0 0 0 0 0 odata (unchanged)
1099 0 1 1 0 0 0 1 0 0 1 odata (unchanged)
1100 0 1 1 1 0 0 1 0 0 1 odata (unchanged)
1101 ------- - - - - - -
1102 1 0 0 0 0 1 1 1 1 1 process(in)
1103 1 0 0 1 0 1 0 0 1 0 odata (unchanged)
1104 1 0 1 0 0 1 1 1 1 1 process(in)
1105 1 0 1 1 0 1 1 1 1 1 process(in)
1106 ------- - - - - - -
1107 1 1 0 0 1 1 1 1 1 1 process(in)
1108 1 1 0 1 1 1 0 0 1 0 odata (unchanged)
1109 1 1 1 0 1 1 1 1 1 1 process(in)
1110 1 1 1 1 1 1 1 1 1 1 process(in)
1111 ------- - - - - - -
1112
1113 """
1114
1115 def elaborate(self, platform):
1116 self.m = m = ControlBase._elaborate(self, platform)
1117
1118 r_data = self.stage.ospec() # output type
1119
1120 # temporaries
1121 p_i_valid = Signal(reset_less=True)
1122 pvr = Signal(reset_less=True)
1123 m.d.comb += p_i_valid.eq(self.p.i_valid_test)
1124 m.d.comb += pvr.eq(p_i_valid & self.p.o_ready)
1125
1126 m.d.comb += self.p.o_ready.eq(~self.n.o_valid | self.n.i_ready_test)
1127 m.d.sync += self.n.o_valid.eq(p_i_valid | ~self.p.o_ready)
1128
1129 odata = Mux(pvr, self.stage.process(self.p.i_data), r_data)
1130 m.d.sync += eq(r_data, odata)
1131 r_data = self._postprocess(r_data)
1132 m.d.comb += eq(self.n.o_data, r_data)
1133
1134 return m
1135
1136
1137 class RegisterPipeline(UnbufferedPipeline):
1138 """ A pipeline stage that delays by one clock cycle, creating a
1139 sync'd latch out of o_data and o_valid as an indirect byproduct
1140 of using PassThroughStage
1141 """
1142 def __init__(self, iospecfn):
1143 UnbufferedPipeline.__init__(self, PassThroughStage(iospecfn))
1144
1145
1146 class FIFOControl(ControlBase):
1147 """ FIFO Control. Uses SyncFIFO to store data, coincidentally
1148 happens to have same valid/ready signalling as Stage API.
1149
1150 i_data -> fifo.din -> FIFO -> fifo.dout -> o_data
1151 """
1152
1153 def __init__(self, depth, stage, in_multi=None, stage_ctl=False,
1154 fwft=True, buffered=False, pipe=False):
1155 """ FIFO Control
1156
1157 * depth: number of entries in the FIFO
1158 * stage: data processing block
1159 * fwft : first word fall-thru mode (non-fwft introduces delay)
1160 * buffered: use buffered FIFO (introduces extra cycle delay)
1161
1162 NOTE 1: FPGAs may have trouble with the defaults for SyncFIFO
1163 (fwft=True, buffered=False)
1164
1165 NOTE 2: i_data *must* have a shape function. it can therefore
1166 be a Signal, or a Record, or a RecordObject.
1167
1168 data is processed (and located) as follows:
1169
1170 self.p self.stage temp fn temp fn temp fp self.n
1171 i_data->process()->result->cat->din.FIFO.dout->cat(o_data)
1172
1173 yes, really: cat produces a Cat() which can be assigned to.
1174 this is how the FIFO gets de-catted without needing a de-cat
1175 function
1176 """
1177
1178 assert not (fwft and buffered), "buffered cannot do fwft"
1179 if buffered:
1180 depth += 1
1181 self.fwft = fwft
1182 self.buffered = buffered
1183 self.pipe = pipe
1184 self.fdepth = depth
1185 ControlBase.__init__(self, stage, in_multi, stage_ctl)
1186
1187 def elaborate(self, platform):
1188 self.m = m = ControlBase._elaborate(self, platform)
1189
1190 # make a FIFO with a signal of equal width to the o_data.
1191 (fwidth, _) = shape(self.n.o_data)
1192 if self.buffered:
1193 fifo = SyncFIFOBuffered(fwidth, self.fdepth)
1194 else:
1195 fifo = Queue(fwidth, self.fdepth, fwft=self.fwft, pipe=self.pipe)
1196 m.submodules.fifo = fifo
1197
1198 # store result of processing in combinatorial temporary
1199 result = self.stage.ospec()
1200 m.d.comb += eq(result, self.stage.process(self.p.i_data))
1201
1202 # connect previous rdy/valid/data - do cat on i_data
1203 # NOTE: cannot do the PrevControl-looking trick because
1204 # of need to process the data. shaaaame....
1205 m.d.comb += [fifo.we.eq(self.p.i_valid_test),
1206 self.p.o_ready.eq(fifo.writable),
1207 eq(fifo.din, cat(result)),
1208 ]
1209
1210 # connect next rdy/valid/data - do cat on o_data
1211 connections = [self.n.o_valid.eq(fifo.readable),
1212 fifo.re.eq(self.n.i_ready_test),
1213 ]
1214 if self.fwft or self.buffered:
1215 m.d.comb += connections
1216 else:
1217             m.d.sync += connections # non-fwft mode needs sync
1218 o_data = cat(self.n.o_data).eq(fifo.dout)
1219 o_data = self._postprocess(o_data)
1220 m.d.comb += o_data
1221
1222 return m
1223
1224
1225 # aka "RegStage".
1226 class UnbufferedPipeline(FIFOControl):
1227 def __init__(self, stage, in_multi=None, stage_ctl=False):
1228 FIFOControl.__init__(self, 1, stage, in_multi, stage_ctl,
1229 fwft=True, pipe=False)
1230
1231 # aka "BreakReadyStage" XXX had to set fwft=True to get it to work
1232 class PassThroughHandshake(FIFOControl):
1233 def __init__(self, stage, in_multi=None, stage_ctl=False):
1234 FIFOControl.__init__(self, 1, stage, in_multi, stage_ctl,
1235 fwft=True, pipe=True)
1236
1237 # this is *probably* BufferedHandshake, although test #997 now succeeds.
1238 class BufferedHandshake(FIFOControl):
1239 def __init__(self, stage, in_multi=None, stage_ctl=False):
1240 FIFOControl.__init__(self, 2, stage, in_multi, stage_ctl,
1241 fwft=True, pipe=False)
1242
1243
1244 # this is *probably* SimpleHandshake (note: memory cell size=0)
1245 class SimpleHandshake(FIFOControl):
1246 def __init__(self, stage, in_multi=None, stage_ctl=False):
1247 FIFOControl.__init__(self, 0, stage, in_multi, stage_ctl,
1248 fwft=True, pipe=False)
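
# Usage sketch (ExampleAddStage is hypothetical; see the Stage API notes in
# the module docstring):
#
#     pipe = FIFOControl(4, ExampleAddStage())
#     m.submodules.pipe = pipe
#     # drive pipe.p.i_valid / pipe.p.i_data, honour pipe.p.o_ready, and
#     # accept pipe.n.o_data by asserting pipe.n.i_ready while
#     # pipe.n.o_valid is HIGH.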