add post-processing optional capability
[ieee754fpu.git] / src / add / singlepipe.py
1 """ Pipeline and BufferedHandshake implementation, conforming to the same API.
2 For multi-input and multi-output variants, see multipipe.
3
4 eq:
5 --
6
7 a strategically very important function that is identical in function
8 to nmigen's Signal.eq function, except it may take objects, or a list
9 of objects, or a tuple of objects, and where objects may also be
10 Records.
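
    For example (a sketch; "m" is an nmigen Module and the signal and
    Record names here are purely illustrative):

        m.d.comb += eq([x, y], [a, b])        # list-to-list assignment
        m.d.comb += eq(rec, {"op": op_sig})   # dict assigned to a Record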
11
12 Stage API:
13 ---------
14
15 stage requires compliance with a strict API that may be
16 implemented in several means, including as a static class.
17 the methods of a stage instance must be as follows:
18
19 * ispec() - Input data format specification
20 returns an object or a list or tuple of objects, or
21 a Record, each object having an "eq" function which
22 takes responsibility for copying by assignment all
23 sub-objects
24 * ospec() - Output data format specification
25               requirements as for ispec
26     * process(i) - Processes an ispec-formatted object
27 returns a combinatorial block of a result that
28 may be assigned to the output, by way of the "eq"
29 function
30 * setup(m, i) - Optional function for setting up submodules
31 may be used for more complex stages, to link
32 the input (i) to submodules. must take responsibility
33 for adding those submodules to the module (m).
34 the submodules must be combinatorial blocks and
35 must have their inputs and output linked combinatorially.
36
37 Both StageCls (for use with non-static classes) and Stage (for use
38 by static classes) are abstract classes from which, for convenience
39 and as a courtesy to other developers, anything conforming to the
40 Stage API may *choose* to derive.
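
    A minimal sketch of a class conforming to the Stage API (the class
    and signal names here are illustrative only):

        class ExampleAddStage:
            def ispec(self):
                return (Signal(16, name="a"), Signal(16, name="b"))
            def ospec(self):
                return Signal(16, name="add_o")
            def process(self, i):
                return i[0] + i[1]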
41
42 StageChain:
43 ----------
44
45 A useful combinatorial wrapper around stages that chains them together
46 and then presents a Stage-API-conformant interface. By presenting
47 the same API as the stages it wraps, it can clearly be used recursively.
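
    For example (a sketch, assuming StageA and StageB conform to the
    Stage API and that StageA's ospec matches StageB's ispec):

        chain = StageChain([StageA(), StageB()])
        pipe  = SimpleHandshake(chain)   # the chain acts as a single stage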
48
49 RecordBasedStage:
50 ----------------
51
52 A convenience class that takes an input shape, output shape, a
53 "processing" function and an optional "setup" function. Honestly
54 though, there's not much more effort to just... create a class
55 that returns a couple of Records (see ExampleAddRecordStage in
56 examples).
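
    For example (a sketch; the shapes and the lambda are illustrative only):

        stage = RecordBasedStage([('a', 16), ('b', 16)],       # input shape
                                 [('add_o', 16)],              # output shape
                                 lambda i: {'add_o': i.a + i.b})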
57
58 PassThroughStage:
59 ----------------
60
61 A convenience class that takes a single function as a parameter,
62 that is chain-called to create the exact same input and output spec.
63 It has a process() function that simply returns its input.
64
65 Instances of this class are completely redundant if handed to
66 StageChain, however when passed to UnbufferedPipeline they
67 can be used to introduce a single clock delay.
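
    For example (a sketch), a one-clock delay of a 16-bit value, where
    iospecfn returns a fresh Signal each time it is called (it is used
    for both ispec and ospec):

        def iospecfn():
            return Signal(16, name="data")
        delay = UnbufferedPipeline(PassThroughStage(iospecfn))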
68
69 ControlBase:
70 -----------
71
72 The base class for pipelines. Contains previous and next ready/valid/data.
73 Also has an extremely useful "connect" function that can be used to
74 connect a chain of pipelines and present the exact same prev/next
75 ready/valid/data API.
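
    For example (a sketch, where pipe1..pipe3 are instances of
    ControlBase-derived classes such as SimpleHandshake):

        class ExamplePipeline(ControlBase):
            def elaborate(self, platform):
                m = Module()
                m.submodules += [pipe1, pipe2, pipe3]
                m.d.comb += self.connect([pipe1, pipe2, pipe3])
                return m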
76
77 UnbufferedPipeline:
78 ------------------
79
80 A simple stalling clock-synchronised pipeline that has no buffering
81 (unlike BufferedHandshake). Data flows on *every* clock cycle when
82 the conditions are right (this is nominally when the input is valid
83 and the output is ready).
84
85 A stall anywhere along the line will result in a stall back-propagating
86 down the entire chain. The BufferedHandshake by contrast will buffer
87 incoming data, allowing previous stages one clock cycle's grace before
88 also having to stall.
89
90 An advantage of the UnbufferedPipeline over the Buffered one is
91 that the amount of logic needed (number of gates) is greatly
92     reduced (no second set of buffers, basically).
93
94 The disadvantage of the UnbufferedPipeline is that the valid/ready
95 logic, if chained together, is *combinatorial*, resulting in
96 progressively larger gate delay.
97
98 PassThroughHandshake:
99 ------------------
100
101 A Control class that introduces a single clock delay, passing its
102 data through unaltered. Unlike RegisterPipeline (which relies
103 on UnbufferedPipeline and PassThroughStage) it handles ready/valid
104 itself.
105
106 RegisterPipeline:
107 ----------------
108
109     A convenience class: because UnbufferedPipeline introduces a single
110     clock delay, using it with a PassThroughStage results in a pipeline
111     stage that simply delays its (unmodified) input by one clock cycle.
112
113 BufferedHandshake:
114 ----------------
115
116 nmigen implementation of buffered pipeline stage, based on zipcpu:
117 https://zipcpu.com/blog/2017/08/14/strategies-for-pipelining.html
118
119 this module requires quite a bit of thought to understand how it works
120 (and why it is needed in the first place). reading the above is
121 *strongly* recommended.
122
123 unlike john dawson's IEEE754 FPU STB/ACK signalling, which requires
124 the STB / ACK signals to raise and lower (on separate clocks) before
125     data may proceed (thus only allowing one piece of data to proceed
126 on *ALTERNATE* cycles), the signalling here is a true pipeline
127 where data will flow on *every* clock when the conditions are right.
128
129 input acceptance conditions are when:
130 * incoming previous-stage strobe (p.i_valid) is HIGH
131 * outgoing previous-stage ready (p.o_ready) is LOW
132
133 output transmission conditions are when:
134 * outgoing next-stage strobe (n.o_valid) is HIGH
135 * outgoing next-stage ready (n.i_ready) is LOW
136
137 the tricky bit is when the input has valid data and the output is not
138 ready to accept it. if it wasn't for the clock synchronisation, it
139 would be possible to tell the input "hey don't send that data, we're
140 not ready". unfortunately, it's not possible to "change the past":
141 the previous stage *has no choice* but to pass on its data.
142
143 therefore, the incoming data *must* be accepted - and stored: that
144 is the responsibility / contract that this stage *must* accept.
145 on the same clock, it's possible to tell the input that it must
146 not send any more data. this is the "stall" condition.
147
148 we now effectively have *two* possible pieces of data to "choose" from:
149 the buffered data, and the incoming data. the decision as to which
150 to process and output is based on whether we are in "stall" or not.
151     i.e. once the next stage is ready again, the output comes from
152 the buffer if a stall had previously occurred, otherwise it comes
153 direct from processing the input.
154
155 this allows us to respect a synchronous "travelling STB" with what
156 dan calls a "buffered handshake".
157
158 it's quite a complex state machine!
159
160 SimpleHandshake
161 ---------------
162
163     Synchronised pipeline, based on:
164 https://github.com/ZipCPU/dbgbus/blob/master/hexbus/rtl/hbdeword.v
165 """
166
167 from nmigen import Signal, Cat, Const, Mux, Module, Value
168 from nmigen.cli import verilog, rtlil
169 from nmigen.lib.fifo import SyncFIFO, SyncFIFOBuffered
170 from nmigen.hdl.ast import ArrayProxy
171 from nmigen.hdl.rec import Record, Layout
172
173 from abc import ABCMeta, abstractmethod
174 from collections.abc import Sequence
175
176
177 class RecordObject(Record):
178 def __init__(self, layout=None, name=None):
179         Record.__init__(self, layout=layout or [], name=name)
180
181 def __setattr__(self, k, v):
182 if k in dir(Record) or "fields" not in self.__dict__:
183 return object.__setattr__(self, k, v)
184 self.fields[k] = v
185 if isinstance(v, Record):
186 newlayout = {k: (k, v.layout)}
187 else:
188 newlayout = {k: (k, v.shape())}
189 self.layout.fields.update(newlayout)
190
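# Example use of RecordObject (a sketch): fields may be added by plain
# attribute assignment after construction, and the Record layout is
# updated automatically to match:
#
#     class ExampleRecord(RecordObject):
#         def __init__(self):
#             RecordObject.__init__(self)
#             self.op   = Signal(2)
#             self.data = Signal(16)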
191
192
193 class PrevControl:
194 """ contains signals that come *from* the previous stage (both in and out)
195 * i_valid: previous stage indicating all incoming data is valid.
196 may be a multi-bit signal, where all bits are required
197 to be asserted to indicate "valid".
198         * o_ready: output to previous stage indicating readiness to accept data
199 * i_data : an input - added by the user of this class
200 """
201
202 def __init__(self, i_width=1, stage_ctl=False):
203 self.stage_ctl = stage_ctl
204 self.i_valid = Signal(i_width, name="p_i_valid") # prev >>in self
205 self._o_ready = Signal(name="p_o_ready") # prev <<out self
206 self.i_data = None # XXX MUST BE ADDED BY USER
207 if stage_ctl:
208 self.s_o_ready = Signal(name="p_s_o_rdy") # prev <<out self
209
210 @property
211 def o_ready(self):
212 """ public-facing API: indicates (externally) that stage is ready
213 """
214 if self.stage_ctl:
215 return self.s_o_ready # set dynamically by stage
216 return self._o_ready # return this when not under dynamic control
217
218 def _connect_in(self, prev, direct=False, fn=None):
219 """ internal helper function to connect stage to an input source.
220 do not use to connect stage-to-stage!
221 """
222 i_valid = prev.i_valid if direct else prev.i_valid_test
223 i_data = fn(prev.i_data) if fn is not None else prev.i_data
224 return [self.i_valid.eq(i_valid),
225 prev.o_ready.eq(self.o_ready),
226 eq(self.i_data, i_data),
227 ]
228
229 @property
230 def i_valid_test(self):
231 vlen = len(self.i_valid)
232 if vlen > 1:
233 # multi-bit case: valid only when i_valid is all 1s
234 all1s = Const(-1, (len(self.i_valid), False))
235 i_valid = (self.i_valid == all1s)
236 else:
237 # single-bit i_valid case
238 i_valid = self.i_valid
239
240 # when stage indicates not ready, incoming data
241 # must "appear" to be not ready too
242 if self.stage_ctl:
243 i_valid = i_valid & self.s_o_ready
244
245 return i_valid
246
247
248 class NextControl:
249 """ contains the signals that go *to* the next stage (both in and out)
250 * o_valid: output indicating to next stage that data is valid
251 * i_ready: input from next stage indicating that it can accept data
252 * o_data : an output - added by the user of this class
253 """
254 def __init__(self, stage_ctl=False):
255 self.stage_ctl = stage_ctl
256 self.o_valid = Signal(name="n_o_valid") # self out>> next
257 self.i_ready = Signal(name="n_i_ready") # self <<in next
258 self.o_data = None # XXX MUST BE ADDED BY USER
259 #if self.stage_ctl:
260 self.d_valid = Signal(reset=1) # INTERNAL (data valid)
261
262 @property
263 def i_ready_test(self):
264 if self.stage_ctl:
265 return self.i_ready & self.d_valid
266 return self.i_ready
267
268 def connect_to_next(self, nxt):
269 """ helper function to connect to the next stage data/valid/ready.
270 data/valid is passed *TO* nxt, and ready comes *IN* from nxt.
271 use this when connecting stage-to-stage
272 """
273 return [nxt.i_valid.eq(self.o_valid),
274 self.i_ready.eq(nxt.o_ready),
275 eq(nxt.i_data, self.o_data),
276 ]
277
278 def _connect_out(self, nxt, direct=False, fn=None):
279 """ internal helper function to connect stage to an output source.
280 do not use to connect stage-to-stage!
281 """
282 i_ready = nxt.i_ready if direct else nxt.i_ready_test
283 o_data = fn(nxt.o_data) if fn is not None else nxt.o_data
284 return [nxt.o_valid.eq(self.o_valid),
285 self.i_ready.eq(i_ready),
286 eq(o_data, self.o_data),
287 ]
288
289
290 class Visitor:
291 """ a helper routine which identifies if it is being passed a list
292 (or tuple) of objects, or signals, or Records, and calls
293 a visitor function.
294
295 the visiting fn is called when an object is identified.
296
297 Record is a special (unusual, recursive) case, where the input may be
298 specified as a dictionary (which may contain further dictionaries,
299 recursively), where the field names of the dictionary must match
300 the Record's field spec. Alternatively, an object with the same
301 member names as the Record may be assigned: it does not have to
302 *be* a Record.
303
304 ArrayProxy is also special-cased, it's a bit messy: whilst ArrayProxy
305 has an eq function, the object being assigned to it (e.g. a python
306 object) might not. despite the *input* having an eq function,
307 that doesn't help us, because it's the *ArrayProxy* that's being
308 assigned to. so.... we cheat. use the ports() function of the
309 python object, enumerate them, find out the list of Signals that way,
310 and assign them.
311 """
312 def visit(self, o, i, act):
313 if isinstance(o, dict):
314 return self.dict_visit(o, i, act)
315
316 res = act.prepare()
317 if not isinstance(o, Sequence):
318 o, i = [o], [i]
319 for (ao, ai) in zip(o, i):
320 #print ("visit", fn, ao, ai)
321 if isinstance(ao, Record):
322 rres = self.record_visit(ao, ai, act)
323 elif isinstance(ao, ArrayProxy) and not isinstance(ai, Value):
324 rres = self.arrayproxy_visit(ao, ai, act)
325 else:
326 rres = act.fn(ao, ai)
327 res += rres
328 return res
329
330 def dict_visit(self, o, i, act):
331 res = act.prepare()
332 for (k, v) in o.items():
333 print ("d-eq", v, i[k])
334 res.append(act.fn(v, i[k]))
335 return res
336
337 def record_visit(self, ao, ai, act):
338 res = act.prepare()
339 for idx, (field_name, field_shape, _) in enumerate(ao.layout):
340 if isinstance(field_shape, Layout):
341 val = ai.fields
342 else:
343 val = ai
344 if hasattr(val, field_name): # check for attribute
345 val = getattr(val, field_name)
346 else:
347 val = val[field_name] # dictionary-style specification
348 val = self.visit(ao.fields[field_name], val, act)
349 if isinstance(val, Sequence):
350 res += val
351 else:
352 res.append(val)
353 return res
354
355 def arrayproxy_visit(self, ao, ai, act):
356 res = act.prepare()
357 for p in ai.ports():
358 op = getattr(ao, p.name)
359 #print (op, p, p.name)
360             res.append(act.fn(op, p))
361 return res
362
363
364 class Eq(Visitor):
365 def __init__(self):
366 self.res = []
367 def prepare(self):
368 return []
369 def fn(self, o, i):
370 rres = o.eq(i)
371 if not isinstance(rres, Sequence):
372 rres = [rres]
373 return rres
374 def __call__(self, o, i):
375 return self.visit(o, i, self)
376
377
378 def eq(o, i):
379 """ makes signals equal: a helper routine which identifies if it is being
380 passed a list (or tuple) of objects, or signals, or Records, and calls
381 the objects' eq function.
382 """
383 return Eq()(o, i)
384
385
386 def flatten(i):
387 """ flattens a compound structure recursively using Cat
388 """
389 if not isinstance(i, Sequence):
390 i = [i]
391 res = []
392 for ai in i:
393 print ("flatten", ai)
394 if isinstance(ai, Record):
395 print ("record", list(ai.layout))
396 rres = []
397 for idx, (field_name, field_shape, _) in enumerate(ai.layout):
398 if isinstance(field_shape, Layout):
399 val = ai.fields
400 else:
401 val = ai
402 if hasattr(val, field_name): # check for attribute
403 val = getattr(val, field_name)
404 else:
405 val = val[field_name] # dictionary-style specification
406 print ("recidx", idx, field_name, field_shape, val)
407 val = flatten(val)
408 print ("recidx flat", idx, val)
409 if isinstance(val, Sequence):
410 rres += val
411 else:
412 rres.append(val)
413
414 elif isinstance(ai, ArrayProxy) and not isinstance(ai, Value):
415 rres = []
416 for p in ai.ports():
417 op = getattr(ai, p.name)
418 #print (op, p, p.name)
419 rres.append(flatten(p))
420 else:
421 rres = ai
422 if not isinstance(rres, Sequence):
423 rres = [rres]
424 res += rres
425 print ("flatten res", res)
426 return Cat(*res)
427
428
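# Example (a sketch): FIFOControl below uses flatten() to pack a processed
# result into a single value for the FIFO's "din" port, and - because the
# returned Cat() is assignable - to unpack "dout" again on the way out:
#
#     m.d.comb += eq(fifo.din, flatten(result))
#     m.d.comb += flatten(self.n.o_data).eq(fifo.dout)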
429
430 class StageCls(metaclass=ABCMeta):
431 """ Class-based "Stage" API. requires instantiation (after derivation)
432
433 see "Stage API" above.. Note: python does *not* require derivation
434 from this class. All that is required is that the pipelines *have*
435 the functions listed in this class. Derivation from this class
436 is therefore merely a "courtesy" to maintainers.
437 """
438 @abstractmethod
439 def ispec(self): pass # REQUIRED
440 @abstractmethod
441 def ospec(self): pass # REQUIRED
442 #@abstractmethod
443 #def setup(self, m, i): pass # OPTIONAL
444 @abstractmethod
445 def process(self, i): pass # REQUIRED
446
447
448 class Stage(metaclass=ABCMeta):
449 """ Static "Stage" API. does not require instantiation (after derivation)
450
451 see "Stage API" above. Note: python does *not* require derivation
452 from this class. All that is required is that the pipelines *have*
453 the functions listed in this class. Derivation from this class
454 is therefore merely a "courtesy" to maintainers.
455 """
456 @staticmethod
457 @abstractmethod
458 def ispec(): pass
459
460 @staticmethod
461 @abstractmethod
462 def ospec(): pass
463
464 #@staticmethod
465 #@abstractmethod
466 #def setup(m, i): pass
467
468 @staticmethod
469 @abstractmethod
470 def process(i): pass
471
472
473 class RecordBasedStage(Stage):
474 """ convenience class which provides a Records-based layout.
475 honestly it's a lot easier just to create a direct Records-based
476 class (see ExampleAddRecordStage)
477 """
478 def __init__(self, in_shape, out_shape, processfn, setupfn=None):
479 self.in_shape = in_shape
480 self.out_shape = out_shape
481 self.__process = processfn
482 self.__setup = setupfn
483 def ispec(self): return Record(self.in_shape)
484 def ospec(self): return Record(self.out_shape)
485     def process(self, i): return self.__process(i)
486     def setup(self, m, i): return self.__setup(m, i)
487
488
489 class StageChain(StageCls):
490 """ pass in a list of stages, and they will automatically be
491 chained together via their input and output specs into a
492 combinatorial chain.
493
494 the end result basically conforms to the exact same Stage API.
495
496 * input to this class will be the input of the first stage
497 * output of first stage goes into input of second
498 * output of second goes into input into third (etc. etc.)
499 * the output of this class will be the output of the last stage
500 """
501 def __init__(self, chain, specallocate=False):
502 self.chain = chain
503 self.specallocate = specallocate
504
505 def ispec(self):
506 return self.chain[0].ispec()
507
508 def ospec(self):
509 return self.chain[-1].ospec()
510
511 def _specallocate_setup(self, m, i):
512 for (idx, c) in enumerate(self.chain):
513 if hasattr(c, "setup"):
514 c.setup(m, i) # stage may have some module stuff
515 o = self.chain[idx].ospec() # last assignment survives
516 m.d.comb += eq(o, c.process(i)) # process input into "o"
517 if idx == len(self.chain)-1:
518 break
519 i = self.chain[idx+1].ispec() # new input on next loop
520 m.d.comb += eq(i, o) # assign to next input
521 return o # last loop is the output
522
523 def _noallocate_setup(self, m, i):
524 for (idx, c) in enumerate(self.chain):
525 if hasattr(c, "setup"):
526 c.setup(m, i) # stage may have some module stuff
527             i = o = c.process(i)        # process i, store result in "o"
528 return o # last loop is the output
529
530 def setup(self, m, i):
531 if self.specallocate:
532 self.o = self._specallocate_setup(m, i)
533 else:
534 self.o = self._noallocate_setup(m, i)
535
536 def process(self, i):
537 return self.o # conform to Stage API: return last-loop output
538
539
540 class ControlBase:
541 """ Common functions for Pipeline API
542 """
543 def __init__(self, stage=None, in_multi=None, stage_ctl=False):
544 """ Base class containing ready/valid/data to previous and next stages
545
546 * p: contains ready/valid to the previous stage
547 * n: contains ready/valid to the next stage
548
549             Except when calling ControlBase.connect(), user must also:
550 * add i_data member to PrevControl (p) and
551 * add o_data member to NextControl (n)
552 """
553 self.stage = stage
554
555 # set up input and output IO ACK (prev/next ready/valid)
556 self.p = PrevControl(in_multi, stage_ctl)
557 self.n = NextControl(stage_ctl)
558
559 # set up the input and output data
560 if stage is not None:
561 self.p.i_data = stage.ispec() # input type
562 self.n.o_data = stage.ospec()
563
564 def connect_to_next(self, nxt):
565 """ helper function to connect to the next stage data/valid/ready.
566 """
567 return self.n.connect_to_next(nxt.p)
568
569 def _connect_in(self, prev):
570 """ internal helper function to connect stage to an input source.
571 do not use to connect stage-to-stage!
572 """
573 return self.p._connect_in(prev.p)
574
575 def _connect_out(self, nxt):
576 """ internal helper function to connect stage to an output source.
577 do not use to connect stage-to-stage!
578 """
579 return self.n._connect_out(nxt.n)
580
581 def connect(self, pipechain):
582 """ connects a chain (list) of Pipeline instances together and
583 links them to this ControlBase instance:
584
585 in <----> self <---> out
586 | ^
587 v |
588 [pipe1, pipe2, pipe3, pipe4]
589 | ^ | ^ | ^
590 v | v | v |
591 out---in out--in out---in
592
593 Also takes care of allocating i_data/o_data, by looking up
594             the data spec for each end of the pipechain, i.e. it is NOT
595 necessary to allocate self.p.i_data or self.n.o_data manually:
596 this is handled AUTOMATICALLY, here.
597
598 Basically this function is the direct equivalent of StageChain,
599 except that unlike StageChain, the Pipeline logic is followed.
600
601 Just as StageChain presents an object that conforms to the
602 Stage API from a list of objects that also conform to the
603 Stage API, an object that calls this Pipeline connect function
604             has the exact same pipeline API as the list of pipeline objects
605 it is called with.
606
607 Thus it becomes possible to build up larger chains recursively.
608 More complex chains (multi-input, multi-output) will have to be
609 done manually.
610 """
611 eqs = [] # collated list of assignment statements
612
613 # connect inter-chain
614 for i in range(len(pipechain)-1):
615 pipe1 = pipechain[i]
616 pipe2 = pipechain[i+1]
617 eqs += pipe1.connect_to_next(pipe2)
618
619 # connect front of chain to ourselves
620 front = pipechain[0]
621 self.p.i_data = front.stage.ispec()
622 eqs += front._connect_in(self)
623
624 # connect end of chain to ourselves
625 end = pipechain[-1]
626 self.n.o_data = end.stage.ospec()
627 eqs += end._connect_out(self)
628
629 return eqs
630
631 def _postprocess(self, i):
632 if hasattr(self.stage, "postprocess"):
633 return self.stage.postprocess(i)
634 return i
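
    # Example (a sketch): a stage may optionally provide a postprocess()
    # method.  _postprocess() applies it to the processed result just
    # before the result is assigned to the output:
    #
    #     class ExamplePostStage:
    #         def ispec(self): return Signal(16)
    #         def ospec(self): return Signal(16)
    #         def process(self, i): return i + 1
    #         def postprocess(self, o): return o | 0x8000   # illustrative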
635
636 def set_input(self, i):
637 """ helper function to set the input data
638 """
639 return eq(self.p.i_data, i)
640
641 def ports(self):
642 res = [self.p.i_valid, self.n.i_ready,
643 self.n.o_valid, self.p.o_ready,
644 ]
645 if hasattr(self.p.i_data, "ports"):
646 res += self.p.i_data.ports()
647 else:
648 res += self.p.i_data
649 if hasattr(self.n.o_data, "ports"):
650 res += self.n.o_data.ports()
651 else:
652 res += self.n.o_data
653 return res
654
655 def _elaborate(self, platform):
656 """ handles case where stage has dynamic ready/valid functions
657 """
658 m = Module()
659
660 if self.stage is not None and hasattr(self.stage, "setup"):
661 self.stage.setup(m, self.p.i_data)
662
663 if not self.p.stage_ctl:
664 return m
665
666 # intercept the previous (outgoing) "ready", combine with stage ready
667 m.d.comb += self.p.s_o_ready.eq(self.p._o_ready & self.stage.d_ready)
668
669 # intercept the next (incoming) "ready" and combine it with data valid
670 sdv = self.stage.d_valid(self.n.i_ready)
671 m.d.comb += self.n.d_valid.eq(self.n.i_ready & sdv)
672
673 return m
674
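# Example (a sketch): a stage used with stage_ctl=True (dynamic ready/valid,
# see ControlBase._elaborate above) additionally provides d_ready (a Signal
# combined into the outgoing ready) and d_valid() (combined with the
# incoming ready):
#
#     class ExampleStageCtl:
#         d_ready = Signal(reset=1)                     # stage can accept data
#         def ispec(self): return Signal(16)
#         def ospec(self): return Signal(16)
#         def process(self, i): return i + 1
#         def d_valid(self, i_ready): return Const(1)   # result always valid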
675
676 class BufferedHandshake(ControlBase):
677 """ buffered pipeline stage. data and strobe signals travel in sync.
678 if ever the input is ready and the output is not, processed data
679 is shunted in a temporary register.
680
681 Argument: stage. see Stage API above
682
683 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
684 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
685 stage-1 p.i_data >>in stage n.o_data out>> stage+1
686 | |
687 process --->----^
688 | |
689 +-- r_data ->-+
690
691 input data p.i_data is read (only), is processed and goes into an
692 intermediate result store [process()]. this is updated combinatorially.
693
694 in a non-stall condition, the intermediate result will go into the
695 output (update_output). however if ever there is a stall, it goes
696 into r_data instead [update_buffer()].
697
698 when the non-stall condition is released, r_data is the first
699 to be transferred to the output [flush_buffer()], and the stall
700 condition cleared.
701
702 on the next cycle (as long as stall is not raised again) the
703 input may begin to be processed and transferred directly to output.
704 """
705
706 def elaborate(self, platform):
707 self.m = ControlBase._elaborate(self, platform)
708
709 result = self.stage.ospec()
710 r_data = self.stage.ospec()
711
712 # establish some combinatorial temporaries
713 o_n_validn = Signal(reset_less=True)
714 n_i_ready = Signal(reset_less=True, name="n_i_rdy_data")
715 nir_por = Signal(reset_less=True)
716 nir_por_n = Signal(reset_less=True)
717 p_i_valid = Signal(reset_less=True)
718 nir_novn = Signal(reset_less=True)
719 nirn_novn = Signal(reset_less=True)
720 por_pivn = Signal(reset_less=True)
721 npnn = Signal(reset_less=True)
722 self.m.d.comb += [p_i_valid.eq(self.p.i_valid_test),
723 o_n_validn.eq(~self.n.o_valid),
724 n_i_ready.eq(self.n.i_ready_test),
725 nir_por.eq(n_i_ready & self.p._o_ready),
726 nir_por_n.eq(n_i_ready & ~self.p._o_ready),
727 nir_novn.eq(n_i_ready | o_n_validn),
728 nirn_novn.eq(~n_i_ready & o_n_validn),
729 npnn.eq(nir_por | nirn_novn),
730 por_pivn.eq(self.p._o_ready & ~p_i_valid)
731 ]
732
733 # store result of processing in combinatorial temporary
734 self.m.d.comb += eq(result, self.stage.process(self.p.i_data))
735
736 # if not in stall condition, update the temporary register
737 with self.m.If(self.p.o_ready): # not stalled
738 self.m.d.sync += eq(r_data, result) # update buffer
739
740 # data pass-through conditions
741 with self.m.If(npnn):
742 o_data = self._postprocess(result)
743 self.m.d.sync += [self.n.o_valid.eq(p_i_valid), # valid if p_valid
744 eq(self.n.o_data, o_data), # update output
745 ]
746 # buffer flush conditions (NOTE: can override data passthru conditions)
747 with self.m.If(nir_por_n): # not stalled
748 # Flush the [already processed] buffer to the output port.
749 o_data = self._postprocess(r_data)
750 self.m.d.sync += [self.n.o_valid.eq(1), # reg empty
751 eq(self.n.o_data, o_data), # flush buffer
752 ]
753 # output ready conditions
754 self.m.d.sync += self.p._o_ready.eq(nir_novn | por_pivn)
755
756 return self.m
757
758
759 class SimpleHandshake(ControlBase):
760 """ simple handshake control. data and strobe signals travel in sync.
761 implements the protocol used by Wishbone and AXI4.
762
763 Argument: stage. see Stage API above
764
765 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
766 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
767 stage-1 p.i_data >>in stage n.o_data out>> stage+1
768 | |
769 +--process->--^
770 Truth Table
771
772 Inputs Temporary Output Data
773 ------- ---------- ----- ----
774 P P N N PiV& ~NiR& N P
775 i o i o PoR NoV o o
776 V R R V V R
777
778 ------- - - - -
779 0 0 0 0 0 0 >0 0 reg
780 0 0 0 1 0 1 >1 0 reg
781 0 0 1 0 0 0 0 1 process(i_data)
782 0 0 1 1 0 0 0 1 process(i_data)
783 ------- - - - -
784 0 1 0 0 0 0 >0 0 reg
785 0 1 0 1 0 1 >1 0 reg
786 0 1 1 0 0 0 0 1 process(i_data)
787 0 1 1 1 0 0 0 1 process(i_data)
788 ------- - - - -
789 1 0 0 0 0 0 >0 0 reg
790 1 0 0 1 0 1 >1 0 reg
791 1 0 1 0 0 0 0 1 process(i_data)
792 1 0 1 1 0 0 0 1 process(i_data)
793 ------- - - - -
794 1 1 0 0 1 0 1 0 process(i_data)
795 1 1 0 1 1 1 1 0 process(i_data)
796 1 1 1 0 1 0 1 1 process(i_data)
797 1 1 1 1 1 0 1 1 process(i_data)
798 ------- - - - -
799 """
800
801 def elaborate(self, platform):
802 self.m = m = ControlBase._elaborate(self, platform)
803
804 r_busy = Signal()
805 result = self.stage.ospec()
806
807 # establish some combinatorial temporaries
808 n_i_ready = Signal(reset_less=True, name="n_i_rdy_data")
809 p_i_valid_p_o_ready = Signal(reset_less=True)
810 p_i_valid = Signal(reset_less=True)
811 m.d.comb += [p_i_valid.eq(self.p.i_valid_test),
812 n_i_ready.eq(self.n.i_ready_test),
813 p_i_valid_p_o_ready.eq(p_i_valid & self.p.o_ready),
814 ]
815
816 # store result of processing in combinatorial temporary
817 m.d.comb += eq(result, self.stage.process(self.p.i_data))
818
819 # previous valid and ready
820 with m.If(p_i_valid_p_o_ready):
821 o_data = self._postprocess(result)
822 m.d.sync += [r_busy.eq(1), # output valid
823 eq(self.n.o_data, o_data), # update output
824 ]
825 # previous invalid or not ready, however next is accepting
826 with m.Elif(n_i_ready):
827 o_data = self._postprocess(result)
828 m.d.sync += [eq(self.n.o_data, o_data)]
829 # TODO: could still send data here (if there was any)
830 #m.d.sync += self.n.o_valid.eq(0) # ...so set output invalid
831 m.d.sync += r_busy.eq(0) # ...so set output invalid
832
833 m.d.comb += self.n.o_valid.eq(r_busy)
834 # if next is ready, so is previous
835 m.d.comb += self.p._o_ready.eq(n_i_ready)
836
837 return self.m
838
839
840 class UnbufferedPipeline(ControlBase):
841 """ A simple pipeline stage with single-clock synchronisation
842 and two-way valid/ready synchronised signalling.
843
844 Note that a stall in one stage will result in the entire pipeline
845 chain stalling.
846
847         Note also that unlike BufferedHandshake, the valid/ready signalling does NOT
848 travel synchronously with the data: the valid/ready signalling
849 combines in a *combinatorial* fashion. Therefore, a long pipeline
850 chain will lengthen propagation delays.
851
852 Argument: stage. see Stage API, above
853
854 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
855 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
856 stage-1 p.i_data >>in stage n.o_data out>> stage+1
857 | |
858 r_data result
859 | |
860 +--process ->-+
861
862 Attributes:
863 -----------
864 p.i_data : StageInput, shaped according to ispec
865 The pipeline input
866         n.o_data : StageOutput, shaped according to ospec
867 The pipeline output
868         r_data : output_shape according to ospec
869             The processed result of a prior (valid) input.
870 This is HELD if the output is not ready. It is updated
871 SYNCHRONOUSLY.
872 result: output_shape according to ospec
873 The output of the combinatorial logic. it is updated
874 COMBINATORIALLY (no clock dependence).
875
876 Truth Table
877
878 Inputs Temp Output Data
879 ------- - ----- ----
880 P P N N ~NiR& N P
881 i o i o NoV o o
882 V R R V V R
883
884 ------- - - -
885 0 0 0 0 0 0 1 reg
886 0 0 0 1 1 1 0 reg
887 0 0 1 0 0 0 1 reg
888 0 0 1 1 0 0 1 reg
889 ------- - - -
890 0 1 0 0 0 0 1 reg
891 0 1 0 1 1 1 0 reg
892 0 1 1 0 0 0 1 reg
893 0 1 1 1 0 0 1 reg
894 ------- - - -
895 1 0 0 0 0 1 1 reg
896 1 0 0 1 1 1 0 reg
897 1 0 1 0 0 1 1 reg
898 1 0 1 1 0 1 1 reg
899 ------- - - -
900 1 1 0 0 0 1 1 process(i_data)
901 1 1 0 1 1 1 0 process(i_data)
902 1 1 1 0 0 1 1 process(i_data)
903 1 1 1 1 0 1 1 process(i_data)
904 ------- - - -
905
906 Note: PoR is *NOT* involved in the above decision-making.
907 """
908
909 def elaborate(self, platform):
910 self.m = m = ControlBase._elaborate(self, platform)
911
912 data_valid = Signal() # is data valid or not
913 r_data = self.stage.ospec() # output type
914
915 # some temporaries
916 p_i_valid = Signal(reset_less=True)
917 pv = Signal(reset_less=True)
918 buf_full = Signal(reset_less=True)
919 m.d.comb += p_i_valid.eq(self.p.i_valid_test)
920 m.d.comb += pv.eq(self.p.i_valid & self.p.o_ready)
921 m.d.comb += buf_full.eq(~self.n.i_ready_test & data_valid)
922
923 m.d.comb += self.n.o_valid.eq(data_valid)
924 m.d.comb += self.p._o_ready.eq(~data_valid | self.n.i_ready_test)
925 m.d.sync += data_valid.eq(p_i_valid | buf_full)
926
927 with m.If(pv):
928 m.d.sync += eq(r_data, self.stage.process(self.p.i_data))
929 o_data = self._postprocess(r_data)
930 m.d.comb += eq(self.n.o_data, o_data)
931
932 return self.m
933
934
935 class UnbufferedPipeline2(ControlBase):
936 """ A simple pipeline stage with single-clock synchronisation
937 and two-way valid/ready synchronised signalling.
938
939 Note that a stall in one stage will result in the entire pipeline
940 chain stalling.
941
942         Note also that unlike BufferedHandshake, the valid/ready signalling does NOT
943 travel synchronously with the data: the valid/ready signalling
944 combines in a *combinatorial* fashion. Therefore, a long pipeline
945 chain will lengthen propagation delays.
946
947 Argument: stage. see Stage API, above
948
949 stage-1 p.i_valid >>in stage n.o_valid out>> stage+1
950 stage-1 p.o_ready <<out stage n.i_ready <<in stage+1
951 stage-1 p.i_data >>in stage n.o_data out>> stage+1
952 | | |
953 +- process-> buf <-+
954 Attributes:
955 -----------
956 p.i_data : StageInput, shaped according to ispec
957 The pipeline input
958         n.o_data : StageOutput, shaped according to ospec
959 The pipeline output
960 buf : output_shape according to ospec
961 A temporary (buffered) copy of a valid output
962 This is HELD if the output is not ready. It is updated
963 SYNCHRONOUSLY.
964
965 Inputs Temp Output Data
966 ------- - -----
967 P P N N ~NiR& N P (buf_full)
968 i o i o NoV o o
969 V R R V V R
970
971 ------- - - -
972 0 0 0 0 0 0 1 process(i_data)
973 0 0 0 1 1 1 0 reg (odata, unchanged)
974 0 0 1 0 0 0 1 process(i_data)
975 0 0 1 1 0 0 1 process(i_data)
976 ------- - - -
977 0 1 0 0 0 0 1 process(i_data)
978 0 1 0 1 1 1 0 reg (odata, unchanged)
979 0 1 1 0 0 0 1 process(i_data)
980 0 1 1 1 0 0 1 process(i_data)
981 ------- - - -
982 1 0 0 0 0 1 1 process(i_data)
983 1 0 0 1 1 1 0 reg (odata, unchanged)
984 1 0 1 0 0 1 1 process(i_data)
985 1 0 1 1 0 1 1 process(i_data)
986 ------- - - -
987 1 1 0 0 0 1 1 process(i_data)
988 1 1 0 1 1 1 0 reg (odata, unchanged)
989 1 1 1 0 0 1 1 process(i_data)
990 1 1 1 1 0 1 1 process(i_data)
991 ------- - - -
992
993 Note: PoR is *NOT* involved in the above decision-making.
994 """
995
996 def elaborate(self, platform):
997 self.m = m = ControlBase._elaborate(self, platform)
998
999         buf_full = Signal() # is the buffer full or not
1000 buf = self.stage.ospec() # output type
1001
1002 # some temporaries
1003 p_i_valid = Signal(reset_less=True)
1004 m.d.comb += p_i_valid.eq(self.p.i_valid_test)
1005
1006 m.d.comb += self.n.o_valid.eq(buf_full | p_i_valid)
1007 m.d.comb += self.p._o_ready.eq(~buf_full)
1008 m.d.sync += buf_full.eq(~self.n.i_ready_test & self.n.o_valid)
1009
1010 o_data = Mux(buf_full, buf, self.stage.process(self.p.i_data))
1011        # apply optional post-processing (see ControlBase._postprocess)
1012        o_data = self._postprocess(o_data)
1013 m.d.comb += eq(self.n.o_data, o_data)
1014 m.d.sync += eq(buf, self.n.o_data)
1015
1016 return self.m
1017
1018
1019 class PassThroughStage(StageCls):
1020 """ a pass-through stage which has its input data spec equal to its output,
1021 and "passes through" its data from input to output.
1022 """
1023 def __init__(self, iospecfn):
1024 self.iospecfn = iospecfn
1025 def ispec(self): return self.iospecfn()
1026 def ospec(self): return self.iospecfn()
1027 def process(self, i): return i
1028
1029
1030 class PassThroughHandshake(ControlBase):
1031 """ A control block that delays by one clock cycle.
1032
1033 Inputs Temporary Output Data
1034 ------- ------------------ ----- ----
1035 P P N N PiV& PiV| NiR| pvr N P (pvr)
1036 i o i o PoR ~PoR ~NoV o o
1037 V R R V V R
1038
1039 ------- - - - - - -
1040 0 0 0 0 0 1 1 0 1 1 odata (unchanged)
1041 0 0 0 1 0 1 0 0 1 0 odata (unchanged)
1042 0 0 1 0 0 1 1 0 1 1 odata (unchanged)
1043 0 0 1 1 0 1 1 0 1 1 odata (unchanged)
1044 ------- - - - - - -
1045 0 1 0 0 0 0 1 0 0 1 odata (unchanged)
1046 0 1 0 1 0 0 0 0 0 0 odata (unchanged)
1047 0 1 1 0 0 0 1 0 0 1 odata (unchanged)
1048 0 1 1 1 0 0 1 0 0 1 odata (unchanged)
1049 ------- - - - - - -
1050 1 0 0 0 0 1 1 1 1 1 process(in)
1051 1 0 0 1 0 1 0 0 1 0 odata (unchanged)
1052 1 0 1 0 0 1 1 1 1 1 process(in)
1053 1 0 1 1 0 1 1 1 1 1 process(in)
1054 ------- - - - - - -
1055 1 1 0 0 1 1 1 1 1 1 process(in)
1056 1 1 0 1 1 1 0 0 1 0 odata (unchanged)
1057 1 1 1 0 1 1 1 1 1 1 process(in)
1058 1 1 1 1 1 1 1 1 1 1 process(in)
1059 ------- - - - - - -
1060
1061 """
1062
1063 def elaborate(self, platform):
1064 self.m = m = ControlBase._elaborate(self, platform)
1065
1066 r_data = self.stage.ospec() # output type
1067
1068 # temporaries
1069 p_i_valid = Signal(reset_less=True)
1070 pvr = Signal(reset_less=True)
1071 m.d.comb += p_i_valid.eq(self.p.i_valid_test)
1072 m.d.comb += pvr.eq(p_i_valid & self.p.o_ready)
1073
1074 m.d.comb += self.p.o_ready.eq(~self.n.o_valid | self.n.i_ready_test)
1075 m.d.sync += self.n.o_valid.eq(p_i_valid | ~self.p.o_ready)
1076
1077 odata = Mux(pvr, self.stage.process(self.p.i_data), r_data)
1078 m.d.sync += eq(r_data, odata)
1079        # apply optional post-processing (see ControlBase._postprocess)
1080        r_data = self._postprocess(r_data)
1081 m.d.comb += eq(self.n.o_data, r_data)
1082
1083 return m
1084
1085
1086 class RegisterPipeline(UnbufferedPipeline):
1087 """ A pipeline stage that delays by one clock cycle, creating a
1088 sync'd latch out of o_data and o_valid as an indirect byproduct
1089 of using PassThroughStage
1090 """
1091 def __init__(self, iospecfn):
1092 UnbufferedPipeline.__init__(self, PassThroughStage(iospecfn))
1093
1094
1095 class FIFOControl(ControlBase):
1096 """ FIFO Control. Uses SyncFIFO to store data, coincidentally
1097 happens to have same valid/ready signalling as Stage API.
1098
1099 i_data -> fifo.din -> FIFO -> fifo.dout -> o_data
1100 """
1101
1102 def __init__(self, depth, stage, fwft=True, buffered=False):
1103 """ FIFO Control
1104
1105 * depth: number of entries in the FIFO
1106 * stage: data processing block
1107 * fwft : first word fall-thru mode (non-fwft introduces delay)
1108 * buffered: use buffered FIFO (introduces extra cycle delay)
1109
1110 NOTE 1: FPGAs may have trouble with the defaults for SyncFIFO
1111 (fwft=True, buffered=False)
1112
1113 NOTE 2: i_data *must* have a shape function. it can therefore
1114 be a Signal, or a Record, or a RecordObject.
1115
1116 data is processed (and located) as follows:
1117
1118 self.p self.stage temp fn temp fn temp fp self.n
1119 i_data->process()->result->flatten->din.FIFO.dout->flatten(o_data)
1120
1121 yes, really: flatten produces a Cat() which can be assigned to.
1122 this is how the FIFO gets de-flattened without needing a de-flatten
1123 function
1124 """
1125
1126 assert not (fwft and buffered), "buffered cannot do fwft"
1127 if buffered:
1128 depth += 1
1129 self.fwft = fwft
1130 self.buffered = buffered
1131 self.fdepth = depth
1132 ControlBase.__init__(self, stage=stage)
1133
1134 def elaborate(self, platform):
1135 self.m = m = ControlBase._elaborate(self, platform)
1136
1137 # make a FIFO with a signal of equal width to the o_data.
1138 (fwidth, _) = self.n.o_data.shape()
1139 if self.buffered:
1140 fifo = SyncFIFOBuffered(fwidth, self.fdepth)
1141 else:
1142 fifo = SyncFIFO(fwidth, self.fdepth, fwft=self.fwft)
1143 m.submodules.fifo = fifo
1144
1145 # store result of processing in combinatorial temporary
1146 result = self.stage.ospec()
1147 m.d.comb += eq(result, self.stage.process(self.p.i_data))
1148
1149 # connect previous rdy/valid/data - do flatten on i_data
1150 # NOTE: cannot do the PrevControl-looking trick because
1151 # of need to process the data. shaaaame....
1152 m.d.comb += [fifo.we.eq(self.p.i_valid_test),
1153 self.p.o_ready.eq(fifo.writable),
1154 eq(fifo.din, flatten(result)),
1155 ]
1156
1157 # connect next rdy/valid/data - do flatten on o_data
1158 connections = [self.n.o_valid.eq(fifo.readable),
1159 fifo.re.eq(self.n.i_ready_test),
1160 ]
1161 if self.fwft or self.buffered:
1162 m.d.comb += connections
1163 else:
1164            m.d.sync += connections # non-fwft (unbuffered) mode needs sync
1165 o_data = flatten(self.n.o_data).eq(fifo.dout)
1166 if hasattr(self.stage, "postprocess"):
1167 o_data = self.stage.postprocess(o_data)
1168 m.d.comb += o_data
1169
1170 return m