""" Pipeline and BufferedPipeline implementation, conforming to the same API.
    For multi-input and multi-output variants, see multipipe.

    eq:
    --

    a strategically very important function that is identical in function
    to nmigen's Signal.eq function, except it may take objects, or a list
    of objects, or a tuple of objects, and where objects may also be
    Records.

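    For example (a hypothetical sketch, not part of this module: a1, a2,
    b1 and b2 are assumed to be Signals of matching width), eq may be
    handed tuples and the resulting assignments added to a module's
    combinatorial domain:

        m = Module()
        a1, a2 = Signal(16), Signal(16)
        b1, b2 = Signal(16), Signal(16)
        m.d.comb += eq((a1, a2), (b1, b2))  # returns a list of assignments
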
    Stage API:
    ---------

    a stage requires compliance with a strict API that may be
    implemented in several ways, including as a static class.
    the methods of a stage instance must be as follows (an example
    sketch follows this section):

    * ispec() - Input data format specification
                returns an object or a list or tuple of objects, or
                a Record, each object having an "eq" function which
                takes responsibility for copying by assignment all
                sub-objects
    * ospec() - Output data format specification
                requirements as for ispec
    * process(i) - Processes an ispec-formatted object
                returns a combinatorial block of a result that
                may be assigned to the output, by way of the "eq"
                function
    * setup(m, i) - Optional function for setting up submodules
                may be used for more complex stages, to link
                the input (i) to submodules.  must take responsibility
                for adding those submodules to the module (m).
                the submodules must be combinatorial blocks and
                must have their inputs and output linked combinatorially.

    Both StageCls (for use with non-static classes) and Stage (for use
    by static classes) are abstract classes from which, for convenience
    and as a courtesy to other developers, anything conforming to the
    Stage API may *choose* to derive.

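    As an illustrative sketch only (this example class is hypothetical and
    not part of this module), a minimal static-class stage that adds two
    16-bit numbers could look like this:

        class ExampleAddStage:
            @staticmethod
            def ispec():
                return (Signal(16, name="a"), Signal(16, name="b"))
            @staticmethod
            def ospec():
                return Signal(16, name="add_o")
            @staticmethod
            def process(i):
                return i[0] + i[1]
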
    StageChain:
    ----------

    A useful combinatorial wrapper around stages that chains them together
    and then presents a Stage-API-conformant interface.  By presenting
    the same API as the stages it wraps, it can clearly be used recursively.

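    Usage is, as a hedged sketch (StageA and StageB are assumed here to be
    Stage-API-conformant classes, not ones defined in this module):

        m = Module()
        chain = StageChain([StageA(), StageB()])
        i = chain.ispec()     # input spec of StageA
        chain.setup(m, i)     # chains the stages combinatorially
        o = chain.process(i)  # output of StageB (the last stage)
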
    RecordBasedStage:
    ----------------

    A convenience class that takes an input shape, an output shape, a
    "processing" function and an optional "setup" function.  Honestly
    though, it is not much more effort to just create a class
    that returns a couple of Records (see ExampleAddRecordStage in
    examples).

    PassThroughStage:
    ----------------

    A convenience class that takes a single function as a parameter:
    that function is called to create the exact same input and output spec.
    It has a process() function that simply returns its input.

    Instances of this class are completely redundant if handed to
    StageChain, however when passed to UnbufferedPipeline they
    can be used to introduce a single clock delay.

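    For example (a sketch under the assumption that a 16-bit Signal is the
    desired data format; neither name below is defined in this module):

        def iospecfn():
            return Signal(16, name="data")

        pass_thru = PassThroughStage(iospecfn)
        delay_one_clock = UnbufferedPipeline(pass_thru)
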
    ControlBase:
    -----------

    The base class for pipelines.  Contains previous and next ready/valid/data.
    Also has an extremely useful "connect" function that can be used to
    connect a chain of pipelines and present the exact same prev/next
    ready/valid/data API.

    UnbufferedPipeline:
    ------------------

    A simple stalling clock-synchronised pipeline that has no buffering
    (unlike BufferedPipeline).  Data flows on *every* clock cycle when
    the conditions are right (this is nominally when the input is valid
    and the output is ready).

    A stall anywhere along the line will result in a stall back-propagating
    down the entire chain.  The BufferedPipeline by contrast will buffer
    incoming data, allowing previous stages one clock cycle's grace before
    also having to stall.

    An advantage of the UnbufferedPipeline over the Buffered one is
    that the amount of logic needed (number of gates) is greatly
    reduced (no second set of buffers, basically).

    The disadvantage of the UnbufferedPipeline is that the valid/ready
    logic, if chained together, is *combinatorial*, resulting in
    progressively larger gate delay.

    RegisterPipeline:
    ----------------

    A convenience class: because UnbufferedPipeline introduces a single
    clock delay, using a PassThroughStage as its stage results in a
    pipeline stage that simply delays its (unmodified) input by one
    clock cycle.

    BufferedPipeline:
    ----------------

    nmigen implementation of a buffered pipeline stage, based on zipcpu:
    https://zipcpu.com/blog/2017/08/14/strategies-for-pipelining.html

    this module requires quite a bit of thought to understand how it works
    (and why it is needed in the first place).  reading the above is
    *strongly* recommended.

    unlike john dawson's IEEE754 FPU STB/ACK signalling, which requires
    the STB / ACK signals to raise and lower (on separate clocks) before
    data may proceed (thus only allowing one piece of data to proceed
    on *ALTERNATE* cycles), the signalling here is a true pipeline
    where data will flow on *every* clock when the conditions are right.

    input acceptance conditions are when:
    * incoming previous-stage strobe (p.i_valid) is HIGH
    * outgoing previous-stage ready (p.o_ready) is HIGH

    output transmission conditions are when:
    * outgoing next-stage strobe (n.o_valid) is HIGH
    * incoming next-stage ready (n.i_ready) is HIGH

    the tricky bit is when the input has valid data and the output is not
    ready to accept it.  if it wasn't for the clock synchronisation, it
    would be possible to tell the input "hey don't send that data, we're
    not ready".  unfortunately, it's not possible to "change the past":
    the previous stage *has no choice* but to pass on its data.

    therefore, the incoming data *must* be accepted - and stored: that
    is the responsibility / contract that this stage *must* honour.
    on the same clock, it's possible to tell the input that it must
    not send any more data.  this is the "stall" condition.

    we now effectively have *two* possible pieces of data to "choose" from:
    the buffered data, and the incoming data.  the decision as to which
    to process and output is based on whether we are in "stall" or not.
    i.e. when the next stage is no longer ready, the output comes from
    the buffer if a stall had previously occurred, otherwise it comes
    direct from processing the input.

    this allows us to respect a synchronous "travelling STB" with what
    dan calls a "buffered handshake".

    it's quite a complex state machine!
"""

from nmigen import Signal, Cat, Const, Mux, Module, Value
from nmigen.cli import verilog, rtlil
from nmigen.hdl.ast import ArrayProxy
from nmigen.hdl.rec import Record, Layout

from abc import ABCMeta, abstractmethod
from collections.abc import Sequence


class PrevControl:
    """ contains signals that come *from* the previous stage (both in and out)
        * i_valid: previous stage indicating all incoming data is valid.
                   may be a multi-bit signal, where all bits are required
                   to be asserted to indicate "valid".
        * o_ready: output to previous stage indicating readiness to accept data
        * i_data : an input - added by the user of this class
    """

    def __init__(self, i_width=1):
        self.i_valid = Signal(i_width, name="p_i_valid") # prev   >>in  self
        self.o_ready = Signal(name="p_o_ready")          # prev   <<out self
        self.i_data = None # XXX MUST BE ADDED BY USER

    def _connect_in(self, prev):
        """ internal helper function to connect stage to an input source.
            do not use to connect stage-to-stage!
        """
        return [self.i_valid.eq(prev.i_valid),
                prev.o_ready.eq(self.o_ready),
                eq(self.i_data, prev.i_data),
               ]

    def i_valid_logic(self):
        vlen = len(self.i_valid)
        if vlen > 1: # multi-bit case: valid only when i_valid is all 1s
            all1s = Const(-1, (len(self.i_valid), False))
            return self.i_valid == all1s
        # single-bit i_valid case
        return self.i_valid


class NextControl:
    """ contains the signals that go *to* the next stage (both in and out)
        * o_valid: output indicating to next stage that data is valid
        * i_ready: input from next stage indicating that it can accept data
        * o_data : an output - added by the user of this class
    """
    def __init__(self):
        self.o_valid = Signal(name="n_o_valid") # self out>>  next
        self.i_ready = Signal(name="n_i_ready") # self <<in   next
        self.o_data = None # XXX MUST BE ADDED BY USER

    def connect_to_next(self, nxt):
        """ helper function to connect to the next stage data/valid/ready.
            data/valid is passed *TO* nxt, and ready comes *IN* from nxt.
            use this when connecting stage-to-stage
        """
        return [nxt.i_valid.eq(self.o_valid),
                self.i_ready.eq(nxt.o_ready),
                eq(nxt.i_data, self.o_data),
               ]

    def _connect_out(self, nxt):
        """ internal helper function to connect stage to an output source.
            do not use to connect stage-to-stage!
        """
        return [nxt.o_valid.eq(self.o_valid),
                self.i_ready.eq(nxt.i_ready),
                eq(nxt.o_data, self.o_data),
               ]


def eq(o, i):
    """ makes signals equal: a helper routine which identifies if it is being
        passed a list (or tuple) of objects, or signals, or Records, and calls
        the objects' eq function.

        complex objects (classes) can be used: they must follow the
        convention of having an eq member function, which takes the
        responsibility of further calling eq and returning a list of
        eq assignments

        Record is a special (unusual, recursive) case, where the input may be
        specified as a dictionary (which may contain further dictionaries,
        recursively), where the field names of the dictionary must match
        the Record's field spec.  Alternatively, an object with the same
        member names as the Record may be assigned: it does not have to
        *be* a Record.

        ArrayProxy is also special-cased, and it's a bit messy: whilst
        ArrayProxy has an eq function, the object being assigned to it
        (e.g. a python object) might not.  despite the *input* having an
        eq function, that doesn't help us, because it's the *ArrayProxy*
        that's being assigned to.  so.... we cheat: use the ports()
        function of the python object, enumerate them, find out the list
        of Signals that way, and assign them.
    """
    if not isinstance(o, Sequence):
        o, i = [o], [i]
    res = []
    for (ao, ai) in zip(o, i):
        #print ("eq", ao, ai)
        if isinstance(ao, Record):
            for idx, (field_name, field_shape, _) in enumerate(ao.layout):
                if isinstance(field_shape, Layout): # nested layout: descend
                    val = ai.fields
                else:
                    val = ai
                if hasattr(val, field_name): # check for attribute
                    val = getattr(val, field_name)
                else:
                    val = val[field_name] # dictionary-style specification
                rres = eq(ao.fields[field_name], val)
                res += rres
        elif isinstance(ao, ArrayProxy) and not isinstance(ai, Value):
            for p in ai.ports():
                op = getattr(ao, p.name)
                #print (op, p, p.name)
                rres = op.eq(p)
                if not isinstance(rres, Sequence):
                    rres = [rres]
                res += rres
        else:
            rres = ao.eq(ai)
            if not isinstance(rres, Sequence):
                rres = [rres]
            res += rres
    return res


class StageCls(metaclass=ABCMeta):
    """ Class-based "Stage" API.  requires instantiation (after derivation)

        see "Stage API" above.  Note: python does *not* require derivation
        from this class.  All that is required is that the pipelines *have*
        the functions listed in this class.  Derivation from this class
        is therefore merely a "courtesy" to maintainers.
    """
    @abstractmethod
    def ispec(self): pass        # REQUIRED
    @abstractmethod
    def ospec(self): pass        # REQUIRED
    #@abstractmethod
    #def setup(self, m, i): pass # OPTIONAL
    @abstractmethod
    def process(self, i): pass   # REQUIRED


class Stage(metaclass=ABCMeta):
    """ Static "Stage" API.  does not require instantiation (after derivation)

        see "Stage API" above.  Note: python does *not* require derivation
        from this class.  All that is required is that the pipelines *have*
        the functions listed in this class.  Derivation from this class
        is therefore merely a "courtesy" to maintainers.
    """
    @staticmethod
    @abstractmethod
    def ispec(): pass

    @staticmethod
    @abstractmethod
    def ospec(): pass

    #@staticmethod
    #@abstractmethod
    #def setup(m, i): pass

    @staticmethod
    @abstractmethod
    def process(i): pass


class RecordBasedStage(Stage):
    """ convenience class which provides a Records-based layout.
        honestly it's a lot easier just to create a direct Records-based
        class (see ExampleAddRecordStage)
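
        A hedged usage sketch (the layouts and the lambda below are
        illustrative assumptions, not definitions from this module):

            stage = RecordBasedStage([('op_in', 16)], [('op_out', 16)],
                                     lambda i: {'op_out': i.op_in * 2})
            pipe = UnbufferedPipeline(stage)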
    """
    def __init__(self, in_shape, out_shape, processfn, setupfn=None):
        self.in_shape = in_shape
        self.out_shape = out_shape
        self.__process = processfn
        self.__setup = setupfn
    def ispec(self): return Record(self.in_shape)
    def ospec(self): return Record(self.out_shape)
    def process(self, i): return self.__process(i)
    def setup(self, m, i): return self.__setup(m, i)


class StageChain(StageCls):
    """ pass in a list of stages, and they will automatically be
        chained together via their input and output specs into a
        combinatorial chain.

        the end result basically conforms to the exact same Stage API.

        * input to this class will be the input of the first stage
        * output of first stage goes into input of second
        * output of second goes into input of third (etc. etc.)
        * the output of this class will be the output of the last stage
    """
    def __init__(self, chain):
        self.chain = chain

    def ispec(self):
        return self.chain[0].ispec()

    def ospec(self):
        return self.chain[-1].ospec()

    def setup(self, m, i):
        for (idx, c) in enumerate(self.chain):
            if hasattr(c, "setup"):
                c.setup(m, i)                  # stage may have some module stuff
            o = self.chain[idx].ospec()        # only the last assignment survives
            m.d.comb += eq(o, c.process(i))    # process input into "o"
            if idx != len(self.chain)-1:
                ni = self.chain[idx+1].ispec() # becomes new input on next loop
                m.d.comb += eq(ni, o)          # assign output to next input
                i = ni
        self.o = o                             # last loop is the output

    def process(self, i):
        return self.o # conform to Stage API: return last-loop output


class ControlBase:
    """ Common functions for Pipeline API
    """
    def __init__(self, in_multi=None):
        """ Base class containing ready/valid/data to previous and next stages

            * p: contains ready/valid to the previous stage
            * n: contains ready/valid to the next stage

            Except when calling ControlBase.connect(), the user must also:
            * add an i_data member to PrevControl (p) and
            * add an o_data member to NextControl (n)
        """

        # set up input and output IO ACK (prev/next ready/valid)
        self.p = PrevControl(in_multi)
        self.n = NextControl()

    def connect_to_next(self, nxt):
        """ helper function to connect to the next stage data/valid/ready.
        """
        return self.n.connect_to_next(nxt.p)

    def _connect_in(self, prev):
        """ internal helper function to connect stage to an input source.
            do not use to connect stage-to-stage!
        """
        return self.p._connect_in(prev.p)

    def _connect_out(self, nxt):
        """ internal helper function to connect stage to an output source.
            do not use to connect stage-to-stage!
        """
        return self.n._connect_out(nxt.n)

    def connect(self, m, pipechain):
        """ connects a chain (list) of Pipeline instances together and
            links them to this ControlBase instance:

                in <----> self <---> out
                           |    ^
                           v    |
                        [pipe1, pipe2, pipe3, pipe4]
                         |  ^   |  ^   |  ^
                         v  |   v  |   v  |
                        out-in out-in out-in

            Also takes care of allocating i_data/o_data, by looking up
            the data spec for each end of the pipechain.  i.e. it is NOT
            necessary to allocate self.p.i_data or self.n.o_data manually:
            this is handled AUTOMATICALLY, here.

            Basically this function is the direct equivalent of StageChain,
            except that unlike StageChain, the Pipeline logic is followed.

            Just as StageChain presents an object that conforms to the
            Stage API from a list of objects that also conform to the
            Stage API, an object that calls this Pipeline connect function
            has the exact same pipeline API as the list of pipeline objects
            it is called with.

            Thus it becomes possible to build up larger chains recursively.
            More complex chains (multi-input, multi-output) will have to be
            done manually.  A usage sketch follows below.
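
            A hedged usage sketch (ExampleStage is an assumed Stage-API
            class, not defined in this module; note that in this sketch
            the end-point i_data/o_data are allocated explicitly):

                m = Module()
                pipe1 = UnbufferedPipeline(ExampleStage)
                pipe2 = UnbufferedPipeline(ExampleStage)
                m.submodules.pipe1 = pipe1
                m.submodules.pipe2 = pipe2
                pipeline = ControlBase()
                pipeline.p.i_data = ExampleStage.ispec()
                pipeline.n.o_data = ExampleStage.ospec()
                pipeline.connect(m, [pipe1, pipe2])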
        """
        eqs = [] # collated list of assignment statements

        # connect inter-chain
        for i in range(len(pipechain)-1):
            pipe1 = pipechain[i]
            pipe2 = pipechain[i+1]
            eqs += pipe1.connect_to_next(pipe2)

        # connect front of chain to ourselves
        front = pipechain[0]
        #self.p.i_data = front.stage.ispec()
        eqs += front._connect_in(self)

        # connect end of chain to ourselves
        end = pipechain[-1]
        #self.n.o_data = end.stage.ospec()
        eqs += end._connect_out(self)

        # activate the assignments
        m.d.comb += eqs

    def set_input(self, i):
        """ helper function to set the input data
        """
        return eq(self.p.i_data, i)

    def ports(self):
        res = [self.p.i_valid, self.n.i_ready,
               self.n.o_valid, self.p.o_ready,
              ]
        if hasattr(self.p.i_data, "ports"):
            res += self.p.i_data.ports()
        else:
            res += self.p.i_data # assumed to be a list or tuple of Signals
        if hasattr(self.n.o_data, "ports"):
            res += self.n.o_data.ports()
        else:
            res += self.n.o_data # assumed to be a list or tuple of Signals
        return res


class BufferedPipeline(ControlBase):
    """ buffered pipeline stage.  data and strobe signals travel in sync.
        if ever the input has data and the output is not ready to accept it,
        the processed data is shunted into a temporary register.

        Argument: stage.  see Stage API above

        stage-1   p.i_valid >>in   stage   n.o_valid out>>   stage+1
        stage-1   p.o_ready <<out  stage   n.i_ready <<in    stage+1
        stage-1   p.i_data  >>in   stage   n.o_data  out>>   stage+1
                              |                       |
                            process --->----^         |
                              |                       |
                              +-- r_data ->-----------+

        input data p.i_data is read (only), is processed and goes into an
        intermediate result store [process()].  this is updated
        combinatorially.

        in a non-stall condition, the intermediate result will go into the
        output (update_output).  however if ever there is a stall, it goes
        into r_data instead [update_buffer()].

        when the non-stall condition is released, r_data is the first
        to be transferred to the output [flush_buffer()], and the stall
        condition cleared.

        on the next cycle (as long as stall is not raised again) the
        input may begin to be processed and transferred directly to output.
        A usage sketch follows.

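        A hedged usage sketch (ExampleStage is an assumed Stage-API
        class, not one defined in this module):

            pipe = BufferedPipeline(ExampleStage)
            m = Module()
            m.submodules.pipe = pipe
            # drive pipe.p.i_valid / pipe.p.i_data, monitor pipe.p.o_ready;
            # monitor pipe.n.o_valid / pipe.n.o_data, drive pipe.n.i_ready
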
    """
    def __init__(self, stage):
        ControlBase.__init__(self)
        self.stage = stage

        # set up the input and output data
        self.p.i_data = stage.ispec() # input type
        self.n.o_data = stage.ospec() # output type

    def elaborate(self, platform):
        m = Module()

        result = self.stage.ospec()
        r_data = self.stage.ospec()
        if hasattr(self.stage, "setup"):
            self.stage.setup(m, self.p.i_data)

        # establish some combinatorial temporaries
        o_n_validn = Signal(reset_less=True)
        i_p_valid_o_p_ready = Signal(reset_less=True)
        p_i_valid = Signal(reset_less=True)
        m.d.comb += [p_i_valid.eq(self.p.i_valid_logic()),
                     o_n_validn.eq(~self.n.o_valid),
                     i_p_valid_o_p_ready.eq(p_i_valid & self.p.o_ready),
                    ]

        # store result of processing in combinatorial temporary
        m.d.comb += eq(result, self.stage.process(self.p.i_data))

        # if not in stall condition, update the temporary register
        with m.If(self.p.o_ready): # not stalled
            m.d.sync += eq(r_data, result) # update buffer

        with m.If(self.n.i_ready): # next stage is ready
            with m.If(self.p.o_ready): # not stalled
                # nothing in buffer: send (processed) input direct to output
                m.d.sync += [self.n.o_valid.eq(p_i_valid),
                             eq(self.n.o_data, result), # update output
                            ]
            with m.Else(): # p.o_ready is false, and something is in buffer.
                # Flush the [already processed] buffer to the output port.
                m.d.sync += [self.n.o_valid.eq(1),      # buffered data is valid
                             eq(self.n.o_data, r_data), # flush buffer
                             self.p.o_ready.eq(1),      # clear stall condition
                            ]
                # ignore input, since p.o_ready is also false.

        # (n.i_ready) is false here: next stage is *not* ready
        with m.Elif(o_n_validn): # output register is empty
            m.d.sync += [self.n.o_valid.eq(p_i_valid),
                         self.p.o_ready.eq(1),      # keep the buffer empty
                         eq(self.n.o_data, result), # set output data
                        ]

        # (n.i_ready) false and (n.o_valid) true:
        with m.Elif(i_p_valid_o_p_ready):
            # next stage is *not* ready and the output register is full:
            # accept the incoming data, but drop o_ready (the "stall") so
            # that the previous stage sends no more until the stall clears
            m.d.sync += self.p.o_ready.eq(~(p_i_valid & self.n.o_valid))

        return m


class UnbufferedPipeline(ControlBase):
    """ A simple pipeline stage with single-clock synchronisation
        and two-way valid/ready synchronised signalling.

        Note that a stall in one stage will result in the entire pipeline
        chain stalling.

        Also note that, unlike BufferedPipeline, the valid/ready signalling
        does NOT travel synchronously with the data: the valid/ready
        signalling combines in a *combinatorial* fashion.  Therefore, a long
        pipeline chain will lengthen propagation delays.

        Argument: stage.  see Stage API, above.  A usage sketch follows
        the attribute list below.

        stage-1   p.i_valid >>in   stage   n.o_valid out>>   stage+1
        stage-1   p.o_ready <<out  stage   n.i_ready <<in    stage+1
        stage-1   p.i_data  >>in   stage   n.o_data  out>>   stage+1
                              |                       |
                            r_data                  result
                              |                       |
                              +--process ->-----------+

        Attributes:
        -----------
        p.i_data : StageInput, shaped according to ispec
            The pipeline input
        n.o_data : StageOutput, shaped according to ospec
            The pipeline output
        r_data : input_shape according to ispec
            A temporary (buffered) copy of a prior (valid) input.
            This is HELD if the output is not ready.  It is updated
            SYNCHRONOUSLY.
        result : output_shape according to ospec
            The output of the combinatorial logic.  it is updated
            COMBINATORIALLY (no clock dependence).
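
        A hedged usage sketch (ExampleStage and some_input_signals are
        assumptions for illustration, not defined in this module):

            pipe = UnbufferedPipeline(ExampleStage)
            m = Module()
            m.submodules.pipe = pipe
            m.d.comb += pipe.set_input(some_input_signals) # matches ispec
            # data transfers on every clock where p.i_valid & p.o_ready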
    """

    def __init__(self, stage):
        ControlBase.__init__(self)
        self.stage = stage

        # set up the input and output data
        self.p.i_data = stage.ispec() # input type
        self.n.o_data = stage.ospec() # output type

    def elaborate(self, platform):
        m = Module()

        data_valid = Signal() # is data valid or not
        r_data = self.stage.ispec() # input type
        if hasattr(self.stage, "setup"):
            self.stage.setup(m, r_data)

        p_i_valid = Signal(reset_less=True)
        m.d.comb += p_i_valid.eq(self.p.i_valid_logic())
        m.d.comb += self.n.o_valid.eq(data_valid)
        m.d.comb += self.p.o_ready.eq(~data_valid | self.n.i_ready)
        m.d.sync += data_valid.eq(p_i_valid | \
                                  (~self.n.i_ready & data_valid))
        with m.If(self.p.i_valid & self.p.o_ready):
            m.d.sync += eq(r_data, self.p.i_data)
        m.d.comb += eq(self.n.o_data, self.stage.process(r_data))
        return m


class PassThroughStage(StageCls):
    """ a pass-through stage which has its input data spec equal to its output,
        and "passes through" its data from input to output.
    """
    def __init__(self, iospecfn):
        self.iospecfn = iospecfn
    def ispec(self): return self.iospecfn()
    def ospec(self): return self.iospecfn()
    def process(self, i): return i


class RegisterPipeline(UnbufferedPipeline):
    """ A pipeline stage that delays by one clock cycle, creating a
        sync'd latch out of o_data and o_valid as an indirect byproduct
        of using PassThroughStage.  A usage sketch follows.
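
        A hedged usage sketch (the iospecfn below is an illustrative
        assumption, not a function defined in this module):

            def iospecfn():
                return Signal(16, name="data")

            delay_stage = RegisterPipeline(iospecfn)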
    """
    def __init__(self, iospecfn):
        UnbufferedPipeline.__init__(self, PassThroughStage(iospecfn))
