ChanServ changed the topic of #nmigen to: nMigen hardware description language · code at · logs at
peepsalot has joined #nmigen
<peepsalot> hi, is this an ok place to ask general migen questions? i'm new to this and a little confused about nmigen vs migen
<awygle> don't see why not
<peepsalot> Also just curious how closely litex and migen projects are linked. i have a de10-nano board, and saw that someone recently added support in litex, but not to base migen
<awygle> my understanding is that this is a complex topic with some fairly strong feelings behind it, but basically, litex is, or was, a fork of migen which came about due to differences of opinion between maintainers, while nmigen is a new HDL similar to migen but taking into account the lessons learned from migen.
<awygle> litex also includes a bunch of useful abstractions and utility libraries over and above the base of migen
<emily> litex is also moving to nmigen
<emily> aiui
<awygle> there's been some discussion that that would be Good but i'm not aware of any specific plans or roadmaps for that. if you are, please share :)
<emily> which i suspect will shift the active community centre of gravity away from migen pretty quickly
<emily> hmm, maybe i misheard, or maybe i shouldn't have shared that, not sure which :p
<emily> at the very least there's plans for official nmigen compatibility shims
<awygle> official to nmigen or official to litex?
<emily> my understanding is that that is with an eye towards migration
<emily> latter
<awygle> yeah this all basically jives with my understanding too
<emily> it came up in ##openfpga when the topic of a community-maintained litex-nmigen that did the necessary compat seds came up, basically "probably don't because that'll be official litex soon"
<emily> but this is just from my memory
<awygle> maybe i'm just more hesitant to assume such things will actually happen :p
<awygle> but i certainly have heard some discussion around all this. i'd love it if there was a specific "here's what needs to be done" roadmap - i would be very interested in helping out with execution if there was
<awygle> (cc _florent_, whitequark)
<awygle> not that i'm unwilling to be involved in the design aspects as well but i don't want to step on anybody's toes
<awygle> .... anyway hope that answers your questions peepsalot :p sorry we got off on a bit of a tear there
proteus-guy has joined #nmigen
<peepsalot> yes that helps, thanks
<peepsalot> i haven't dug into codebases yet, but I'm wondering if any particular project is collecting generic modules that can be used to build up pieces/stages of a cpu core, like if you could plug an ALU, L1 cache, etc. to build up a new type of core from scratch, or are those typically developed separately for each core?
<peepsalot> is there a generic open FPU design yet?
<awygle> i don't know. i'd be surprised if there was no open FPU, but the rest of that stuff i wouldn't be surprised if it was pretty specific. but i rarely work on/with CPU stuff so i don't know
<peepsalot> i'm also curious if migen can be used to help with RE of intel FPGA? or i guess it's probably too high-level / pre-placement to be useful in the sort of fuzzing needed?
futarisIRCcloud has joined #nmigen
<awygle> is there support in nmigen for moving through multiple FSM states at once, in a single clock cycle?
<awygle> say i have states A, B, and C, and transitions A->B and B->C with guards "foo == 1" and "bar == 1". i'm in state A, and foo and bar both rise on the same cycle. in the next cycle, i am in state C. is that possible, or do i have to duplicate the "bar == 1" guard in A with an A->C transition?
<ktemkin> any kind of “sequencing” that’s not on the clock edge would be outside the FSM model (and potentially dangerous territory to start thinking in; since the possibility space explodes combinatorially when you allow things to chain)
<awygle> yeah, that's more or less what i figured
<Sarayan> if I understand sync designs correctly (and I don't), foo and bar don't actually "rise", they happen to be 1 on a given clock edge
<ktemkin> you’re better off explicitly thinking of it in terms of the decision tree that it is
<awygle> it's so many LOC tho :p
<Sarayan> the LOC count is a compiler issue, not a design issue
<ktemkin> you can still use python abstraction to reduce duplication
<awygle> mm, spose that's true. i'll try to do that
<Sarayan> hey, since there's live people, I have a design issue in nmigen, as in, I don't know how to design a thing
<awygle> still not used to comingling the python-y bits and the hdl-y bits
<awygle> sure, shoot
<ktemkin> Sarayan: re: “rise”, that’s just a way of saying a value was e.g. 0 one cycle and 1 the next
<Sarayan> I have a device with two groups of ports and one ram. I have a clock, and two "phase" signals, which are 1 on specific clock rising edges, and alternating (but maybe with gaps, depends on the rest of the system)
<ktemkin> it changed, so you know it rose somewhere between the last clock edge and (a setup time before) the current one
<Sarayan> ktemkin: yeah, but in contrast with verilog/vhdl, it's not an event, it's just a state. The only event is the clock
<Sarayan> you have to think as in "here's the state when the clock event happens, what is my next state" and not "when that state change happens do X"
<Sarayan> pretty much what you say
<Sarayan> anyway, so I have these two phase selectors, and that ram
<ktemkin> (in most designs, those ‘events’ generally only exist in simulation; anyway — in most cases, syntaxes like rising_edge are just hints at what the clock is)
<Sarayan> I have two address ports, two data ports, r/w lines, chip select lines
<Sarayan> I'd like the chip to answer requests on the ports, sampling the address/select/rw on the associated phase signal, and on reads give the result on the next phase, knowing that the ram in the middle is shared
<Sarayan> and between comb and sync and Memory, I'm failing at making something that works, and in particular can do back-to-back ram accesses
<Sarayan> and I'm not sure what my failings are and when cxxrtl fucks up
<Sarayan> so my question is, how should I design such a thing? Should I use sync or async ram ports? Should I comb or sync the address and data ports of the ram? How do I ensure that the output data ports don't blink between 0 and the expected value when the other side is running?
<ktemkin> I’m on my phone and not in a good position to respond, but
<ktemkin> it sounds like you have a two-stage pipeline, with “decode” (provide addresses) as your first stage and “read-out” (get data) as your second
<ktemkin> on the read side
<ktemkin> and then the same pipeline with "decode" (provide addresses) as your first stage and "write" as the second, on the write
<ktemkin> your read addresses would alternate between A and B; and then produce an output on the next clock cycle -- so you'd have [A address, B output], followed by [B address, A output], flipping back and forth constantly
<ktemkin> to indicate when that output is valid, you have two decent choices and one kinda-icky one: you can register the outputs and incur a one-cycle delay
<ktemkin> but have the output values always be valid
<ktemkin> you can provide a validity signal (like a chip select) that indicates when the output can be considered valid, which will be high effectively every other read
<ktemkin> or, if you're e.g. leaving your clock domain and have the timing slack, you can capture those output values on the non-active edge of the clock
<ktemkin> [you could also in theory use a combinational read port, but usually this is challenging to efficiently map to real hardware]
<ktemkin> dunno if that's a lucid enough explanation to be helpful; I'm exhausted and about to fall asleep
<ktemkin> (there's also a good summary of what technologies support async/"comb-domain" read and where here:
<Sarayan> there's no read or write side, any side can do read or write
<Sarayan> I can have a one-clock delay, but outputs must be stable, that's pretty much a requirement (68000 on one side, video pipeline on the other fwiw :-)
<Sarayan> but I never do two accesses at the same time, it's alternating
<ktemkin> yeah; that’s what I’m describing — you have two different modules sharing a read port and then two different ones sharing a write port; both alternate as described above (and thus effectively each gets a read port and a write port at half the clock rate)
<ktemkin> anyways, sleep for me now
<Sarayan> enjoy sleep
Asu has joined #nmigen
<zignig> awygle: ping.
<ZirconiumX> peepsalot: It's entirely feasible to do it all with nMigen
<peepsalot> ZirconiumX: seems like ability to use python would be pretty convenient if so :)
<_whitenotifier-3> [nmigen] mszep opened issue #327: pysim: option to capture simulation output in a python object -
<_whitenotifier-3> [nmigen] whitequark commented on issue #327: pysim: option to capture simulation output in a python object -
<awygle> zignig: pong
<_whitenotifier-3> [nmigen] mszep commented on issue #327: pysim: option to capture simulation output in a python object -
<_whitenotifier-3> [nmigen] mszep edited a comment on issue #327: pysim: option to capture simulation output in a python object -
Degi has joined #nmigen
<_whitenotifier-3> [nmigen] awygle commented on issue #327: pysim: option to capture simulation output in a python object -
emily has quit [Ping timeout: 240 seconds]
<awygle> hm, is there a way to return a state machine state block from a function? mimicking the effect of the with m.State thing?
<ktemkin> depends what you mean by "return a block"
<ktemkin> you can create a function that manipulates a module, e.g. adding a state
<awygle> spose i could take m as an argument yeah
electronic_eel has quit [Ping timeout: 258 seconds]
electronic_eel has joined #nmigen
electronic_eel has quit [Ping timeout: 255 seconds]
electronic_eel has joined #nmigen
_whitelogger has joined #nmigen
Asu has quit [Ping timeout: 272 seconds]
<zignig> awygle: I have managed to pass modules to a function to add logic after the fact.
<zignig> awygle: I am getting close to binding your stream code, looks good so far.
<zignig> the only thing that _might_ be a thing is to have a reset and a stall signal on the stream. That said, a base "stream" that can be extended would probably be better.
<awygle> To stall you just drop "valid" or "ready" depending
<awygle> I thought about reset but my hope is that streams don't necessarily have storage. The FIFO stream should definitely expose reset though
<zignig> even if they don't have storage they may contain state inside the modules that could do with a reset.
<zignig> it depends on usage I suppose, just having a composable skid buffer is a useful construct anyway ;)
<awygle> as i think about it the module may or may not need to be reset but that doesn't have anything to do with the stream interface. what if a module takes in multiple streams, or has a stream and a bus interface? which one gets to be the reset?
<awygle> you're not the first one to say it tho so i could certainly be wrong
<zignig> that's a good point... the module may need a reset, but it should be done as control, not in the streaming interface.
<awygle> whitequark: Sarayan: either of you around? want to ask a bit about cxxsim vs pysim for an application