ChanServ changed the topic of #nmigen to: nMigen hardware description language · code at https://github.com/nmigen · logs at https://freenode.irclog.whitequark.org/nmigen · IRC meetings each Monday at 1800 UTC · next meeting TBD
lf has quit [Ping timeout: 268 seconds]
lf has joined #nmigen
<_whitenotifier> [YoWASP/yosys] whitequark pushed 1 commit to develop [+0/-0/±1] https://git.io/JL5md
<_whitenotifier> [YoWASP/yosys] whitequark 728f920 - Update dependencies.
<lsneff> whitequark: ready for some sample files :)
<_whitenotifier> [nmigen] anuejn reviewed pull request #547 commit - https://git.io/JL5Yx
<_whitenotifier> [nmigen] whitequark reviewed pull request #547 commit - https://git.io/JL5sW
electronic_eel has quit [Ping timeout: 240 seconds]
electronic_eel has joined #nmigen
<whitequark> lsneff: I should be able to get some by Monday or so, maybe earlier, maybe later as life allows
PyroPeter_ has joined #nmigen
PyroPeter has quit [Ping timeout: 240 seconds]
PyroPeter_ is now known as PyroPeter
Degi_ has joined #nmigen
Degi has quit [Ping timeout: 240 seconds]
Degi_ is now known as Degi
<lsneff> whitequark: no rush
falteckz has joined #nmigen
<falteckz> Should "reset" functionality go last in elaborate? That makes sense right, since with m.If(reset):, if the last thing, has higher precedence than the logic before it?
<whitequark> that's essentially how the built-in clock domain reset logic works
<whitequark> you might also want to look at ResetInserter
<whitequark> which is... again the same thing, but automated to be less error-prone
<falteckz> This is only for a partial reset of some hardware - whereas ResetInserter is for a ClockDomain, or is that a misunderstanding?
<whitequark> correct. if you want a partial reset, adding it last in elaborate() is the right way to do it.
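
A minimal sketch of that "partial reset last" pattern, assuming a simple counter module (the class and signal names here are illustrative, not from the log):

    from nmigen import Elaboratable, Module, Signal

    class Counter(Elaboratable):
        def __init__(self):
            self.clear = Signal()   # partial, module-local reset
            self.count = Signal(8)

        def elaborate(self, platform):
            m = Module()
            m.d.sync += self.count.eq(self.count + 1)
            # Last statement wins: when clear is high, the assignment
            # below overrides the increment above.
            with m.If(self.clear):
                m.d.sync += self.count.eq(0)
            return m
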
emeb_mac has quit [Quit: Leaving.]
<falteckz> In general, reset being combinatorial is fine?
<whitequark> hm
<whitequark> what do you mean by that?
<falteckz> with m.If(self.reset): \n m.d.comb += [ self.data_out.eq(0), self.mem_pointer.eq(0), self.ready.eq(0) ]
<falteckz> This behaves like a mux where reset is the input select, yes? and it essentially just masks data_out, mem_pointer and ready from their backing registers (when they are used with sync domain)
<whitequark> yes, that is correct
<whitequark> also, you can only do this explicitly; ResetInserter or built-in logic will never 'reset' combinatorial signals in this way
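
For comparison, a sketch of the automated version mentioned above, assuming illustrative signal names; ResetInserter wraps an elaboratable so an extra signal resets all of its sync (stateful) flops, leaving comb signals untouched:

    from nmigen import Module, Signal, ResetInserter

    m = Module()
    count = Signal(8)
    m.d.sync += count.eq(count + 1)

    clear = Signal()
    # Every flop driven from m.d.sync now returns to its reset value
    # whenever clear is high; comb signals are left alone.
    dut = ResetInserter(clear)(m)
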
<falteckz> built-in logic is just hitting reset on the flip-flops, yes?
<whitequark> yeah, it's pretty much just implicitly calling ResetInserter on every domain with a reset
<whitequark> the idea here is that you can only reset something that has state
<whitequark> and comb signals do not
<falteckz> I am now suspect of my logic - if I clear the reset line, the data is going to return? Whereas in reality I wanted the FF to clear too
<whitequark> what you said earlier (This behaves like a mux where reset is the input select) is exactly correct
<whitequark> whether this is the behavior you want I do not have enough information to say
<falteckz> Yeah I only considered the asserted case when evaluating the suitability. Is there an async reset for flip-flops?
<whitequark> yes, you can create a clock domain with an async reset
<whitequark> ClockDomain(async_reset=True)
<falteckz> My module could have its own domain, based off the parent 'sync' domain, but with an async reset, that is designed to just reset the flops inside my module?
<falteckz> I'm building a shift register that takes in ints, and shifts out bits. I want to be able to reset the shifting parts of it.
<whitequark> yes, you could do that. in this case, I would create a local clock domain with an async reset (so that it will be visible just in that module), and do `m.d.comb += cd_mydomain.clk.eq(ClockSignal())` so that it uses the conventional `sync` clock
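
A sketch of that arrangement, with an illustrative domain name:

    from nmigen import Module, ClockDomain, ClockSignal

    m = Module()
    cd_shift = ClockDomain("shift", async_reset=True, local=True)
    m.domains += cd_shift
    m.d.comb += cd_shift.clk.eq(ClockSignal("sync"))  # reuse the parent clock
    # Flops added via m.d.shift now reset asynchronously through cd_shift.rst.
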
Bertl_oO is now known as Bertl_zZ
<whitequark> which exact reset arrangement you will build is up to you, but it can be nontrivial because async resets introduce more timing hazards
<falteckz> nextpnr is going to be able to find the longest path and warn about insufficient clock speed, yes?
<whitequark> it's mostly not about speed, but about asserting and releasing the reset
<falteckz> Ah yes, hold time and such aspects
<whitequark> yes. you are creating a new async control domain, and that is not just for show
<whitequark> for releasing the reset, ResetSynchronizer will help you. but for asserting it, there isn't anything generic
<whitequark> do you just want to have the output go to the reset value in the same cycle as the reset input is asserted?
<falteckz> I do - yes. I don't want to have to assert and deassert reset across two clocks
<falteckz> period#1: Reset, period#2 Deassert Reset, and begin shifting data.
<falteckz> rather than Reset, Unreset, Shift
<whitequark> ok yeah you don't want async reset here
<whitequark> add a sync signal for storage, and a comb signal that muxes between the storage and the reset value
<whitequark> essentially what you wanted originally
<whitequark> then, reset them at the same time
<whitequark> or just this: https://paste.debian.net/1179381/
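
The paste's contents aren't preserved in the log; a guess at the pattern whitequark describes in words above (a sync signal for storage plus a comb mux to the reset value; all names illustrative):

    from nmigen import Module, Signal, Mux

    m = Module()
    reset = Signal()
    stored = Signal(8)                 # the flop ("storage")
    out = Signal(8)                    # what downstream logic sees

    m.d.sync += stored.eq(stored + 1)  # illustrative update
    with m.If(reset):
        m.d.sync += stored.eq(0)       # flop clears on the next edge...
    m.d.comb += out.eq(Mux(reset, 0, stored))  # ...output is 0 this cycle
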
<falteckz> by calling ResetSignal(), reset is a signal added to the module?
<falteckz> or is reset always there?
<whitequark> ResetSignal() is the same as ResetSignal("sync"). here's how this works. you know how you add statements to a clock domain with m.d.sync (generally, m.d.<domain>) but the actual domain object isn't yet created at that time?
<falteckz> I didn't know the domain was lazy created
<whitequark> lazy creation is not quite how it works, but it is a good first approximation
<falteckz> So at elaboration time, domains do not exist
<whitequark> in m.d.<domain>, and in ResetSignal("<domain>"), and in a few other places, the domains are late bound: they are referred to by their name
<whitequark> you can create domains during elaboration yourself, and references to them will be bound after elaboration according to the places in the hierarchy where you add them
<whitequark> it's a bit like variable scoping
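
A short sketch of the late binding described above (the "video" domain name is illustrative):

    from nmigen import Module, Signal, ClockDomain, ResetSignal

    m = Module()
    beat = Signal()
    m.d.video += beat.eq(~beat)        # "video" is just a name at this point
    m.domains += ClockDomain("video")  # the reference binds to this object

    in_reset = Signal()
    m.d.comb += in_reset.eq(ResetSignal("video"))  # also bound by name
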
<falteckz> Makes sense
<falteckz> Hierarchy being the nested submodules and their nested if statements and so on?
<whitequark> yeah
<whitequark> if you make any domains internal to a module, you should mark them as local
<whitequark> (this really ought to be the default, but, you know, legacy code)
<whitequark> local domains only propagate to submodules
<whitequark> non-local domains propagate upwards, except that if there is a name clash of two non-local domains at the same hierarchy level, they are deterministically renamed
jeanthom has joined #nmigen
<falteckz> m.d.domain = is how I make it local ?
<whitequark> no. ClockDomain(local=True)
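
A sketch of a local domain, with an illustrative name; the comments restate whitequark's propagation rules above:

    from nmigen import Module, ClockDomain, Signal

    m = Module()
    m.domains += ClockDomain("scan", local=True)
    tick = Signal()
    m.d.scan += tick.eq(~tick)
    # local=True: "scan" is visible in this module and its submodules only.
    # Without it, the domain would also propagate up to the parent, where a
    # clash with a sibling's non-local "scan" would be deterministically
    # renamed.
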
daknig2 has joined #nmigen
daknig2 has quit [Changing host]
daknig2 has joined #nmigen
DaKnig is now known as DaKnig3
daknig2 is now known as DaKnig
DaKnig3 is now known as DaKnig2
DaKnig has quit [Client Quit]
DaKnig2 is now known as DaKnig
jeanthom has quit [Ping timeout: 264 seconds]
nfbraun has joined #nmigen
<falteckz> This will latch and mux in one go, right? https://paste.debian.net/hidden/e4a2c101/
<falteckz> async, bit_ctr and led_ptr get muxed, and then on next clock rise, the backing flops get written?
lambda has quit [Quit: WeeChat 3.0]
lambda has joined #nmigen
<falteckz> Ah I see that causes a driver conflict, I see why you had the extra signal
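
The hidden paste isn't visible here, but a minimal sketch of the kind of driver conflict falteckz likely hit (names illustrative): nMigen rejects a signal driven from two domains in the same module, hence the separate storage signal plus mux in the earlier paste:

    from nmigen import Module, Signal

    m = Module()
    led = Signal()
    m.d.sync += led.eq(~led)
    m.d.comb += led.eq(0)   # rejected: led is already driven from m.d.sync
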
chipmuenk has joined #nmigen
<_whitenotifier> [nmigen] Laksen opened issue #570: Download link on Getting started doc page is broken - https://git.io/JL52j
jeanthom has joined #nmigen
korken89 has joined #nmigen
Laksen has joined #nmigen
jeanthom has quit [Ping timeout: 256 seconds]
<d1b2> <dub_dub_11> if I have two records, what is the best way to make a "macro" to connect the two?
<d1b2> <dub_dub_11> ah I think I found a way
<d1b2> <dub_dub_11> I made a function: def ac97_dac_connect(domain, source, sink): domain += [ sink.dac_tag.eq(source.dac_tag), ...
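
A hedged reconstruction of that helper (only the dac_tag field appears in the log; the remaining fields are elided):

    def ac97_dac_connect(domain, source, sink):
        # domain is e.g. m.d.comb; source and sink are Records sharing a layout
        domain += [
            sink.dac_tag.eq(source.dac_tag),
            # ... one .eq() per remaining shared field (elided in the log)
        ]

For Records with matching layouts, nMigen also provides a connect() method on Record itself, which returns the equivalent statements.
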
Bertl_zZ is now known as Bertl
modwizcode has joined #nmigen
modwizcode has quit [Quit: Going offline, see ya! (www.adiirc.com)]
modwizcode has joined #nmigen
emeb has joined #nmigen
cr1901_modern1 has joined #nmigen
cr1901_modern2 has joined #nmigen
cr1901_modern has quit [Ping timeout: 256 seconds]
modwizcode has quit [Quit: Later]
modwizcode has joined #nmigen
cr1901_modern1 has quit [Ping timeout: 260 seconds]
korken89 has quit [Quit: Ping timeout (120 seconds)]
Bertl is now known as Bertl_oO
cr1901_modern2 has quit [Quit: Leaving.]
cr1901_modern has joined #nmigen
chipmuenk has quit [Ping timeout: 260 seconds]
peeps[zen] has joined #nmigen
peepsalot has quit [Ping timeout: 260 seconds]
sakirious has joined #nmigen
peeps[zen] is now known as peepsalot
korken89 has joined #nmigen
Laksen has quit [Quit: Leaving]
jeanthom has joined #nmigen
<mithro> @whitequark - Was just pointing someone to the resource stuff in nmigen-boards and thought -- Did I ever share litespi's module stuff with you? https://github.com/litex-hub/litespi/blob/master/litespi/modules/generated_modules.py (https://github.com/litex-hub/litespi/blob/master/litespi/modules/modules.py)
emeb_mac has joined #nmigen
jeanthom has quit [Ping timeout: 240 seconds]
chipmuenk has joined #nmigen
modwizcode has quit [Quit: Later]
Chips4Makers has quit [Remote host closed the connection]
modwizcode has joined #nmigen
<korken89> Quick question on the `AsyncFIFOBuffered`: is it by design that the `w_level` jumps around a bit when writing to this FIFO? When clocking 3 words into it I get the `w_level` to be (per clock) 0, 1, 2, 4, 4, 4, 4, 3 (forever) - for some reason I first get 4 for four clock cycles before it goes back to the correct 3.
<vup> do you have a example for that?
<korken89> Sure, I can paste the trace
<vup> the file would be nice :)
<korken89> VCD + GTKW?
<vup> I mean the python file generating that
<vup> w_level should never be bigger than the actual number of words in the fifo
<korken89> I'll just clean it up to only show this
<korken89> There, I pushed it here https://github.com/korken89/ovio_core_firmware
<korken89> It only shows the affected FIFO when running `python -m gateware.ft600_sim`
<korken89> That should be generated
<korken89> I'm using the latest nmigen master
<vup> yeah got it
<korken89> Cool!
<agg> probably you already know this but just in case, you can add multiple statements to domains at once, like m.d.comb += [a.eq(b), c.eq(d)], including split over multiple lines, which can be a bit neater than lots of m.d.comb +=
<agg> and 'yield Tick(domain="sync")' is the same as just saying 'yield'
<korken89> agg: Yeah, I'm just moving statements around so much right now :)
<agg> yea fair
<korken89> Oh, that one I did not know
<korken89> Thanks!
<agg> (also the first state of an FSM is its reset state, so you don't need to write reset=READY explicitly if you don't want)
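
A sketch of agg's FSM note, with illustrative state names: the first state defined is the reset state, so reset="READY" would be redundant here:

    from nmigen import Module, Signal

    m = Module()
    go = Signal()
    with m.FSM():                 # same as m.FSM(reset="READY")
        with m.State("READY"):    # first state defined = reset state
            with m.If(go):
                m.next = "RUN"
        with m.State("RUN"):
            m.next = "READY"
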
<korken89> Neat
<korken89> There is a lot to learn :D
<vup> korken89: ok, so pretty sure this is kinda expected behaviour and kinda a bug
<korken89> Oh, a little of both :)
<vup> so first of all the fifo is using a AsyncFFSynchronizer here: https://github.com/nmigen/nmigen/blob/b466b724fe9f62140062afc9ecde9a920a261487/nmigen/lib/fifo.py#L516 and that should be just a FFSynchronizer, otherwise only one edge is synchronized
<vup> now furthermore, because of the async nature of the fifo, the w_level can not be perfect in all cases
<korken89> Yeah, then it would not be async
<vup> for example it takes some cycles for the read of one element to propagate to the write side of the logic / w_level
<korken89> What would change if one uses an `FFSynchronizer`?
<vup> you would only get two cycles of 4 instead of four
<korken89> Ah
<korken89> I mean, I personally see no issue here - I thought it just did a conservative estimate until all synchronization was done
<vup> yep thats how it works
<korken89> Should I open an issue on this?
<vup> ill open a PR in a sec
<korken89> Cool!
<korken89> Thanks for the fast analysis and help! :D
<vup> but right now, only one edge of that internal signal is synchronized (the 1 -> 0 edge), which could cause metastability on the (0 -> 1) edge, so using AsyncFFSynchronizer is definitely wrong
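
A sketch of the fix vup describes, assuming an illustrative "read" domain exists in the design: FFSynchronizer carries a level across clock domains through a two-flop chain, synchronizing both edges, whereas AsyncFFSynchronizer is meant for async-reset-style signals where only deassertion is synchronized:

    from nmigen import Module, Signal
    from nmigen.lib.cdc import FFSynchronizer

    m = Module()
    flag_w = Signal()   # level signal in the write clock domain
    flag_r = Signal()   # two-flop-synchronized copy in the read domain
    m.submodules += FFSynchronizer(flag_w, flag_r, o_domain="read")
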
<korken89> Makes sense
chipmuenk has quit [Quit: chipmuenk]
modwizcode has quit [Quit: Later]
<_whitenotifier> [nmigen] rroohhh opened pull request #571: AsyncFIFOBuffered {r,w}_level fixes - https://git.io/JL5hH
<vup> korken89: just 3 bugs found: https://github.com/nmigen/nmigen/pull/571
<korken89> Oh
<vup> well thank you for paying close attention to the fifo behaviour :)
<korken89> Great job!
<korken89> No problem :)
<_whitenotifier> [nmigen] codecov[bot] commented on pull request #571: AsyncFIFOBuffered {r,w}_level fixes - https://git.io/JL5hN
<_whitenotifier> [nmigen] codecov[bot] edited a comment on pull request #571: AsyncFIFOBuffered {r,w}_level fixes - https://git.io/JL5hN