ChanServ changed the topic of #nmigen to: nMigen hardware description language · code at https://github.com/nmigen · logs at https://freenode.irclog.whitequark.org/nmigen · IRC meetings each Monday at 1800 UTC · next meeting August 24th
jeanthom has quit [Ping timeout: 258 seconds]
jaseg has quit [Ping timeout: 240 seconds]
jaseg has joined #nmigen
peepsalot has joined #nmigen
peeps[zen] has quit [Ping timeout: 258 seconds]
nelgau has quit [Remote host closed the connection]
nelgau has joined #nmigen
nelgau has quit [Ping timeout: 256 seconds]
jaseg has quit [Ping timeout: 240 seconds]
jaseg has joined #nmigen
electronic_eel has quit [Ping timeout: 240 seconds]
electronic_eel has joined #nmigen
<_whitenotifier-3> [nmigen/nmigen] whitequark pushed 1 commit to master [+0/-0/±7] https://git.io/JUUj7
<_whitenotifier-3> [nmigen/nmigen] whitequark 0802f94 - lib.cdc: in AsyncFFSynchronizer(), rename domain= to o_domain=.
<_whitenotifier-3> [nmigen] whitequark closed issue #467: Inconsistency between a parameter name for AsyncFFSynchronizer and FFSynchronizer - https://git.io/JJiVT
<_whitenotifier-3> [nmigen/nmigen] github-actions[bot] pushed 1 commit to gh-pages [+0/-0/±13] https://git.io/JUUjx
<_whitenotifier-3> [nmigen/nmigen] whitequark 9096090 - Deploying to gh-pages from @ 0802f943ba5c0919b07659c3cad4da14c32f3264 🚀
DaKnig has quit [Ping timeout: 240 seconds]
DaKnig has joined #nmigen
nelgau has joined #nmigen
nelgau has quit [Ping timeout: 258 seconds]
PyroPeter_ has joined #nmigen
PyroPeter has quit [Ping timeout: 265 seconds]
PyroPeter_ is now known as PyroPeter
nelgau has joined #nmigen
nelgau has quit [Ping timeout: 246 seconds]
Degi has quit [Ping timeout: 240 seconds]
Degi has joined #nmigen
<_whitenotifier-3> [nmigen/nmigen] whitequark pushed 1 commit to master [+0/-0/±2] https://git.io/JUTv0
<_whitenotifier-3> [nmigen/nmigen] whitequark cb81618 - sim._pyrtl: fix miscompilation of -(Const(0b11, 2).as_signed()).
<_whitenotifier-3> [nmigen] whitequark closed issue #473: Signed math on Cat gives incorrect results - https://git.io/JJ71R
<_whitenotifier-3> [nmigen] whitequark commented on issue #473: Signed math on Cat gives incorrect results - https://git.io/JUTvE
<_whitenotifier-3> [nmigen/nmigen] github-actions[bot] pushed 1 commit to gh-pages [+0/-0/±13] https://git.io/JUTvg
<_whitenotifier-3> [nmigen/nmigen] whitequark 4da4359 - Deploying to gh-pages from @ cb81618c28d9cbcf1713bbcf632c740a12b30d45 🚀
Yehowshua has joined #nmigen
<Yehowshua> Creating an nMigen memory did not seem to map down to BRAMs on ice40 with Yosys
<whitequark> Yehowshua: what ports are you using?
<Yehowshua> 1 read port, 1 write port
<whitequark> sync or async?
<Yehowshua> Looking at PicoSOC, Wolf uses explicit SPram
<whitequark> hm
<Yehowshua> um, read is comb, write is sync
<whitequark> BRAM and SPRAM are different
<whitequark> ah
<whitequark> ok so, two things
<whitequark> first, BRAM only has synchronous read ports
<whitequark> if you request a combinatorial read port you'll get it bitblasted
<whitequark> second, SPRAM, in case you want to use that, is not inferred by Yosys at all at the moment
<Yehowshua> Ah ok
<Yehowshua> by bitblasted, do you mean implemented all in logic/DFFs?
<whitequark> yeah
<whitequark> it would be more correct to call it "FFRAM" (in Yosys terminology, that is)
<whitequark> FFRAM = memory in FFs, LUTRAM = memory in repurposed LUTs, also known as "DRAM", BRAM = memory in dedicated blocks
<Yehowshua> gotcha
<Yehowshua> FPGAs are so painful...
<Yehowshua> progress, so slow
<whitequark> mmm
<whitequark> the issue you're currently facing is what I'd call "essential complexity"
<whitequark> i.e. it's not just a tooling issue, it's there for a reason. the domain is complex
<Yehowshua> yup yup
<whitequark> we have https://github.com/nmigen/nmigen/issues/14 but I don't think it would help you much
<Yehowshua> well, here's to my second can of redbull
<Yehowshua> what night is this now?
<whitequark> you can also do `mem.attrs["ram_block"] = 1`, which (in Yosys and Synplify) will force the memory to be synthesized as BRAM
<whitequark> or fail the synthesis entirely
<whitequark> (well, that attribute is in 1364.1, but vendors don't always respect that anyway...)
<Yehowshua> that snippet is nmigen?
<whitequark> yes
<Yehowshua> coolio
<whitequark> in Verilog it'd be (*ram_block*)
<Yehowshua> well this is helpful, thx
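Putting the above together, a minimal sketch (module name, width, and depth are illustrative, not from the log) of an nMigen memory that should map to iCE40 BRAM:

    from nmigen import Elaboratable, Memory, Module

    class BlockRam(Elaboratable):
        def __init__(self):
            self.mem = Memory(width=16, depth=512)
            # Force the memory to be synthesized as BRAM (Yosys/Synplify),
            # or fail synthesis entirely if that is impossible.
            self.mem.attrs["ram_block"] = 1

        def elaborate(self, platform):
            m = Module()
            # iCE40 BRAM only has synchronous read ports; a comb read port
            # would get bitblasted into FFs instead.
            m.submodules.rdport = self.mem.read_port(domain="sync")
            m.submodules.wrport = self.mem.write_port(domain="sync")
            return m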
Yehowshua has quit [Remote host closed the connection]
<_whitenotifier-3> [nmigen] pepijndevos commented on issue #473: Signed math on Cat gives incorrect results - https://git.io/JUTf4
lkcl has quit [Ping timeout: 265 seconds]
hitomi2507 has joined #nmigen
<DaKnig> I have written the simplest VGA signal generator that I could, and it does not work on the FPGA (but it does in sim!). What should I check? I am kinda desperate at this point.
<d1b2> <Darius> how does it not work?
<d1b2> <Darius> (what are the symptoms)
<DaKnig> I double and triple checked the resource description for the VGA board, I tried changing the pin numbering to 0-indexing (ofc this didnt work as wq said)
<d1b2> <emeb> how are you testing it on hw?
<DaKnig> @Darius it synthesizes fine, the screen doesnt detect the signal as valid
<DaKnig> I have connected a monitor that accepts VGA to it
<d1b2> <Darius> do you see sync pulses?
<d1b2> <Darius> (on a scope)
<DaKnig> I am confident that the monitor works
<DaKnig> no I dont have a scope
<DaKnig> sadly
<d1b2> <Darius> what sort of simulation are you doing?
<d1b2> <Darius> obviously you can't plug your monitor into the simulator so..
<DaKnig> I could upload the source files (just two) but I doubt anybody has exactly those specific hardware components
<DaKnig> I used a very simple simulation; connected this to pysim, generated some frames, looked at it via gtkwave
<d1b2> <emeb> I had a similar issue with my vga design. There were a few different issues: 1) The timing I used was off just enough that the monitor rejected it. 2) One of my tries used the ice40 HFOSC which has too much jitter for a stable picture. Switching to a crystal timing source got it working.
<DaKnig> the system requires no input and should, even if the state is broken, return to a valid state *at some point*
<DaKnig> @Darius but I can see the hcount, vcount at the start and end of the sync pulses, width of the frame etc
<DaKnig> @emeb I had similar issues before; usually the symptoms show that the screen would kinda detect something, but be very unstable. nothing of this kind here.
<d1b2> <emeb> yes - that's what I saw also - lots of image distortion, dropping in / out.
<DaKnig> nothing of this kind here
<d1b2> <emeb> (when using HFOSC)
<d1b2> <emeb> Are you certain the monitor supports the resolution & refresh rate you're trying to generate?
<DaKnig> I doubt any monitor from the last 20 years does not support 800x600
<DaKnig> at 60fps
<d1b2> <emeb> You might be surprised. 🙂
<DaKnig> I am now trying to invert sync polarity; see what it does (should not do anything, since monitors detect and invert that themselves)
<d1b2> <Darius> if there's anything you can assume it's that computer hardware is garbage
<whitequark> DaKnig: if you're uncertain whether the pinout is correct, try setting pins high one at a time and use your multimeter to determine if the right pin changes value
<DaKnig> not sure my cheap multimeter would detect changes @60fps
<DaKnig> but I'll try this
<DaKnig> ... I should really get a scope
<emeb_mac> +1
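A rough sketch of the probe whitequark suggests, assuming a hypothetical "vga" resource; drive one output high at a time and check the connector with a multimeter:

    from nmigen import Elaboratable, Module

    class PinProbe(Elaboratable):
        def elaborate(self, platform):
            m = Module()
            # "vga" and its subsignal names are assumptions, not from the log;
            # substitute the resource under test and hold one pin high at a time.
            pads = platform.request("vga", 0)
            m.d.comb += pads.hs.o.eq(1)
            return m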
<DaKnig> I am using this timing diagram (and the numbers from another place): http://www.vga-avr.narod.ru/vga_timing/vga_timing.gif
<DaKnig> in this the hsync and vsync rise/fall times do not coincide
<DaKnig> I saw other methods where they do
<DaKnig> should still work, I think.
emeb_mac has quit [Quit: Leaving.]
<DaKnig> I now get an image
<DaKnig> and a stable one.
<DaKnig> the marking on the VGA board was wrong
<DaKnig> I flipped hsync and vsync and it all works
<DaKnig> 🤦
<DaKnig> for reference - that's the Digilent VGA PMOD board
<DaKnig> they probably documented that somewhere online that I didn't see.
<DaKnig> ok the error was not with the board; it's with the nmigen-boards file
<DaKnig> I'll make a PR
<d1b2> <Darius> nice find
<_whitenotifier-3> [nmigen-boards] DaKnig opened pull request #107: fixed PMOD 1 (JB) connections 7-10 - https://git.io/JUTkl
<DaKnig> more soon?
<DaKnig> what if I didnt notice *all* them pins to fix
<DaKnig> maybe I'll be just as frustrated in a few mins when the color bits are inverted or whatever
<d1b2> <Darius> heh
<DaKnig> ok dont accept yet
<DaKnig> guess what I found
<DaKnig> :)
<_whitenotifier-3> [nmigen-boards] DaKnig commented on pull request #107: fixed PMOD 1 (JB) connections 7-10 - https://git.io/JUTkM
<_whitenotifier-3> [nmigen-boards] DaKnig closed pull request #107: fixed PMOD 1 (JB) connections 7-10 - https://git.io/JUTkl
nelgau has joined #nmigen
<DaKnig> can I somehow give the pmod port a name?
<DaKnig> on the board and in the manual they are called JA and JB
<DaKnig> would be nice to have this as a convenience instead of P0 and P1
<DaKnig> (it says that in the comments of the board file even; would be nice to have on the nmigen code level)
nelgau has quit [Ping timeout: 240 seconds]
lkcl has joined #nmigen
<DaKnig> I would really like to be 100% sure on that one; if anybody can check just one pin, to see that my change is sane and does not break stuff for no reason (and its not all in my head), I'd really thank them.
<DaKnig> maybe I just messed it up. but after triple checking and triple checking again I doubt that.
<DaKnig> but you never know.
<_whitenotifier-3> [nmigen/nmigen] whitequark pushed 1 commit to master [+0/-0/±2] https://github.com/nmigen/nmigen/compare/cb81618c28d9...00026c6e4a0d
<_whitenotifier-3> [nmigen/nmigen] whitequark 00026c6 - hdl.ast: avoid unnecessary sign padding in ArrayProxy.
<_whitenotifier-3> [nmigen] whitequark commented on pull request #486: Fix ArrayProxy shape for signed numbers - https://git.io/JUTIC
<_whitenotifier-3> [nmigen] whitequark closed pull request #486: Fix ArrayProxy shape for signed numbers - https://github.com/nmigen/nmigen/pull/486
<_whitenotifier-3> [nmigen] whitequark edited a comment on pull request #486: Fix ArrayProxy shape for signed numbers - https://git.io/JUTIC
<_whitenotifier-3> [nmigen/nmigen] github-actions[bot] pushed 1 commit to gh-pages [+0/-0/±13] https://git.io/JUTI8
<_whitenotifier-3> [nmigen/nmigen] whitequark 042d475 - Deploying to gh-pages from @ 00026c6e4a0d3f9c2eff054b1bd4d8d18342a2ed 🚀
<_whitenotifier-3> [nmigen] whitequark edited a comment on pull request #486: Fix ArrayProxy shape for signed numbers - https://git.io/JUTIC
<_whitenotifier-3> [nmigen-boards] DaKnig reopened pull request #107: fixed PMOD 1 (JB) connections 7-10 - https://git.io/JUTkl
<_whitenotifier-3> [nmigen-boards] DaKnig synchronize pull request #107: fixed PMOD 1 (JB) connections 7-10 - https://git.io/JUTkl
<_whitenotifier-3> [nmigen] whitequark commented on issue #418: Simulation of Verilog output doesn't match nMigen simulation - https://git.io/JUTI6
<_whitenotifier-3> [nmigen-boards] DaKnig commented on pull request #107: fixed PMOD 1 (JB) connections 7-10 - https://git.io/JUTIy
<_whitenotifier-3> [nmigen-boards] whitequark closed pull request #107: fixed PMOD 1 (JB) connections 7-10 - https://git.io/JUTkl
<_whitenotifier-3> [nmigen/nmigen-boards] whitequark pushed 1 commit to master [+0/-0/±1] https://git.io/JUTIQ
<_whitenotifier-3> [nmigen/nmigen-boards] DaKnig 68cbc9a - arty_z7: fix PMOD 1 (JB) pinout.
<DaKnig> how often do new nmigen-boards versions get released to PyPI as stable?
<DaKnig> (it doesn't change anything at all for me since I use a custom board file anyway)
<whitequark> the plan is to release every git commit, but that's not implemented yet
<whitequark> it's on my roadmap to do this soon
<Sarayan> push a release in the ci?
<whitequark> yup
<Sarayan> cool :-)
<_whitenotifier-3> [nmigen] pepijndevos commented on pull request #486: Fix ArrayProxy shape for signed numbers - https://git.io/JUTLL
<_whitenotifier-3> [nmigen] pepijndevos commented on commit 00026c6e4a0d3f9c2eff054b1bd4d8d18342a2ed - https://git.io/JUTL0
<moony> wondering how i'd best implement a bit count (counting the number of bits set in a signal). Anyone have any ideas?
<moony> trying to make it single-clock
<Sarayan> popcount?
<moony> mhm. Is there a built-in op for that or do I need to figure out how to write it myself
<lkcl> moony: popcount needs a tree.
<lkcl> https://git.libre-soc.org/?p=soc.git;a=blob;f=src/soc/fu/logical/popcount.py;hb=HEAD
<moony> or some LUTs. a LUT8 would work here as a FPGA-friendly substitute.
<moony> just don't see a way to build a LUT8 so the tree method will have to do
<lkcl> the formal proof is here: https://git.libre-soc.org/?p=soc.git;a=blob;f=src/soc/fu/logical/formal/proof_main_stage.py;hb=HEAD
<lkcl> we use a "naive, simple, easy-to-read, easy-to-understand" formal proof
<lkcl> which is blindingly obvious to verify (add each bit of the value, duh) and horribly inefficient
<lkcl> but we don't care: it's not going to end up as RTL
<whitequark> lkcl: sum(signal)
<whitequark> er sorry
<whitequark> moony: ^
<lkcl> whitequark: ah! nice trick!
<moony> thanks
<lkcl> whitequark: does it create an efficient hierarchical tree?
<whitequark> nope, it creates a maximally unbalanced one. abc then turns it into something much better
<moony> it probably just lets the op- eys
<whitequark> moony: tbh i'm not completely sure this will translate to the optimal structure
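For reference, a minimal sketch of the sum(signal) popcount (class and port names are illustrative):

    from nmigen import Elaboratable, Module, Signal

    class Popcount(Elaboratable):
        def __init__(self, width=8):
            self.i = Signal(width)
            self.o = Signal(range(width + 1))

        def elaborate(self, platform):
            m = Module()
            # Iterating a Value yields its individual bits; sum() chains them
            # into a (maximally unbalanced) adder tree, which abc then
            # rebalances into something much better.
            m.d.comb += self.o.eq(sum(self.i))
            return m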
<lkcl> there was a reason why the microwatt team (IBM research) did a tree. they left comments in the code.
<moony> meh
<moony> worst case
<lkcl> i translated the algorithm to nmigen
<whitequark> but it's a good first attempt
<moony> i go back to fix it later
<moony> :p
<moony> in my case, as this is a purely FPGA design, I could use a LUT8 for good efficiency later
<lkcl> :)
<whitequark> this is actually more tricky on FPGAs
<lkcl> one line of code _is_ compelling :)
<whitequark> you don't necessarily want to build a tree of adders
<moony> LUT8 is the cheat way, just a big lookup table for the output from my 8 bit input
<Sarayan> what's your target fpga? Can it really do lut8?
<moony> Sarayan: Can't, but can construct it from LUT4s
<moony> (ECP5)
<moony> still better than the tree afaict
<moony> note i'm still new to this, so this is an assumption as to its efficiency.
<whitequark> moony: if you write something that translates to a 8-input 1-output function, the LUT mapper will stuff it into a LUT8
<whitequark> in general
<moony> whitequark: hmm, good to know
<moony> as I can simplify this to an 8-in-1-out
<whitequark> there are some subtleties, e.g. I think Yosys abc did not have cost functions for wide LUTs set up quite right
<moony> will tweak later
<whitequark> another subtlety is that if you write a tree of *adders*, and the synth script turns those into carry chain primitives, the LUT mapper might not be able to look through that
<moony> oh, correction, i can simplify this to a LUT9. Which isn't quite so simple
<moony> and probably wouldn't get built
<whitequark> you can't instantiate a LUT9, but you can make a lookup table with an Array or Switch
<whitequark> which will translate to something hopefully optimal
<moony> will just have to see
<_whitenotifier-3> [nmigen] whitequark commented on pull request #486: Fix ArrayProxy shape for signed numbers - https://git.io/JUTtP
<DaKnig> moony: re: pop count, maybe just `sum(Signal())`
<moony> whitequark: Wait, how would an Array even work here? I can index into one with a signal??
<DaKnig> ah crap somebody posted that already. I should read all comments before saying anything.
<moony> oh
<moony> right, Array != List
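A small sketch of the Array-as-lookup-table approach (the popcount contents are only an example):

    from nmigen import Array, Const, Module, Signal

    m = Module()
    inp = Signal(8)
    out = Signal(4)

    # Indexing an Array with a Signal is legal; it elaborates to a mux tree
    # that the LUT mapper can then pack. Here: an 8-bit popcount lookup table.
    table = Array(Const(bin(i).count("1"), 4) for i in range(256))
    m.d.comb += out.eq(table[inp])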
<DaKnig> lkcl: does it make any difference at all?
<DaKnig> I mean, sure, in theory the tree should be better because you are doing the optimizer's job for it...
<DaKnig> wouldnt it notice that the job could be done in a tree-like shape with the data dependency tree?
<DaKnig> moony: for popcount of 8 bits, bit0 = reduce(xor,signal), bit3 = reduce(and,signal)
<DaKnig> or something like that :)
<DaKnig> whitequark: is there a collection of common `Resource`s describing pluggable add-on boards?
<DaKnig> would it make sense to have such a thing? I think it certainly does, if some pmod board is common enough to have it in nmigen-boards or some similar place
<whitequark> currently there isn't
awe00 has joined #nmigen
<_whitenotifier-3> [nmigen] whitequark edited a comment on issue #418: Simulation of Verilog output doesn't match nMigen simulation - https://git.io/JUTI6
<moony> wish there was some way to visualize the output of the thing i'm working on
<moony> as in, a display
<_whitenotifier-3> [nmigen] whitequark commented on issue #479: Add `proc -nomux` to Yosys and migrate to it - https://git.io/JUTqh
<_whitenotifier-3> [nmigen] whitequark closed issue #489: nMigen doesn't like clock domains with underscores - https://git.io/JUUDq
<whitequark> moony: you could make a GUI with e.g. pyqt, and drive it from a simulator process
sorear has quit [Read error: Connection reset by peer]
sorear has joined #nmigen
_florent_ has quit [Ping timeout: 246 seconds]
ianloic_ has quit [Ping timeout: 246 seconds]
ianloic_ has joined #nmigen
_florent_ has joined #nmigen
jeanthom has joined #nmigen
<_whitenotifier-3> [nmigen] whitequark commented on issue #418: Simulation of Verilog output doesn't match nMigen simulation - https://git.io/JUTYh
<_whitenotifier-3> [nmigen] whitequark edited a comment on issue #418: Simulation of Verilog output doesn't match nMigen simulation - https://git.io/JUTYh
<_whitenotifier-3> [nmigen] whitequark commented on issue #418: Simulation of Verilog output doesn't match nMigen simulation - https://git.io/JUTOk
Asu has joined #nmigen
<_whitenotifier-3> [nmigen/nmigen] whitequark pushed 1 commit to master [+0/-0/±1] https://git.io/JUT3I
<_whitenotifier-3> [nmigen/nmigen] whitequark 38b75ba - back.cxxrtl: actualize Yosys version requirement.
<_whitenotifier-3> [nmigen/nmigen] github-actions[bot] pushed 1 commit to gh-pages [+0/-0/±13] https://git.io/JUT3Y
<_whitenotifier-3> [nmigen/nmigen] whitequark 6ce60f8 - Deploying to gh-pages from @ 38b75ba4bce2bf20e970f7d6027965a13dd0065c 🚀
nelgau has joined #nmigen
<_whitenotifier-3> [nmigen] cr1901 synchronize pull request #461: SSH Client Support via Paramiko - https://git.io/JJarq
nelgau has quit [Ping timeout: 256 seconds]
<_whitenotifier-3> [nmigen] cr1901 reviewed pull request #461 commit - https://git.io/JUTsu
<_whitenotifier-3> [nmigen/nmigen] whitequark pushed 1 commit to xilinx-bufg [+0/-0/±2] https://git.io/JUTZm
<_whitenotifier-3> [nmigen/nmigen] whitequark 438edf4 - vendor.xilinx_{7series,ultrascale}: set BUFG* SIM_DEVICE as appropriate.
<_whitenotifier-3> [nmigen] whitequark opened pull request #490: vendor.xilinx_{7series,ultrascale}: set BUFG* SIM_DEVICE as appropriate - https://git.io/JUTZn
<_whitenotifier-3> [nmigen] whitequark commented on issue #438: wrong type of buffer primitive used in series 7 - https://git.io/JUTZc
nelgau has joined #nmigen
nelgau has quit [Ping timeout: 265 seconds]
nelgau has joined #nmigen
nelgau has quit [Ping timeout: 246 seconds]
awe00 has quit [Ping timeout: 258 seconds]
awe00 has joined #nmigen
<DaKnig> moony: what are you trying to do?
<DaKnig> whitequark: why is defining a case after default a problem?
<DaKnig> priority?
<whitequark> yes, it'll be unreachable
<DaKnig> why is routing that hard, even for small designs/
<DaKnig> ?
<DaKnig> small and slow designs
<DaKnig> I guess that question is more suited for ##fpga...
nelgau has joined #nmigen
nelgau has quit [Ping timeout: 264 seconds]
<lkcl> DaKnig: it's... well, it's just a convention in the design of switch/case statement parsers
<lkcl> there's some discussion i recall, about the types of rules/conventions to follow. "default is the last thing" will be in all of them.
<whitequark> lkcl: it's not just a convention, and many other languages with a switch statement do *not* do this
<whitequark> in c and c++ the order of the cases does not matter insofar as you are not using fallthrough
<whitequark> and it's actually somewhat common to put default: first when it does a `break` or an `abort()` or something like that
<whitequark> in general, there are two kinds of designs for multiple-arm branches
<whitequark> the first one makes each arm guarded by a comparison (semantically, it might well be lowered to a lookup table)
<whitequark> that's c, c++, java, their general school of design
<whitequark> in these languages, the switch statement can only be controlled by a primitive expression. int, enum, char, that kind of thing. because, given multiple different choices, a value of a primitive type will only be equal to one of them, the order of comparisons doesn't matter. indeed, this is what makes it possible to transform the switch into a lookup table
<whitequark> the second one makes each arm guarded by a pattern
<whitequark> that's haskell, ocaml, *sorta* ruby, lisps, their general school of design
<whitequark> in these languages, being able to branch on just ints and int equivalents is considered insufficient, so the match statement is controlled by an expression that can have a complex user-defined type
<whitequark> here, the order of branches *does* matter, for any (or all) of the following reasons
<whitequark> - maybe your language includes patterns for user-defined types in its syntax. if so, the patterns certainly let you specify wildcards. once you have wildcards, you also have the ability to write two patterns that match the same value, in which case you have to disambiguate somehow.
<whitequark> ocaml and lisps do this
<whitequark> - maybe your language defines the match statement to perform comparisons with a user-defined function. that, also, allows you to write two patterns that match the same value, so same problem
<whitequark> ruby does this. python does this too, if you squint--it doesn't have a multiple-arm branch statement, but it does have if/elif chains, and ruby's case statement is just a glorified if/elif chain
<whitequark> - maybe it's both, because your language has a match statement that lets you guard the branches with arbitrary code
<whitequark> anyway, with the second kind of design, you usually don't add a default branch to this statement *explicitly*. instead, you observe that you can put a branch that's always true (a pattern that's just a wildcard, or a comparison that always succeeds) at the end, and it behaves exactly like a default branch should
<whitequark> verilog, of course, has both: case and casex/casez
<whitequark> nmigen has the second kind of design
<DaKnig> it's sometimes important to encode which kind you want to use if it's a fast lang like C - I guess at the level at which nmigen operates it does not matter, since the backend would lower it down correctly to what you mean
<whitequark> DaKnig: unfortunately, that's not *quite* the case
<whitequark> well
<DaKnig> well Im just not used to this school of thought; Im used to C/++ and such
<whitequark> right so if you don't use wildcards in nmigen switches, you'll get something equivalent to the first case
<whitequark> you don't have to specifically say "i want a comparison-like case statement", the toolchain is not that dumb
<DaKnig> that is what I meant
<whitequark> right, so in that case, when you write `with m.Default():`, it *still* becomes a wildcard internally, so it *has* to be last
<whitequark> but in every other respect it's semantically identical to how a C or C++ default: works
<DaKnig> would be interesting to have Default's behavior depend on the controlling expression type
<whitequark> this can be contrasted to Verilog, where the default statement is special-cased so that, even though the order of cases matters, the order of default wrt cases does not
<whitequark> which... is actually something we could do in nmigen if we wanted
<whitequark> maybe it'll reduce confusion without the need for a lint
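For reference, a minimal sketch of how this looks in an nMigen Switch (values are arbitrary):

    from nmigen import Module, Signal

    m = Module()
    sel = Signal(2)
    out = Signal(8)

    # Default() is a wildcard pattern internally, so it must come last;
    # a Case() written after it would be unreachable.
    with m.Switch(sel):
        with m.Case(0):
            m.d.comb += out.eq(0x10)
        with m.Case(1, 2):
            m.d.comb += out.eq(0x20)
        with m.Default():
            m.d.comb += out.eq(0xFF)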
jeanthom has quit [Ping timeout: 240 seconds]
<whitequark> DaKnig: hrm, but nmigen doesn't have types
<DaKnig> it has Records, Consts, Signals. when I am thinking about making another type that would fit well with those, I always hit the wall where nmigen does not really like having different types- it's all Values (if I understand correctly) and interchangeable
<whitequark> correct
<DaKnig> I wanted to make an Integer class ( and I probably will later) that takes the min and max values to reduce the complexity of hardware produced
<DaKnig> if you have Integer(range(10)) and you test for >8 then it should only test bits 3 and 0
<whitequark> Records, Consts and Signals aren't "types" so much as they are "syntax elements". they are comparable to Python tuples, literals, and variables
jeanthom has joined #nmigen
<DaKnig> this class could have benefitted from having a custom Default of its own- case 0... case 2... default - treats all values that are outside the defined range as dont-care
<DaKnig> something like that
pepijndevos has quit [Ping timeout: 260 seconds]
<whitequark> right, I see what you mean
<whitequark> there are a few issues with this idea
<DaKnig> I might ask some questions here about that later
<DaKnig> about how to convert that type to Signal for .eq purposes
<DaKnig> and such
<DaKnig> I dont see how that could be achieved
<DaKnig> in a typed lang, you could define a "conversion operator" or a "conversion constructor" or w/e
<DaKnig> but not in python :)
<whitequark> as in, you want to do Integer().eq(...) ?
<DaKnig> Signal(..).eq(Integer())
<whitequark> oh!
<DaKnig> the other way around is simple
<whitequark> ValueCastable will let you do that
<DaKnig> just .eq the internal one
pepijndevos has joined #nmigen
<DaKnig> ah I see
<DaKnig> good design :) I was gonna suggest that if it wasnt implemented already.
<whitequark> it ended up being surprisingly important
<DaKnig> well of course! it simplifies a lot of code at the very least
<whitequark> people are enthusiastic about extending nMigen's set of operations, and I am enthusiastic about keeping the core language very small
<whitequark> ValueCastable is an excellent compromise
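A sketch of what that could look like; the Integer class here is hypothetical, and only the ValueCastable protocol is the part being discussed:

    from nmigen import Signal
    from nmigen.hdl.ast import ValueCastable

    class Integer(ValueCastable):
        """Hypothetical bounded-integer wrapper along the lines DaKnig describes."""
        def __init__(self, bounds):
            self._sig = Signal(range(bounds.start, bounds.stop))

        @ValueCastable.lowermethod
        def as_value(self):
            # Anything value-castable can be used where a Value is expected,
            # so Signal(...).eq(Integer(...)) just works.
            return self._sig

With this, `Signal(4).eq(Integer(range(10)))` lowers through as_value(), which is the direction DaKnig was asking about.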
<whitequark> ha, i just made pysim ~10% faster
<whitequark> you'll *never* guess how
<DaKnig> do tell
Yehowshua has joined #nmigen
<Yehowshua> Did you upgrade your computer?
<DaKnig> she said you'll never guess how, so no
<DaKnig> :)
Yehowshua has quit [Remote host closed the connection]
jeanthom has quit [Remote host closed the connection]
jeanthom has joined #nmigen
Yehowshua has joined #nmigen
<Yehowshua> Not only that, she also embossed never
<_whitenotifier-3> [nmigen/nmigen] whitequark pushed 1 commit to xilinx-bufg [+0/-0/±1] https://git.io/JUTB5
<_whitenotifier-3> [nmigen/nmigen] whitequark d8845e2 - sim._pyrtl: optimize uses of reflexive operators.
<_whitenotifier-3> [nmigen] whitequark synchronize pull request #490: vendor.xilinx_{7series,ultrascale}: set BUFG* SIM_DEVICE as appropriate - https://git.io/JUTZn
<whitequark> argh, wrong branch
<_whitenotifier-3> [nmigen/nmigen] whitequark pushed 1 commit to master [+0/-0/±1] https://git.io/JUTBd
<_whitenotifier-3> [nmigen/nmigen] whitequark 8c6c364 - sim._pyrtl: optimize uses of reflexive operators.
<_whitenotifier-3> [nmigen/nmigen] whitequark created branch xilinx-bufg https://git.io/JUTBF
<_whitenotifier-3> [nmigen] whitequark synchronize pull request #490: vendor.xilinx_{7series,ultrascale}: set BUFG* SIM_DEVICE as appropriate - https://git.io/JUTZn
<_whitenotifier-3> [nmigen/nmigen] github-actions[bot] pushed 1 commit to gh-pages [+0/-0/±13] https://git.io/JUTBA
<_whitenotifier-3> [nmigen/nmigen] whitequark 8ad5286 - Deploying to gh-pages from @ 8c6c3643cd1c504d1eff08e8293af032c6330ea9 🚀
<DaKnig> wait, just changing the order made the difference?
<whitequark> yep
<DaKnig> LOL
<DaKnig> how
<DaKnig> what
<whitequark> remember how i talked yesterday about cpython having an optimizer?
<DaKnig> yes
<DaKnig> it is dumb?
<whitequark> it is very dumb.
<whitequark> but
<whitequark> it still optimizes :D
<DaKnig> when it feels like
<DaKnig> optimizing
<DaKnig> it seems
<DaKnig> :)
<DaKnig> but how did that help here
<whitequark> okay so
<whitequark> if you do `x + 0` do you know what __add__ method will get called? no. it's something on x, and x can be whatever
<whitequark> if you do `0 + x` do you know that? yep, it's int.__add__
<DaKnig> ah tail call optimization
<DaKnig> I see
<whitequark> nope
<DaKnig> no?
<whitequark> nop
<whitequark> if you just benchmark `x + 0` vs `0 + x` you'll see no difference
<whitequark> but! if x itself includes constants then the optimizer might be able to fold those
<whitequark> for example, 0 + 0 + x is the same as 0 + x
<whitequark> pysim emits a bunch of code that's fairly redundant
<whitequark> things like this: test_0 = 1 & (((1 & (1 & (slots[637].curr >> 15))) << 0))
<DaKnig> but doesn't pysim try to optimize the output code on its own?
<DaKnig> "when left or right operand is known at compile time, and it is (a special value) then (optimize instruction out)"
<whitequark> it doesn't
<whitequark> i mean, maybe it should
<DaKnig> ok
<whitequark> but i just got a 10% perf increase with 0% pysim complexity increase
<whitequark> this is literally infinite benefit/cost ratio :p
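A quick way to see the effect described above, using plain CPython (nothing nMigen-specific):

    import dis

    # With constants on the left, CPython's peephole optimizer can fold them:
    # "0 + 0 + x" parses as "(0 + 0) + x", and 0 + 0 folds to 0. With the
    # constants on the right, "(x + 0) + 0" cannot be folded, because x.__add__
    # could be anything.
    dis.dis(compile("0 + 0 + x", "<demo>", "eval"))
    dis.dis(compile("x + 0 + 0", "<demo>", "eval"))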
<DaKnig> if cxxsim output this, the very smart C/C++ compiler would optimize all of it right away (because it can infer types and whatnot)
<DaKnig> yeah very nice!
<whitequark> oh, that's how cxxrtl works
<whitequark> cxxrtl itself is a template monstrosity that *also* produces naive code on its own
<whitequark> without optimizations it is almost as slow as pysim!
<DaKnig> whats cxxrtl?
<whitequark> the core of cxxsim
<whitequark> c++ library that lives in yosys and implements the actual simulation
<DaKnig> well, when did you compile code without any optimizations last time? :)
<Yehowshua> whitequark, when was CXXRTL introduced to Yosys?
<whitequark> DaKnig: probably the last time i needed to use a debugger, tbh
<whitequark> the reason you might be tempted to turn off optimizations in cxxsim is that compiling cxxsim source files can take ... well, i've seen it churn for 15 minutes
<whitequark> though that was a pathological case, and i eventually redesigned some parts of cxxrtl so that it wouldn't do that
<whitequark> Yehowshua: it was initially merged on 2020-04-10
<Yehowshua> hm...
<Yehowshua> Was it originally created with the idea it would be used with nMigen?
<Yehowshua> I remember on twitter you usually mentioned it hinting at nMigen
<whitequark> yes
<Yehowshua> But from looking at the library, its general enough to be used in a verilator like fashion
<whitequark> also yes
<DaKnig> whitequark: use -Og, not -O0
<DaKnig> optimizes the code a bit but still debuggable
<DaKnig> yeah those compile times sound like compiling C++ :)
Yehowshua has quit [Remote host closed the connection]
<whitequark> hm, yeah, you're right about -Og. i think i mostly learned c++ tooling before that switch came into existence
<Lofty> I've found -Og to still be tricky to debug
jeanthom has quit [Ping timeout: 258 seconds]
<Lofty> But I'm one of those "println! as debugging tool" type people
jeanthom has joined #nmigen
<DaKnig> looks like you are also one of those Rust people :)
<moony> yea, println! and dbg! make up my Rust debug flow as well, as I basically never need gdb
<moony> wasn't dbg! good enough someone backported it to C++?
<Lofty> I like how C++ is a backport here
<DaKnig> aren't those two basically like the C++ `std::cout << ();` and `std::cerr << ();`? :)
<DaKnig> print debugging is pretty much the same everywhere
<Lofty> Well, dbg!() has a lot of syntax sugar in it
<DaKnig> about string formatting, Rust didn't do much new; it's mostly the same as in python
<DaKnig> > syntax sugar < dont we all love that
<DaKnig> :)
<whitequark> the new thing in rust is that it's a systems language with a decent string formatting facility
<agg> DaKnig: dbg! is quite different to std::cerr, for one thing it returns the value of whatever you pass in, so you can wrap expressions in it
<agg> or parts of expressions
<agg> or whole blocks of code, even
<DaKnig> how can you not love the C string formatting facilities? just allocate a big buffer, render to it via snprintf, check for errors and check you got the size right :)
<DaKnig> agg: nice
<moony> dbg! also handles printing line number, etc, so it's a good one line debugging utility
<_whitenotifier-3> [nmigen] rroohhh commented on issue #438: wrong type of buffer primitive used in series 7 - https://git.io/JUTbu
<_whitenotifier-3> [nmigen] rroohhh edited a comment on issue #438: wrong type of buffer primitive used in series 7 - https://git.io/JUTbu
<whitequark> then I can finally merge it
<DaKnig> how do I do that?
<DaKnig> I am very sorry; I forgot how to do this kinda stuff completely.
<whitequark> I think you can run `python3 -m nmigen_boards.<any ultrascale board you have support for>`
<whitequark> and check if it has the complaint in logs
<vup> whitequark: so wont adding SIM_DEVICE break vivado 2017.4 again?
<cr1901_modern> vup: Well, I installed vivado 2020, though I haven't deleted 2017.4 yet
<cr1901_modern> Would it be fair to get an informal "poll" to cap the Vivado version?
<_whitenotifier-3> [nmigen] whitequark commented on pull request #463: Add initial support for Symbiflow toolchain for Xilinx 7-series - https://git.io/JUThJ
<vup> well want to try it with vivado 2017.4? https://paste.niemo.de/raw/uhicetoyer.py
<vup> in theory one also could detect the vivado version from tcl and create version specific workarounds using that
<cr1901_modern> no
<vup> but capping the vivado version is fine for me
<cr1901_modern> vup: That's not a "no, that's a terrible idea". But rather, "no, I know that idea won't fly" :P
<vup> why?
<vup> Ah you mean won't be something accepted into nmigen?
<vup> I see that
<cr1901_modern> yes
<_whitenotifier-3> [nmigen/nmigen] whitequark pushed 1 commit to master [+0/-0/±1] https://git.io/JUThZ
<_whitenotifier-3> [nmigen/nmigen] whitequark abaa909 - vendor.xilinx_7series: unbreak.
<_whitenotifier-3> [nmigen] whitequark commented on pull request #463: Add initial support for Symbiflow toolchain for Xilinx 7-series - https://git.io/JUThn
<whitequark> cr1901_modern: I actually don't see any major problems with it
<_whitenotifier-3> [nmigen/nmigen] github-actions[bot] pushed 1 commit to gh-pages [+0/-0/±13] https://git.io/JUThW
<_whitenotifier-3> [nmigen/nmigen] whitequark 0d776f2 - Deploying to gh-pages from @ abaa9091f45d1c5801ec7162322bbc04706e3b43 🚀
<cr1901_modern> ... I'm a bit surprised, but if you say so I retract my "no" :)
<whitequark> if the Tcl file is editing the netlist, the workaround is completely self-contained and there's no staging violation (i.e. you don't need to run Vivado while elaborating the nMigen design)
<whitequark> I mean, I'm doing the exact same thing with Yosys in back.verilog
<whitequark> arguably I'm doing a worse thing tbh
<whitequark> so I can't claim opposition to vup's idea for ideological reasons :p
<cr1901_modern> Well I have both Vivado 2020 and 2017 installed, so I'm in a position to test such a patch
<cr1901_modern> Can't remember why I didn't remove it, but I didn't
<cr1901_modern> oh wait... yes I did remove it ._. wrong dir
<moony> out of curiosity, as you all know more than me: What're the big advantages of edge-driven signals over [whatever the name is for signals that are driven when high]
<moony> they seem convenient to me in a small handful of places
<whitequark> are you asking about the comparative advantages of flip-flops and latches?
<moony> this is just proof I don't know enough terminology to ask my question
<moony> lol
<moony> one sec
<lkcl> if you have a button connected to the hardware, you only want an interrupt to be driven when it's pressed, rather than driven on every single cycle
<_whitenotifier-3> [nmigen/nmigen] whitequark pushed 1 commit to xilinx-bufg [+0/-0/±2] https://git.io/JUTho
<_whitenotifier-3> [nmigen/nmigen] whitequark 200af07 - vendor.xilinx_{7series,ultrascale}: set BUFG* SIM_DEVICE as appropriate.
<_whitenotifier-3> [nmigen] whitequark synchronize pull request #490: vendor.xilinx_{7series,ultrascale}: set BUFG* SIM_DEVICE as appropriate - https://git.io/JUTZn
<whitequark> you *really* shouldn't connect a button to a FF clock input
<lkcl> that's a generic answer (not related to nmigen)
<DaKnig> whitequark: I dont have any ultrascale board
<whitequark> hmm
<whitequark> oh sorry
<whitequark> I misread your bug report
<moony> one place I find edge-driven signals convenient is with things like my CPU's internal "advance fetch to next instruction" signal, which fires frequently, and it feels more explicit to have it flip value every time it wants to advance than to hold the advance signal high
<whitequark> what board are you using? can you test a blinky on that?
<moony> i was curious if that pattern had any advantage to it. It triggers on both edges, not just positive edge
<lkcl> ah ok. so this is an internal design thing.
<lkcl> so let's say you want to communicate between a (separate) fetch unit and an execute unit
<moony> not the most nmigen related question but I don't know of a better place to ask (yet) :p
<lkcl> the execute unit has a "busy" signal which is held HI until it's finished
<DaKnig> whitequark: I have an arty z7.
<DaKnig> again, sorry, how should I pull this change for it and check that it indeed works?
<lkcl> in the fetch/issue unit you would wait until that signal went LOW
<DaKnig> I have a custom board file because I use Vivado to flash the bitstream
<DaKnig> platform file* I guess
<lkcl> moony: sound reasonable so far? there, you would not use edge-triggering.
<DaKnig> whatever that's called
<whitequark> DaKnig: nevermind, vup was right: this does in fact break Vivado 2017.4
<whitequark> board file
<moony> mhm. but in, say, a full pipeline, the fetch unit would need to grab a new instruction every cycle to give decode, and decode to give to execute, etc
<lkcl> ahh okay, so here you would have your pipeline send a *pulse* (raising a "done" flag for one cycle when there's a completed result)
<moony> yea, in this case that "pulse" is the advance signal, which i've made edge-driven
<lkcl> here you *would* consider "edge-triggering" because you're looking for that rise in the signal to tell you "it's not busy any more"
<lkcl> but... butbutbut
<lkcl> it works perfectly fine but...
<moony> really more just curious if there's anything wrong with the edge-driven design beyond the fact it needs an extra wire off a LUT
<lkcl> it leaves a hole in your execution. you *must* have 2 cycles. one where the "done" is LO, one where "done" is HI
<moony> this triggers on both edges
<lkcl> which means that the pipeline can never be full. it can only ever be 50% duty
<lkcl> ok... that's unusual. perfectly valid...
* lkcl *thinks*...
<lkcl> so one's rising edge, the other's falling edge
<lkcl> and you're using the *change* in the "busy" to indicate whether the pipeline has produced a result, is that correct?
<moony> mhm
<lkcl> cool. that was the protocol used by my professor at imperial college, for the high-speed Transputer Network ASIC. *changes* indicated data.
<lkcl> sounds like it would "work" perfectly well: i can't speak for its efficiency in terms of FPGA resources
<moony> I drew out the logic, it's definitely not bad by any means
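A minimal sketch of the toggle-as-event convention being discussed (names are illustrative): the producer flips one bit per event, and the consumer recovers a one-cycle strobe from either edge:

    from nmigen import Elaboratable, Module, Signal

    class ToggleEvent(Elaboratable):
        def __init__(self):
            self.advance = Signal()   # toggled by the producer on each event
            self.strobe  = Signal()   # one-cycle pulse seen by the consumer

        def elaborate(self, platform):
            m = Module()
            last = Signal()
            m.d.sync += last.eq(self.advance)
            # XOR against a registered copy detects both rising and falling
            # transitions, turning the toggle back into a conventional strobe.
            m.d.comb += self.strobe.eq(self.advance ^ last)
            return m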
<lkcl> hypothetically you _could_ use this principle for an entirely clock-less design. you'd need to use it throughout the entire design.
<Lofty> Good luck timing it
<moony> ha, yea, timing hell
<whitequark> moony: I would be surprised if this makes any meaningful difference wrt FPGA resources
<moony> whitequark: it doesn't.
<lkcl> Lofty: my professor's serial protocol (this is 1990) was able to do asynchronous (clock-less) 140 mbits/sec over twisted pair lines. 1 pair was for transferring "0", the other pair for "1". you only changed one at a time. it worked really well.
<Lofty> That's not what I was talking about, but sure
<moony> I was thinking about experimenting a bit with clock domains in the future when I trust myself more with an FPGA
<moony> see if I can do smth neat.
<DaKnig> I forgot the name of that weird logic - maybe somebody here knows what I am talking about- where the basic block is a "gate" whose output rises only if more than n of its inputs are HI (n can change in design time as well as the number of inputs) and falls back to 0 when all its inputs are back at 0
<Lofty> FPGAs really like synchronous designs
<moony> Lofty: Yea, I noticed.
<Lofty> DaKnig: that sounds like a majority gate, but the latter bit is weird
<whitequark> is that serial protocol actually asynchronous? that sounds similar to data/strobe encoding
<DaKnig> that's a weird type of logic that allows you to make completely clock-less designs and was proven to be less power hungry, but takes more silicon and does not work well on FPGA
<mwk> ... heh, I seem to recall something like that
<DaKnig> Lofty: its not a gate- that thing has state
<DaKnig> it keeps its high until all inputs get to 0
<lkcl> whitequark: yes it was. the diff-pairs allowed long transmission lines between cabinets. it was the ALICE project at Imperial College
<mwk> I still have a paper somewhere on async logic in fpgas that my msc advisor gave me to read, like, 8 years ago
<mwk> I think it described them
<Lofty> You can make a latch out of a multiplexer with its output connected to an input. Does that mean multiplexers aren't gates?
emeb has joined #nmigen
<lkcl> whitequark: there was good enough timing even at 140 mbits/sec (1990 era) to use the *simultaneous* transition of both diff-pairs to indicate a "control / break" state.
<Lofty> Majority logic does interest me a lot though
<DaKnig> the idea is you chain many stages of this; it operates like this: when all outputs are valid and it gets the pulse to reset, it sends a pulse to reset the inputs, and when they all go low the output goes back down, and it cycles like that
<DaKnig> instead of having 0/1 boolean stuff, you have data/no-data, so a 0/1 value takes one wire for 0 and one for 1
<DaKnig> output is valid when one of those goes up
<lkcl> mwk: is it the one where genetic algorithms came up with "answers", and it turns out that the algorithm created completely unused (disconnected) cyclic blocks, which, if removed, would make the answer wrong? :)
<mwk> no
<mwk> it was about actually designing async logic
<lkcl> turns out this genetic algorithm created designs that used internal capacitance and inductive feedback *inside* the FPGA to produce the answer!
<mwk> yes yes, everyone's read that paper already
<lkcl> ah didn't know that. i loved it. really funny.
* cr1901_modern didn't read it
<DaKnig> yeah it's async logic , but I cant find any info now online
<DaKnig> I found many links some months ago
<cr1901_modern> I knew about it and its conclusions. But I didn't read it :P.
<lkcl> mwk: if you have (or recall) a link to that async paper online anywhere, some time, i'd be interested to see it
<whitequark> lkcl: anything about that project remains on the web?
<mwk> ... I have it as an actual paper
<whitequark> well, less remains and more was it ever put there
<lkcl> the ALICE Transputer project? mmm... sort-of. the ASIC was a round-robin crossbar, a joint project between PLESSEY, Imperial College and Manchester University.
<mwk> that would be it, I think
<agg> whitequark: i'm fairly sure the transputer serial links are what turned into IEEE1355 and Spacewire etc which is the data/strobe encoding
<mwk> ... ugh why is this pdf so ugly
<agg> it's "self clocking" but not really "asynchronous" I guess
<whitequark> agg: yeah that's the kinda feel i got from it too
<whitequark> but wanted to confirm
<lkcl> i never looked for the papers, though: i was just lucky enough that our lecturer was the person who designed it. 30 years now i can't remember his name
<agg> i fondly remember a vna i used to look after which had transputers inside and the bios would go through "booting transputers..." at startup
<moony> Transputer is one of those things where i'd love to play with it, but the parts are endlessly expensive
<mwk> ah, miss Alexandra has a nicer pdf
<smkz> booting cisputers,,,
<DaKnig> mwk: not this.
<moony> I like the concept, though I can see its flaws too :p
<lkcl> whitequark: just looking at https://en.wikipedia.org/wiki/IEEE_1355 - mmm the ALICE network was with the T805 (before the T9000) so it *might* have had a chance to work its way into the T9000.
<_whitenotifier-3> [nmigen] jeanthom commented on issue #418: Simulation of Verilog output doesn't match nMigen simulation - https://git.io/JUksk
<whitequark> lkcl: by any chance was the link called "DS-Link"?
<_whitenotifier-3> [nmigen] jeanthom edited a comment on issue #418: Simulation of Verilog output doesn't match nMigen simulation - https://git.io/JUksk
* lkcl checking
<_whitenotifier-3> [nmigen] whitequark commented on pull request #490: vendor.xilinx_{7series,ultrascale}: set BUFG* SIM_DEVICE as appropriate - https://git.io/JUkZ4
<_whitenotifier-3> [nmigen/nmigen] whitequark pushed 1 commit to master [+0/-0/±2] https://git.io/JUkZB
<_whitenotifier-3> [nmigen/nmigen] whitequark 6d98525 - vendor.xilinx_{7series,ultrascale}: set BUFG* SIM_DEVICE as appropriate.
<_whitenotifier-3> [nmigen/nmigen] whitequark deleted branch xilinx-bufg
<_whitenotifier-3> [nmigen] whitequark closed pull request #490: vendor.xilinx_{7series,ultrascale}: set BUFG* SIM_DEVICE as appropriate - https://git.io/JUTZn
<_whitenotifier-3> [nmigen] whitequark closed issue #438: wrong type of buffer primitive used in series 7 - https://git.io/JJcU4
<_whitenotifier-3> [nmigen] whitequark deleted branch xilinx-bufg - https://git.io/JJJOy
<_whitenotifier-3> [nmigen/nmigen] github-actions[bot] pushed 1 commit to gh-pages [+0/-0/±13] https://git.io/JUkZR
<_whitenotifier-3> [nmigen/nmigen] whitequark 86463fc - Deploying to gh-pages from @ 6d9852506fb2880d1cca2bc2fec44c408eebb99f 🚀
<lkcl> whitequark: having difficulty finding things. will try another time.
<whitequark> sure, no problem
nelgau has joined #nmigen
nelgau has quit [Ping timeout: 246 seconds]
hitomi2507 has quit [Quit: Nettalk6 - www.ntalk.de]
nelgau has joined #nmigen
<agg> i'm using an ecp5 and input xdr=2 (so IDDRX1F), and my clock is connected to a PCLK input and is centered wrt the data, but from what I observe it's as though the input registers are capturing a half-cycle too late
<agg> could it be that internal clock routing is adding a couple ns delay before it gets to the io cell ff?
nelgau has quit [Ping timeout: 240 seconds]
<agg> the ecp5 tech note suggests that for IDDR x1 I shouldn't need to do any phase adjust because it's already centered
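For context, a sketch of requesting such a DDR input in nMigen; the "rgmii_rx" resource, its subsignal names, and the "rx" domain are assumptions, not taken from the log:

    from nmigen import ClockDomain, ClockSignal, Elaboratable, Module, Signal

    class RxCapture(Elaboratable):
        def elaborate(self, platform):
            m = Module()
            pads = platform.request("rgmii_rx", 0, xdr={"clk": 0, "data": 2})
            m.domains += ClockDomain("rx")
            m.d.comb += [
                ClockSignal("rx").eq(pads.clk.i),
                # With xdr=2 the ECP5 backend instantiates IDDRX1F; i0/i1 are
                # the two values captured per cycle of i_clk.
                pads.data.i_clk.eq(ClockSignal("rx")),
            ]
            lo, hi = Signal(4), Signal(4)
            m.d.rx += [lo.eq(pads.data.i0), hi.eq(pads.data.i1)]
            return m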
<daveshah> I have a suspicion that nextpnr-ecp5 isn't always routing clock inputs properly
<daveshah> If you put your lpf and a minimal design somewhere, I will have a look tomorrow
<daveshah> Do you have Diamond installed? If you can test and Diamond works, that would be good to know
<agg> I do have it installed though I've not yet used it to actually generate a bitstream, I'll give that a go now and otherwise put together a minimal design, thanks
<agg> is there any way to inspect what the routing delay might be?
nelgau has joined #nmigen
<daveshah> Not easily, because I don't think global clock delays are modelled properly
nelgau has quit [Ping timeout: 258 seconds]
<agg> haha urgh, if I set LD_LIBRARY_PATH so that pnmainc can find libtcl8.5, yosys tries to use diamond's libstdc++ which is too old a version
<whitequark> agg: yes
<whitequark> this is why NMIGEN_ENV_Diamond exists
<agg> yea, just setting that up now
<agg> I have to write a script to give it?
<whitequark> export NMIGEN_ENV_Diamond=/usr/local/diamond/3.10_x64/bin/lin64/diamond_env
<agg> oh, cool
<agg> hm, /usr/local/diamond/3.11_x64/synpbase/bin/synplify_pro: 324: /usr/local/diamond/3.11_x64/synpbase/bin/config/execute: Syntax error: "(" unexpected (expecting ";;")
<cr1901_modern> Need to run bash I think
<agg> as in, I need /bin/sh to be bash?
<whitequark> agg: iirc, that script has #!/bin/sh but uses bashisms
<agg> yea, looks that way
<whitequark> I just edited it in-place
<whitequark> there's a ton of them btw
<cr1901_modern> There is no equivalent of diamond_env on Windows. I use a custom batch file to set the path (pointed to by said env var), and it works fine. Worth mentioning in docs?
<whitequark> hmm
<whitequark> sure
<cr1901_modern> Will make PR when ready (prob later today)
<cr1901_modern> I'm a little bit too scatterbrained right now
<lkcl> oh btw, just wanted to let people know about some work by cesar[m]
<lkcl> https://git.libre-soc.org/?p=soc.git;a=blob;f=src/soc/experiment/alu_fsm.py;h=596fc3d49764081c40bf075e8098919a6f5e1979;hb=HEAD#l254
<lkcl> it's a way to easily create annotated gtkw files, using a CSS-like syntax
<_whitenotifier-3> [nmigen] whitequark commented on issue #418: Simulation of Verilog output doesn't match nMigen simulation - https://git.io/JUkCQ
<lkcl> my thoughts are, it would be nice for it to be included in upstream pygtkw
<whitequark> pyvcd?
<lkcl> oh - yes.
<agg> jeez, diamond sure has a lot to say
<agg> seems to be working after changing bin/sh in all those files
<_whitenotifier-3> [nmigen/nmigen] whitequark pushed 1 commit to master [+0/-0/±1] https://git.io/JUkCd
<_whitenotifier-3> [nmigen/nmigen] whitequark 12beda6 - back.verilog: omit Verilog initial trigger only if Yosys adds it.
<_whitenotifier-3> [nmigen] whitequark closed issue #418: Simulation of Verilog output doesn't match nMigen simulation - https://git.io/JJGAi
<lkcl> where normally you only see a tree hierarchy on the left, his system creates a tree hierarchy in the *main* signal window, and you can set colours etc. it's... painful to have to create "meaningful" gtkw files by hand
<_whitenotifier-3> [nmigen/nmigen] github-actions[bot] pushed 1 commit to gh-pages [+0/-0/±13] https://git.io/JUkCF
<_whitenotifier-3> [nmigen/nmigen] whitequark a02b6be - Deploying to gh-pages from @ 12beda6e5b1de4a3801098f14b819e91eb63e0f3 🚀
<agg> well, the diamond bitstream does something totally different, so that's fun, time to read some of this log
<cesar[m]> I took inspiration from nmigen's write_vcd. Instead of a list of signals, it's a tree. And a separate data structure for applying style (color, numeric base, alternate display name, etc.).
<cesar[m]> Also, a sane default zoom level, on the order of the clock cycle.
nelgau has joined #nmigen
<lkcl> whitequark: my head of department, jeff magee, is still an emeritus professor at imperial! https://www.imperial.ac.uk/computing/people/academic-staff/ i have an excuse to get in touch, i'll ask how i can find the name of that protocol :)
awe00_ has joined #nmigen
awe00 has quit [Ping timeout: 258 seconds]
<daveshah> Oh, you were imperial DoC?
<daveshah> I did EIE, probably known as ISE in your time
<lkcl> daveshah: yeah. the last year i was there they opened a (completely new) Elec Eng dept. that was... 1990?
<lkcl> i'm kinda disappointed i missed it, i would really like to have done that course.
<daveshah> Yeah, it did suit a focus on FPGA stuff well
<agg> daveshah: do I need to do anything to ensure an incoming clock on a PCLK pin ends up on the clock distribution without going through logic?
<daveshah> So I had a look and it seems like it is actually broken and never has used the dedicated routing properly
<agg> I was previously just using it as a normal nmigen io but that puts it through an IB which I assume is bad, I've stopped doing that (dir=- in nmigen) but I can't see if it's doing anything to put through clock dist
<daveshah> This is on my TODO list for tomorrow
<agg> ah cool, thanks!
<daveshah> The IB shouldn't matter, and you shouldn't need to do anything.
<agg> fwiw in diamond the top 4 bits (falling-edge-clocked) all seem fine but the LS 4 bits are all 0
<agg> (sorry, this is a 4-bit parallel DDR, rgmii)
<daveshah> Interesting
<agg> no idea where the 0s are coming from but it's going through a couple of stages so maybe something else is wrong
<daveshah> I don't think it would affect a typical R[G]MII setup, but the ECP5 IDDRs do have more latency than the iCE40 ones
<agg> in yosys+nextpnr the nibbles are all shifted by one in time, e.g. instead of receiving bytes 0x01 0x23 0x45 I receive 0xX0 0x12 0x34 0x5X
<lkcl> agg: oo, you're doing RGMII? very cool!
<agg> whereas in diamond I get 0x00 0x20 0x40
<daveshah> Yeah, I think that is probably the clock routing bug
<agg> if I invert my clock in logic it all cancels out and works... :p
<lkcl> agg: do you have a repo online? we will almost certainly need RGMII for libre-soc so would be happy to support you from NLNet donations.
<agg> I will eventually, I'm not looking for financial support though
<agg> I believe litex already has rgmii too
<agg> so far rgmii is logically much simpler than rmii, anyway, you just clock in/out a whole byte at a time and get/send both control signals per clock cycle
<agg> way less faffing with dibits
<lkcl> agg, yeah - i kinda prefer nmigen, particularly being able to stay in the python simulation arena
<agg> sure, same
<lkcl> there's always cocotb but..
jeanthom has quit [Remote host closed the connection]
jeanthom has joined #nmigen
<agg> whitequark: do you know anything about diamond apparently ignoring Memory.init? synplify says "Within an initial block, only Verilog force statements and memory $readmemh/$readmemb initialization statements are recognized, and all other content is ignored" and then it says "RAM rxmem[7:0] removed due to constant propagation."
<agg> even a very minimal ROM example seems to just end up with the read port data signal being all 0s always, but works fine in trellis
<whitequark> agg: I don't, but I know how to fix your problem
<agg> that works
<whitequark> moment
<whitequark> agg: ok yeah
<whitequark> you need to grab ilang yourself
<whitequark> and then
<whitequark> ok, wait, you're using platform definitions, right?
<agg> yea
<agg> I'm using LatticeECP5Platform
<moony> well, I give up. How exactly do I install latest nmigen/nmigen_boards
<moony> doing `pip install git+https://github.com/m-labs/nmigen-boards.git`, for some reason, does not get me the latest version (it's missing ulx3s and other boards if I try that)
<agg> try github.com/nmigen/nmigen-boards.git instead of m-labs/...
<vup> moony: s/m-labs/nmigen/
<moony> agg: doesn't explain nmigen_boards.
<moony> ah
<moony> yep that was the issue
<moony> my bad
<_whitenotifier-3> [nmigen] jeanthom commented on issue #427: Add support for Assert in simulation - https://git.io/JUkRv
<whitequark> agg: okay, so, unfortunately, this is going to be annoying
<whitequark> you'll need to grab the .il file, and convert it to verilog manually, using `write_verilog -extmem`
<whitequark> for technical reasons this cannot be automated as a part of nmigen at the moment
<whitequark> but it will be at some point
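A sketch of that manual flow under the stated assumptions (the Rom design, file names, and the use of rtlil.convert are illustrative):

    import subprocess
    from nmigen import Elaboratable, Memory, Module, Signal
    from nmigen.back import rtlil

    class Rom(Elaboratable):
        def __init__(self):
            self.addr = Signal(4)
            self.data = Signal(8)
            self.mem  = Memory(width=8, depth=16, init=range(16))

        def elaborate(self, platform):
            m = Module()
            m.submodules.rd = rd = self.mem.read_port()
            m.d.comb += [rd.addr.eq(self.addr), self.data.eq(rd.data)]
            return m

    top = Rom()
    with open("top.il", "w") as f:
        f.write(rtlil.convert(top, ports=[top.addr, top.data]))
    # -extmem writes memory contents to $readmemh files, which Synplify
    # (Diamond) honours where it otherwise ignores the initial block.
    subprocess.run(["yosys", "-p", "read_ilang top.il; write_verilog -extmem top.v"],
                   check=True)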
<Lofty> Yay, the kind of crappy Quartus workaround strikes again for another vendor
<whitequark> lol
<Lofty> (as it turns out, thinking outputting to hex would work for Quartus was expecting too much of it, but anyway)
<whitequark> so by the way
<whitequark> do you have any sane ideas for getting -extmem to work in nmigen
<Lofty> How do you mean? Sounds like I'm missing some background info here, because evidently "just pass -extmem" is something you've already tried.
<moony> TIL: don't use `domain="comb"` on large Memorys.
<moony> ever.
<agg> whitequark: ack, thanks, that will do. As expected diamond is more annoying than trellis :p also doesn't seem to find the clock constraints as well, or at least a Clock in the platform is ignored and I have to guess the right signal to add_clock_constraint, though got there in the end
<daveshah> agg: I think I have fixed the trellis issue. Please pull trellis and its db submodule, force a nextpnr database rebuild (touch ecp5/trellis_import.py to be sure) and see if it behaves like Diamond
<agg> daveshah: thanks! I will do so in about 1h
<Lofty> moony: a valuable lesson.
<daveshah> Sure, no rush
<daveshah> if it still fails please link to the design somewhere
<agg> will do
<agg> whitequark: actually, should a Clock in the Resource cause a preference to be emitted to the sdc file in the same way add_clock_constraint does? I don't seem to get one but not sure if it's just going somewhere else or what
<moony> Lofty: I got a fun surprise trying to synthesize smth with *4k* of comb memory, assuming "oh it'll just use a block"
<moony> no, it did not
<moony> is there some way to tell the router "don't worry about the timing of this signal"? (in this case, it's an LED wire.)
<Lofty> moony: For an SDC constraint it would be a "set_false_path", but I don't know the constraint format expected there
<whitequark> Lofty: right now back.verilog just communicates on stdin and stdout. which lets it transfer exactly one file
<whitequark> -extmem just fails if you pass it
<whitequark> i don't like tempfiles though :/
<Lofty> Ah. Mmm.
<whitequark> agg: it definitely should, as long as you request a resource
<moony> Lofty: Some way to get nmigen to spit out the constraint, or do I have to do something weird for it
<daveshah> what toolchain are you using?
<Lofty> moony: ^
<moony> ECP5, so that'd be yosys + nextpnr + ecppack + openfpgaloader
<daveshah> False path constraints aren't supported
<moony> ah shame
<daveshah> The router shouldn't massively prioritise that arc timing wise, so I don't think such a constraint would be especially useful even if it was supported with the current state of things
<agg> whitequark: ah, it was because I had dir=- on the resource with the Clock
<agg> if I have dir=i it does use the Clock freq as a constraint, but not with dir=-
<agg> in both diamond and trellis (unsurprisingly since they're generated the same way)
<DaKnig> whitequark: thanks for the update :)
<whitequark> agg: hm, yeah
<whitequark> so there's actually something subtle about clock constraints
<whitequark> you sometimes have to put them in specific places in hierarchy
<whitequark> let me take a look
Yehowshua has joined #nmigen
<Yehowshua> ktemkin, been having a bit of trouble with Luna's ACM-serial valid ready interface
<Yehowshua> I've been trying to get a simple state machine going that prints a message when i send the FPGA a byte - it works, but when you first start the FPGA, the message is missing some letters
<agg> daveshah: amazing, that's fixed it, thank you!
<moony> time to figure out how to make a cache on an FPGA while being nice to said FPGA
<agg> didn't get the diamond build completely working in the end, can't figure out why it's zeroing half the bits still, but the trellis build seems fine and also takes 2.7sec instead of diamond's 47.9sec, so...
<DaKnig> bram?
<moony> DaKnig: BRAM for the data, yes
<moony> but probably going to just have to throw LUTs at the CAM
awe00 has joined #nmigen
awe00_ has quit [Ping timeout: 256 seconds]
emeb_mac has joined #nmigen
jeanthom has quit [Ping timeout: 264 seconds]
<moony> oh wow a CAM was easy
<moony> i'm just dumb
<agg> cool, very proof-of-concept RGMII reception and transmission all working, that was surprisingly simple
<agg> i suspect the devil will be in the details but it's gratifying to see nc|pv report 100MiB/s
<_whitenotifier-3> [nmigen] whitequark edited a comment on issue #441: PS7 block not initialized on series-7 Zynq targets - https://git.io/JJcgl
lkcl_ has joined #nmigen
lkcl has quit [Ping timeout: 258 seconds]
Asu has quit [Ping timeout: 264 seconds]
Yehowshua has quit [Remote host closed the connection]
<_whitenotifier-3> [nmigen/nmigen] whitequark pushed 1 commit to master [+0/-0/±1] https://git.io/JUk22
<_whitenotifier-3> [nmigen/nmigen] whitequark 07a3685 - back.rtlil: do not squash empty modules.
<_whitenotifier-3> [nmigen] whitequark closed issue #441: PS7 block not initialized on series-7 Zynq targets - https://git.io/JJcgn
<_whitenotifier-3> [nmigen/nmigen] github-actions[bot] pushed 1 commit to gh-pages [+0/-0/±13] https://git.io/JUk2a
<_whitenotifier-3> [nmigen/nmigen] whitequark 6c11dd5 - Deploying to gh-pages from @ 07a3685da8e77199596dcaf7aedcc46fc602ce55 🚀
<_whitenotifier-3> [nmigen] whitequark closed pull request #461: SSH Client Support via Paramiko - https://git.io/JJarq
<_whitenotifier-3> [nmigen/nmigen] whitequark pushed 1 commit to master [+0/-0/±2] https://git.io/JUk2r
<_whitenotifier-3> [nmigen/nmigen] cr1901 ef7a3bc - build.run: implement SSH remote builds using Paramiko.
<_whitenotifier-3> [nmigen] whitequark commented on pull request #461: SSH Client Support via Paramiko - https://git.io/JUk2o
<_whitenotifier-3> [nmigen/nmigen] github-actions[bot] pushed 1 commit to gh-pages [+0/-0/±13] https://git.io/JUk2K
<_whitenotifier-3> [nmigen/nmigen] whitequark 18637ae - Deploying to gh-pages from @ ef7a3bcfb1aa349f83e4b9f69fa4e66b41f1ddd4 🚀
emeb has quit [Quit: Leaving.]
Yehowshua has joined #nmigen
<Yehowshua> SMH. Just remembered Luna has multiple domains
<Yehowshua> USB and USB_io
<Yehowshua> Glad to confirm my serial link is working now