ChanServ changed the topic of #nmigen to: nMigen hardware description language · code at https://github.com/nmigen · logs at https://freenode.irclog.whitequark.org/nmigen · IRC meetings each Monday at 1800 UTC · next meeting October 12th
<cr1901_modern> whitequark: I know, but you mentioned C code would be something to support. And I vaguely recall that needing to support C code meant using cargo for everything (incl. wrapping building the SoCs from nmigen code) wasn't possible
<cr1901_modern> Well, maybe it still is possible to use cargo :P
<whitequark> hmm
<whitequark> i don't understand the second part of your statement
<whitequark> the BSP generator is a standalone part of nmigen-soc, it doesn't care about cargo (you might not want to use cargo with rust, anyway)
<cr1901_modern> Oh, for some reason I thought we were going to gently prod ppl towards using a single build system for nmigen-soc designs, analogous to how misoc/litex provide Makefile vars.
<cr1901_modern> (But instead it would generate e.g. a cargo tree)
<whitequark> that's not the job of nmigen-soc as i see it
<whitequark> misoc/litex do way too much and this results in conflict
<whitequark> instead i would like to see litex (or perhaps an equivalent) using nmigen-soc as something to build on
* cr1901_modern nods
<cr1901_modern> "Make nmigen-soc do what you see fit and see how users build on top of it" is a valid strategy too. At least, I don't think I'll do anything too horrific :)
<whitequark> it's more that, well, maybe I'd rather see *only* Rust code on top, but then litex will never use it
<whitequark> instead of compromising I just leave anything more complex than BSP generation out of scope
<whitequark> then the scope of any potential conflict is reduced to "what does the BSP generator spit out" which isn't really controversial
<cr1901_modern> Sure; Rust-only is (should be?) fine for riscv, even for the most puny designs if you enable fat LTO :D.
<whitequark> that's how microcontroller firmware is usually built, yes
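The "fat LTO" being discussed is a one-line Cargo profile setting. A generic sketch (the size-oriented options alongside it are illustrative, not something from this conversation):

```toml
# Cargo.toml — enable "fat" (whole-program) LTO for release builds
[profile.release]
lto = "fat"        # merge all crates into one LLVM module before optimizing
codegen-units = 1  # a single codegen unit lets LTO shrink the binary further
opt-level = "z"    # optional: optimize for size, common on tiny RISC-V targets
```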
<FL4SHK> cr1901_modern: so I don't think LLVM does LTO that well
<FL4SHK> or well
<FL4SHK> it doesn't do linker relaxing
<FL4SHK> which is something you want
<FL4SHK> spitting out Binutils assembly seems better to me
<Lofty> I think you've gotten "LLVM" confused with "LLD"
<Lofty> LLVM [the compiler library] can do LTO fine (and rustc effectively does that)
<FL4SHK> Lofty: you're right
<FL4SHK> I thought that LLD was necessary though
<Lofty> Nope
<Lofty> You can use GNU ld for LLVM
<whitequark> LLD wasn't even usable until a year or two back
<whitequark> the way LLVM does LTO is with a linker plugin
<FL4SHK> LTO needs to be done at build time
<FL4SHK> I see
<whitequark> gold more or less loads the LLVM codegen
<FL4SHK> gold isn't in use, is it?
<FL4SHK> I thought it was not being maintained
<whitequark> huh? gold is the one that's maintained
<FL4SHK> Are you sure about that?
<whitequark> binutils ld is an obsolete horror that is also incredibly slow
<whitequark> if you want your C++ link times to go down by half use gold
<whitequark> even without LTO
<FL4SHK> I recently read a thing on the Binutils mailing list
<FL4SHK> Or so I thought. It was that no one is maintaining gold.
<whitequark> hrm
<whitequark> well no one is maintaining ld either so
<FL4SHK> are you sure about that?
<whitequark> i know that no one is working to make ld a better linker
<whitequark> as evidenced by the horrendous link times
<whitequark> and lack of LTO support etc
<FL4SHK> ld has LTO support?
<FL4SHK> isn't that how GCC's LTO stuff works?
<lkcl> whitequark: yep. i've raised a bugreport, given them everything needed to point out the issue: absolutely nothing happened.
<lkcl> it's a major limitation as well
<whitequark> hm
<whitequark> ld does have -plugin now
<whitequark> seems like my knowledge has become outdated
<whitequark> i wonder if it's still slower, though
<FL4SHK> I'll need to make an LLVM backend at some point
<FL4SHK> maybe I'll migrate over to LLVM some time
<FL4SHK> GCC is what I *know* how to make a backend for
<lkcl> what happened was that someone decided in the late 90s to rip out Dr Stallman's algorithms "because surely 640k^W 4GB is enough for everybody"
<lkcl> now both ld and gold (its "replacement") have severe limitations when linking large programs
<lkcl> *and nobody's paying attention*
<whitequark> define "large"
<whitequark> a static LLVM build has binaries on the order of half a gigabyte
<FL4SHK> oh my
<whitequark> well it depends on how many backends you enable
<whitequark> x86 alone is something like 150M I think
<whitequark> that counts as "large" in my book
<FL4SHK> same
<whitequark> if you enable *every* backend it gets considerably larger
<whitequark> that's without debug info btw. if you enable debug info and disable split-dwarf you get obscenely large binaries
<FL4SHK> one day I won't need x86 for most tasks
<whitequark> i don't know how large because i'm not masochistic enough to do it
<FL4SHK> not going to pick ARM for it either
<FL4SHK> I'm busy building an implementation of a custom instruction set
<whitequark> it's never been clear to me that the x86 backend *has* to be that large
<whitequark> but there's an incentive to make it generate faster code, and no incentive to make it smaller
<FL4SHK> smol
<FL4SHK> Can someone remind me what I need to install to use formal verification with nMigen?
<whitequark> symbiyosys
<lkcl> if you're familiar with the problem of doing in-memory matrix multiply, linking is very similar
<whitequark> i'm familiar with linking. no idea if it's any similar to matrix multiplication
<lkcl> the page on symbiyosys contains very good instructions on the dependencies you'll need, FL4SHK.
<FL4SHK> installing stuff from the AUR
<lkcl> whitequark: in that a good algorithm is: you need to keep a partial-result row in memory going over the rows in the source, otherwise you end up with O(N^3) LOAD/STOREs
<lkcl> and it's the same with linking. and some idiot in the late 90s decided that virtual memory was an acceptable thing to use instead of resident RAM.
<FL4SHK> virtual memory?
<FL4SHK> You mean that thing that all modern OSes need?
<whitequark> what the hell are you talking about
<FL4SHK> I have no idea what you're going on about
<lkcl> the moment the program being linked goes into swap-space, it all goes to s***.
<lkcl> link time goes up literally by one to two orders of magnitude
<lkcl> because the objects to be linked, which are walked over by an O(N^2) algorithm, get thrashed out of memory in competition with the program they're being linked into
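The locality argument lkcl is making can be sketched in plain Python: keep one partial-result row resident while streaming over the source rows, so each output element is written once rather than being re-fetched O(N) times (his analogy for keeping linker working-set in RAM instead of thrashing through swap):

```python
def matmul_row_accumulate(A, B):
    """Multiply N x N matrices in i-k-j order, keeping a single
    partial-result row of C resident while streaming over rows of B.
    Each element of C is completed in place instead of being re-read
    and re-written O(N) times, which is the locality property lkcl
    compares linking to."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        row = C[i]                 # the partial-result row stays "in memory"
        for k in range(n):
            a = A[i][k]
            for j in range(n):
                row[j] += a * B[k][j]
    return C
```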
<FL4SHK> does nMigen let you do a slice whose indices are obtained from a `Signal`?
<FL4SHK> or, well, from two `Signal`s
<FL4SHK> that's what I was trying to do
<whitequark> no, because such a slice would not have a constant size
<FL4SHK> VHDL and SV let you do it so long as it's a constant size
<FL4SHK> `+:` in SV
<FL4SHK> or `-:`
<whitequark> oh, that's one signal
<whitequark> not two
<FL4SHK> how do you do it in nMigen?
<whitequark> the equivalent of +: is .bit_select(), there is also .word_select() in case you want non-overlapping parts
<FL4SHK> I want non-overlapping parts
<FL4SHK> whitequark: thanks, that's what I needed.
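For the Verilog-minded: `x.bit_select(off, w)` corresponds to `x[off +: w]`, and `x.word_select(i, w)` to `x[i*w +: w]` with non-overlapping words. A pure-Python model of the underlying bit arithmetic (the nMigen methods themselves operate on `Value`s, not ints; this just illustrates the semantics):

```python
def bit_select(value: int, offset: int, width: int) -> int:
    """Model of nMigen's Value.bit_select on plain ints: width bits
    starting at a (possibly variable) offset, like Verilog's +: slice."""
    return (value >> offset) & ((1 << width) - 1)

def word_select(value: int, index: int, width: int) -> int:
    """Model of Value.word_select: the index-th non-overlapping
    width-bit word, i.e. bit_select(value, index * width, width)."""
    return bit_select(value, index * width, width)
```

For example, `word_select(0xABCD, 1, 4)` picks out the second nibble, `0xC`.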
<FL4SHK> seems you take more inspiration from SV than VHDL
<FL4SHK> I'm mostly finding myself able to get by without `Record`, interestingly
<FL4SHK> It seems most important for when I need to do something like stick it into a FIFO
<FL4SHK> so out of curiosity
<FL4SHK> seems that `Case` values must have the same width as the `Switch`ed thing
<FL4SHK> not out of curiosity
<FL4SHK> how do I convert a `Signal` to an integer?
<FL4SHK> actually this seems to be a weakness of nMigen in general
<FL4SHK> I need to have yosys compute `y / x`
<FL4SHK> because I'm doing formal verification of a divider
<FL4SHK> whitequark: ^
<FL4SHK> this prevents me from formally verifying my divider...
<FL4SHK> going to just exhaustively test it instead
<FL4SHK> the thing is configurable
<daveshah> A large divider would probably be intractable to verify with the formal verification that Yosys supports anyway
<FL4SHK> nah
<FL4SHK> I've figured out how
<FL4SHK> oh
<FL4SHK> large one
<FL4SHK> yeah, I'm not needing to do that
<FL4SHK> I only need to test in a few different configurations
<FL4SHK> you can set the widths of the inputs and outputs
<daveshah> idk about divide, but multiplies above about 12x12 or so become very slow and 16x16 are essentially impossible
<FL4SHK> Why do you need to test large ones? Can't you just test smaller ones?
<daveshah> That doesn't test if your design breaks down for big ones
<FL4SHK> the idea is to make the algorithm you used be identical for larger widths
<FL4SHK> for 16x16, you can just exhaustively test it
<FL4SHK> I'm mainly interested in testing if my algorithm works
<FL4SHK> testing to see if I implemented my algorithm correctly
<daveshah> yeah for stuff like this exhaustive testing with something like cxxrtl is going to be much faster than formal
<daveshah> also easier to parallelise as you can test different vectors on different threads
<FL4SHK> how do I run cxxrtl?
<FL4SHK> I was going to just translate it to Verilog then use Verilator
<FL4SHK> but cxxrtl is of the same class
<tpw_rules> FL4SHK: "seems that `Case` values must have the same width as the `Switch`ed thing" this does not seem to be true?
<tpw_rules> i sure use it
<tpw_rules> but mine are just integers, not the don't care notation
<tpw_rules> or values i guess
<FL4SHK> tpw_rules: I was using the don't care notation since I was making a priority encoder
<FL4SHK> I still need to know how to use cxxsim
<vup> FL4SHK: cxxsim is an implementation of the nmigen simulator using cxxrtl as backend; it is not in master / ready yet, currently it lives in the cxxsim branch, but it has some problems: https://github.com/nmigen/nmigen/issues/439
<vup> cxxrtl is a yosys pass, its help gives you a simple example on how to use it
<vup> *yosys pass / yosys backend
<cr1901_modern> >the idea is to make the algorithm you used be identical for larger widths
<cr1901_modern> AFAIK, the SMT problems yosys generates can't handle "for all n where n is a positive integer". The best you can do is $anyconst to test all widths up to 2**const_width
<FL4SHK> cr1901_modern: so, I was referring to how you implement the DUT code, not the formal verification code
<FL4SHK> vup: guess I'd better use Verilator for now, then
<FL4SHK> that or construct an array
<FL4SHK> I could probably just construct an array to be indexed by the numerator and denominator
<vup> well cxxrtl is on the same layer as verilator in that it takes verilog and gives you c/c++ source code, so you could also use cxxrtl
<FL4SHK> since I'm only testing some small numbers I might want to just use the regular nMigen simulator
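Exhaustively checking a small divider against Python's own `//` and `%` is cheap at these widths. A sketch with a plain-Python restoring-division model standing in for the actual DUT (a real run would drive the nMigen or cxxrtl simulator instead; the function names here are illustrative):

```python
def restoring_divide(n: int, d: int, width: int):
    """Bit-serial restoring division, the classic hardware algorithm:
    shift one numerator bit at a time into the remainder and subtract
    the divisor whenever it fits. Returns (quotient, remainder)."""
    assert d != 0
    q, r = 0, 0
    for i in reversed(range(width)):
        r = (r << 1) | ((n >> i) & 1)   # shift in the next numerator bit
        if r >= d:
            r -= d
            q |= 1 << i                 # this quotient bit is a 1
    return q, r

def exhaustive_check(width: int) -> None:
    """Compare the model against // and % for every input pair."""
    for n in range(1 << width):
        for d in range(1, 1 << width):
            assert restoring_divide(n, d, width) == (n // d, n % d)

exhaustive_check(6)   # 64 x 63 input pairs, well under a second
```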
<FL4SHK> I've yet to use it