<_whitenotifier-3>
[nmigen/nmigen] mglb 4e208b0 - vendor: Add initial support for Symbiflow for Xilinx 7-series
<_whitenotifier-3>
[nmigen] whitequark commented on pull request #463: Add initial support for Symbiflow toolchain for Xilinx 7-series - https://git.io/JUfjX
<_whitenotifier-3>
[nmigen/nmigen] github-actions[bot] pushed 1 commit to gh-pages [+0/-0/±13] https://git.io/JUfjM
<_whitenotifier-3>
[nmigen/nmigen] whitequark eef82cd - Deploying to gh-pages from @ 4e208b0ac183fbffe7c1f1f31648fad98e419040 🚀
<DaKnig>
Im used to looking at the last line in python tracebacks
<whitequark>
the reason you get a warning before that is because i can't edit a backtrace to inject the line for the function that returned None
<DaKnig>
maybe add an empty line after this? well that's not up to me to decide
<whitequark>
it's too bad that python prints backtraces upside down
<DaKnig>
if there's an empty line this sets the warning apart from the rest of the backtrace
<whitequark>
i'm not sure this is much of an improvement
<whitequark>
should we do it for all other warnings too? many of them are going to be followed by a crash
<DaKnig>
if its all part of the same chunk of text, I skip it to the end or near the end because most of the time I dont care about most of the trace
<DaKnig>
I know others who do this too
<whitequark>
sure, my question about the other warnings stands
<miek>
but when the last line doesn't make sense, you look back up the trace to figure it out?
<whitequark>
also that
<DaKnig>
well I personally think this would help to add this to all other warnings
<DaKnig>
can I tell nmigen to not print those hint lines?
<whitequark>
which hint lines?
<DaKnig>
(*whatever*)
<DaKnig>
how's that called?
<Lofty>
strip_internal_attrs or something?
<whitequark>
in verilog? attributes
<DaKnig>
ok
<DaKnig>
Lofty: where should I add this?
<whitequark>
Lofty is correct
<whitequark>
as an argument to verilog.convert()
<DaKnig>
`...,strip_internal_attrs=1`?
<whitequark>
=True
<DaKnig>
would it be sensible to have each major component in a folder of its own, with a `__init__.py` as the "top level" for that component and all the building blocks?
<DaKnig>
nah that sounds like a bad idea , nvm
<lkcl_>
DaKnig: you worked it out :) if the major component becomes so large that it's best to split it into separate files, _then_ yes a good practice would be to move them into a subdirectory
<lkcl_>
at that point, __init__.py becomes like... a place to import the "thing that used to be in the file with the same name as the directory"
<lkcl_>
so that you - and anyone else using the library - don't have to change their code because the import location suddenly changed
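A minimal sketch of the re-export pattern lkcl_ describes; the names (`mylib`, `alu`, `ALU`) are invented for illustration, and the block builds a throwaway package in a temp directory just to show that the import path stays stable:

```python
import sys, tempfile
from pathlib import Path

# mylib/alu.py has grown, so it becomes the directory mylib/alu/ and
# __init__.py re-exports what used to live in the single file.
root = Path(tempfile.mkdtemp())
(root / "mylib" / "alu").mkdir(parents=True)
(root / "mylib" / "__init__.py").write_text("")
(root / "mylib" / "alu" / "core.py").write_text("class ALU:\n    pass\n")
(root / "mylib" / "alu" / "__init__.py").write_text("from .core import ALU\n")

sys.path.insert(0, str(root))
from mylib.alu import ALU   # the import location hasn't changed for users
```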
<DaKnig>
should it also contain the unit test for this block?
<DaKnig>
lkcl_:
<DaKnig>
ofc this is subjective ; Im asking about your (and others') opinion
<lkcl_>
DaKnig, weell... there's a lot of different conventions that i've seen, all of them perfectly valid
<lkcl_>
one school of thought says that you put the unit test at the bottom of the file containing the implementation of the module.
<lkcl_>
another says that you create a subdirectory called "test" in that same directory, and put all unit tests in there
<lkcl_>
_another_ says that you create a *top level* test subdirectory, then create a suitable directory hierarchy there (usually, but not always, mirroring the same hierarchy as the main source)
<lkcl_>
bottom line is: it's really entirely up to you to choose.
<lkcl_>
nosetests3 (and things like it) will hunt throughout the entire source code and run anything that pattern-matches "test"
<DaKnig>
I see
<DaKnig>
thanks
<whitequark>
if you *do* put a test directory into your main package directory, make sure to exclude it in setup.py
<whitequark>
or else you might discover someone depending on your test helpers
<lkcl_>
oh so you mean, make sure that the packaged (distributed) version excludes any test files?
<lkcl_>
i can see the logic behind that
<whitequark>
yes
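A sketch of the setup.py precaution whitequark mentions (package names are hypothetical): ship `mylib` but exclude its in-tree test subpackage, so nobody can come to depend on the test helpers from the distributed version. The block builds a throwaway tree in a temp directory to demonstrate:

```python
import tempfile
from pathlib import Path
from setuptools import find_packages

root = Path(tempfile.mkdtemp())
for pkg in ("mylib", "mylib/test"):
    (root / pkg).mkdir()
    (root / pkg / "__init__.py").write_text("")

# exclude the test subpackage (and anything nested under it)
packages = find_packages(where=str(root),
                         exclude=["mylib.test", "mylib.test.*"])
# in setup.py: setup(name="mylib", packages=packages, ...)
assert "mylib" in packages
assert "mylib.test" not in packages
```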
<awygle>
whitequark: rereading that comment it sounds maybe shittier than I intended, I just meant "this wasn't part of your design but people clearly want it so let's give them something that is", but "you clearly don't like FHDLTestCase" was the wrong way to say it
<whitequark>
awygle: i didn't find the comment offensive or anything
<moony>
question about FSMs: Will setting the mode in some place before the `with m.FSM()` part affect the current cycle's mode? (I want it to change immediately, not next cycle)
<whitequark>
what's mode?
<moony>
m.mode
<whitequark>
m.mode?
<whitequark>
what are you talking about?
<moony>
FSMs.
<lkcl_>
moony: do you mean m.next?
<moony>
gdi
<moony>
did it again, yes i do
<whitequark>
oh. m.next is an error outside of an FSM
<moony>
mmm.
<moony>
means I can't use an FSM in decode, hm
<lkcl_>
well... you caaannn... it's complicated, let me find an example
<lkcl_>
or, you can do what i think you want, is what i mean
<lkcl_>
so basically, just to check: in a particular FSM state, you want to *combinatorially* set a signal, right? that's what you mean by "right now"?
<moony>
mmm, hm. No, i wanted to force set the FSM back to some state if a signal is set
<moony>
and have it take effect immediately, so, say, if the signal is held the FSM is stuck in that state
<lkcl_>
ok, yes, you can do that... but the change of the FSM back to some state only happens on the next cycle
<lkcl_>
*but*
<whitequark>
is it basically a reset?
<moony>
whitequark: Yes, it's p much a reset.
<lkcl_>
what you can do is: inside that block you can put a "m.If(something)" around the stuff that would otherwise happen
<awygle>
lkcl_: let's get clear on what moony wants before trying to fix it, eh?
<whitequark>
thanks awygle
<whitequark>
moony: one thing you can do is to put an FSM in a submodule (not necessarily a different Elaboratable, you can just have two Module()s in a single elaborate() function)
<whitequark>
and then use ResetInserter
<whitequark>
depending on your exact needs, this might or might not be elegant
<lkcl_>
awygle: good idea. if he wants to do a reset, that's a different technique
<awygle>
... Two modules in one Elaboratable just blew my mind lol
<awygle>
whitequark: is m.next legal inside FSM but outside State?
<whitequark>
nope
<whitequark>
i mean that's the exact same issue
<awygle>
thought so but wanted to confirm yeah
<lkcl_>
eh worra... ResetInserter?
* lkcl_
goes to look that up
* awygle
mumbles something about statecharts, looks at todo list, gets meeting reminder, shuts up
* lkcl_
looking for an example that uses ResetInserter
<whitequark>
examples/basic/ctr_en.py shows how to use EnableInserter
<whitequark>
ResetInserter is basically the same, except it adds a reset
<Degi>
self.input and self.output are signals and self.polynomial is a python int
<DaKnig>
what's polynomial
<DaKnig>
ah
<Degi>
Ah
<Degi>
Forgot a cat on in_vals
<Degi>
Hm okay, the simulation hangs huh
<whitequark>
Degi: to answer your original question, there is, because nmigen recursively processes your code
<DaKnig>
you wanna use intermediate sigs
<DaKnig>
to avoid huge trees
<whitequark>
you can increase it greatly though
<Degi>
Hm I guess the limit is the python recursion limit?
<whitequark>
yep
<DaKnig>
whitequark: isnt that a limitation of your python interpreter+system?
<Degi>
So intermediate signals could speed up simulation?
<DaKnig>
and not nmigen's?
<DaKnig>
Degi: no, I mean it would not have to move around this whole huge thing
<DaKnig>
and might ease debugging
<DaKnig>
I cant read this sexpr at all
<DaKnig>
it's huge
<DaKnig>
unless you are *sure* it's the right thing to do
<Degi>
Hm its supposed to make a CRC and I couldn't find how to do it with 16 bits at once, so I thought that nMigen can just figure that out for me, by doing 1 bit 16 times and then .eq'ing it
<whitequark>
Degi: it's not going to, in general, speed up simulation
<DaKnig>
you can do this in any lang, sure
<DaKnig>
in VHDL too
<whitequark>
raising the recursion limit would just avoid a crash
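Raising the limit is one line; the new value below is arbitrary:

```python
import sys

# nmigen walks expression trees recursively, so a deeply nested expression
# can hit CPython's recursion limit. Raising it avoids the crash -- it does
# not make anything faster.
print(sys.getrecursionlimit())  # typically 1000 by default
sys.setrecursionlimit(10_000)
```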
<DaKnig>
the question is, would it be readable
<whitequark>
as for it being a limitation of python: yes and no
<Degi>
Actually it just hung up my whole PC for a while and then the CPU wait thingie was like a hundred something and zsh terminated it...
<whitequark>
yes it is a limitation in python, but heavily nested expressions tend to produce heavily nested verilog
<whitequark>
and *that* is something verilog frontends are not actually obligated to accept
<whitequark>
just like c doesn't let you nest arbitrarily many levels of {}s for example
<DaKnig>
I was actually thinking about that
<DaKnig>
maybe I did not express it correctly
<whitequark>
python has a different, dumber limitation
<whitequark>
you're limited in the number of elif clauses
<whitequark>
(because the AST for if/elif/elif... is a linked list)
<whitequark>
(well, it's not a linked list, it's an extremely unbalanced tree that looks like a linked list)
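The "unbalanced tree" shape is easy to see with the `ast` module: each `elif` parses as an `ast.If` nested in the `orelse` of the previous one, so an if/elif chain nests one level deeper per branch:

```python
import ast

# one `if` plus five `elif`s -> six If nodes, each hiding in the
# previous node's orelse
src = "if x == 0:\n    pass\n" + "".join(
    f"elif x == {i}:\n    pass\n" for i in range(1, 6))
node, depth = ast.parse(src).body[0], 0
while isinstance(node, ast.If):
    depth += 1
    node = node.orelse[0] if node.orelse else None
assert depth == 6
```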
<DaKnig>
(about C, the standard says that unless there is a good reason to, the implementation should not limit the user in that respect, and indeed some compilers dont)
<DaKnig>
huh interesting
<DaKnig>
that is indeed a dumb limitation
<DaKnig>
you could with enough RAM and compute solve this
<whitequark>
that's what #359 is about
<DaKnig>
why does this affect nmigen specifically though
<DaKnig>
you dont produce python code from this input source, do you
<whitequark>
pysim is an AOT compiler
<whitequark>
targeting python
<DaKnig>
I see
<DaKnig>
why not some faster lang if its already doing this?
<DaKnig>
portability?
<whitequark>
that's called cxxsim
<whitequark>
it's not merged yet
<DaKnig>
(C on windows... dont even wanna think about that)
<DaKnig>
ah I see
<whitequark>
yeah that's why pysim is still fairly heavily optimized
<whitequark>
it used to be ~6x slower before i rewrote it as an AOT compiler
<whitequark>
instead of an AST interpreter
<whitequark>
that's on cpython, i think the gains on pypy are even higher
<whitequark>
lkcl would probably know the numbers
<DaKnig>
can I use sv backend to produce arrays instead of long switch statements for Memory?
<whitequark>
uh, you shouldn't be getting long switch statements for Memory
<Degi>
Hm so instead of using "python xyz" I can use "pypy xyz" and it goes faster?
<whitequark>
should just be a ROM, even in verilog-1995
<whitequark>
Degi: no
<whitequark>
it goes faster *after warmup*
<Degi>
hmh awwh
<whitequark>
unless you simulate a lot of cycles it'll likely be slower
<Degi>
Same for cpython?
<Degi>
Ah wait is that the default interpreter...
<whitequark>
yes
<whitequark>
well, it's an optimizing bytecode compiler, technically speaking
<whitequark>
i actually do benefit from the optimizations!
<whitequark>
pysim is written with them in mind, though it doesn't affect things *too* much
<whitequark>
(here's a joke. "what's the difference between a compiler and an interpreter? mostly social status")
<DaKnig>
come on, CPython bytecode is not even optimized :)
<Degi>
Hmm... *executes python code on an FPGA*
<Degi>
Hmm can we run pysim on FPGAs? And maybe even provide external inputs? xD
<whitequark>
DaKnig: it is
<DaKnig>
there's a project for speeding up python with FPGA
<whitequark>
cpython has a peephole optimizer, and there are some type-based optimizations for constants as well
<DaKnig>
Degi: you can always synth the code, you know :)
<whitequark>
oh and an ast-based constant folding pass. pysim puts constants on lhs so it can benefit from that
<whitequark>
because it generates a bunch of 0 + expr
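The compile-time folding itself is easy to observe with `dis` (how much pysim gains from it is as described above, not something this snippet shows):

```python
import dis

# CPython folds constant expressions when compiling: "2 + 3" is loaded
# as the single constant 5, with no addition left to do at runtime.
code = compile("2 + 3", "<demo>", "eval")
folded = any(i.opname == "LOAD_CONST" and i.argval == 5
             for i in dis.get_instructions(code))
assert folded
```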
<DaKnig>
whitequark: nothing for inferring or using types of args and jump tables to the most likely methods, or using hints to speed up the process, right?
<whitequark>
nope
<whitequark>
not that i know anyway
<DaKnig>
well
<whitequark>
oh
<DaKnig>
that's what I mean by no optimization
<DaKnig>
constant folding is a pretty small thing I think, but well what do I know
<whitequark>
it is an optimization
<whitequark>
"cpython doesn't optimize enough" would be accurate
<whitequark>
for example, it could *definitely* use inline caches
<DaKnig>
yes that's what I mean
<whitequark>
"it doesn't optimize" is just incorrect though
<DaKnig>
it could make a compiled version that uses the hints for the argument types for example
<whitequark>
it does and it gives pysim a small but measurable boost
<whitequark>
oh, i'm pretty sure this was done by third parties
<whitequark>
cruby actually has that upstream
<whitequark>
frankly i find the approach disgusting
<DaKnig>
I am sure that with enough time this should not be too complex
<DaKnig>
why disgusting
<DaKnig>
optimizing for common use cases is a good idea
<DaKnig>
no?
<whitequark>
wait, what do you mean by "compiled version" exactly?
<DaKnig>
bytecode
<whitequark>
it, uh, does that?
<whitequark>
i was literally explaining how it works a few minutes ago
<DaKnig>
it generates a version that is optimized for the types of args you hint?
<whitequark>
no, since it can't rely on that
<d1b2>
<TiltMeSenpai> I think python also just uses python for its type hints
<whitequark>
it does optimize for the type hints it can rely on though
<d1b2>
<TiltMeSenpai> it's super weird
<whitequark>
e.g. if you do 1 + something, it'll use a bytecode that bypasses method lookup on int
<whitequark>
*for the type information it can rely on
<DaKnig>
you can check in runtime and under certain conditions blah blah... I think C++ does this, you can make it generate code specific to some input types of functions even if they are templated to speed up execution for specific input types
<DaKnig>
well ok if you say that...
<whitequark>
when it translates a module it doesn't know if `int` is actually int or if it's just going to be something different entirely
<whitequark>
so your type hint is pretty useless to the bytecode compiler
<DaKnig>
ok.
<DaKnig>
I have a question about how I should read the schematic for my board to use the pmod module I got.
<whitequark>
given that the best it could do even if it could use the type hint is to add a fast path for that type, you'll get a lot more benefit from inline caches
<awygle>
i'm a bit confused by the comment about CDC issues. the buffered fifo shouldn't introduce any new CDC issues that aren't present in the unbuffered one, so far as i can see
<whitequark>
yeah, i don't understand it either
<whitequark>
anuejn is on irc but under a different nick i don't recall
<awygle>
they were going by anuejn when they pinged me about this originally
<whitequark>
ah
<awygle>
and someone by that name is in this channel
<awygle>
there's also anu3jn
<whitequark>
oh, i misremember then
<whitequark>
unless anuejn is here and can explain the CDC issue, i think we should look at the next issue
<awygle>
sure. i plan on working on 485 after work today, i got it all cloned and whatnot last night but uh... i needed sleep lol
<whitequark>
alright, no problem then
<lkcl_>
is everyone basically happy with sign-extend/truncate, then? that's going ahead?
<awygle>
good point. i wrote up the new RFC and i don't think there's been much feedback? lemme check...
<whitequark>
hm, i don't think we had a consensus yet
<awygle>
although let's do jfng's thing first since they went in order :p
<whitequark>
yup, sure
<lkcl_>
ack, yes. just raising it because it wasn't labelled
<jfng>
today, resources can be any arbitrary object
<whitequark>
can you make it an error when adding a resource?
<jfng>
so they may or may not have a name attribute
<jfng>
i would like to, yes
<whitequark>
oh, i see
<whitequark>
the problem is that resources don't necessarily have names at all
<jfng>
for csr.Elements, it's ok since they are Values
<jfng>
yeah
<whitequark>
the issue is misleading
<awygle>
or incomplete, at least
<jfng>
so my first question is, is this a problem ?
<jfng>
or can we just provide a `name` parameter to MemoryMap.add_resource() ?
<whitequark>
hm
<whitequark>
let me recall how that design for memory maps works... or was supposed to work
<whitequark>
jfng: yes, absolutely, that is a great solution
<whitequark>
the other good thing about it is that it'll let us get rid of the __hash__ stuff
<jfng>
so we would index resources by name instead of by hash ?
<whitequark>
yeah. i'm not sure why i went with the design that's currently in the repo, it's kind of dumb
<whitequark>
i index map._resources a few times, but that isn't even actually correct
<whitequark>
what i intended to do is do an identity check
<whitequark>
but there's no actual guarantee __hash__ stays the same
<awygle>
mhm, i originally thought this was what DUID was for when i started poking around (tho it's obviously not)
<jfng>
oh, i didn't know hash could change
<whitequark>
well
<whitequark>
it's not supposed to
<whitequark>
you'd be violating a Python contract
<whitequark>
since we require it to be implemented for no good reason, people might write faulty implementations that do violate it
<whitequark>
python can't enforce this contract for good reasons, but we don't even have to rely on it
<awygle>
so even though in general we don't guard against malicious python (because we can't), in this case we're increasing the surface area for problems to no real benefit?
<whitequark>
just do id(resource)
<whitequark>
awygle: precisely
<whitequark>
i mean, that's a lesser concern. a greater concern is just that this requirement is obnoxious
<whitequark>
there's even a FIXME in nmigen-soc itself
<whitequark>
multiple by this point
<jfng>
ok, so let's add a name argument to add_resources()
<whitequark>
hm, let's step back for a moment though
<whitequark>
oh, nevermind
<jfng>
another question i have is about window names
<whitequark>
yes, i was about to ask about that
<jfng>
can we make them optional ?
<jfng>
if not provided, then all resources would fall into the "parent" namespace
<whitequark>
windows currently don't have names, do they?
<whitequark>
but sure
<jfng>
no, they don't
<whitequark>
absolutely no objection to making it optional
<whitequark>
that's probably what i would do here
<jfng>
the main use case i see, is when you have a lot of bus adapters
<jfng>
they should be implementation details, and therefore invisible
<whitequark>
yup
<jfng>
the last question i have is about MemoryMap.all_resources()
<jfng>
we agree that it will have to include resource names to its yield values ?
<whitequark>
yup. also, i'm pretty sure both that and find_resource should graduate to returning a namedtuple
<whitequark>
or just a data class
<whitequark>
namedtuple is probably fine here
<awygle>
remember you're gonna get all the tuple operator implementations
<whitequark>
mh, you're right
<awygle>
which may or may not make sense
<whitequark>
let's make it a data class that can be destructured
<jfng>
then just a data class yeah
<jfng>
so then these would have a name attribute, which would actually be an iterable with all the parent window names + the actual resource name
<whitequark>
yup
<whitequark>
(*that* should be a tuple...)
<jfng>
i like this approach
<jfng>
alright i think i have everything i need to proceed :)
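A sketch of the "data class that can be destructured" idea; the class and field names here are invented, not nmigen-soc API:

```python
from dataclasses import dataclass, astuple

# Hypothetical shape for the values all_resources() could yield. `name` is
# the tuple of parent window names plus the resource's own name.
@dataclass(frozen=True)
class ResourceInfo:
    resource: object
    name: tuple
    start: int
    end: int

info = ResourceInfo(resource="csr", name=("bridge", "uart", "divisor"),
                    start=0x00, end=0x04)
# unlike a namedtuple it doesn't inherit +, *, comparisons, etc.,
# but it can still be destructured explicitly:
resource, name, start, end = astuple(info)
assert name == ("bridge", "uart", "divisor")
```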
<DaKnig>
sorry to butt in; are there any news about approaching the generic PLL problem?
<whitequark>
cr1901_modern: i thought you did mkdirs(root) instead of mkdir_exist_ok(root)
<cr1901_modern>
I don't see how that resolves the issues of passing a path like foo/../bar or foo/../../bar though
<cr1901_modern>
in other words, seems like you found a genuine oversight in my code and I don't understand why you changed your mind
<whitequark>
oh, the problem only happens if you try to interpret the path locally
<whitequark>
because path.parents doesn't work the same way as doing the same operation on the remote
<cr1901_modern>
If I'm iterating over the parents of "foo/../../bar", I would expect the list ["bar", "..", "..", "foo"] since it's lexicographic order
<cr1901_modern>
mkdir("..") on the remote should fail. Idk if it'll fail b/c directory already exists, or another error
<whitequark>
it won't
<whitequark>
because the current directory isn't a thing in sftp
<DaKnig>
how can I set the clock domain of an elaboratable (or a module) after instantiation? or should I pass a clock domain to __init__() ? (that sounds ugly)
<whitequark>
if it was a thing, it would likely not fail because the remote can interpret .. just fine, but in this case, paramiko can interpret .. just fine
<cr1901_modern>
Ahhh hrm... well, okay. I basically do what you requested, except I attempt to mkdir and silently continue if it already exists (rather than change to said dir and create it if it doesn't).
<cr1901_modern>
(and a root can either be relative to home dir or absolute)
<whitequark>
hm
<whitequark>
there's no PurePosixPath.resolve anyway
<cr1901_modern>
I think "assert none of the paths contain more ".." components than other components" should work? if there's an equal number, that means we're in the root dir. One more means one above the root dir, etc
<whitequark>
we can just prohibit ".." in BuildPlan
<cr1901_modern>
I also think that's reasonable; I can't think of any good reason why you'd want it. But I thought maybe you did.
<whitequark>
let's do that then
* cr1901_modern
nods... good meeting :P
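The rule agreed on above could look like this (the function name is hypothetical, not actual nmigen BuildPlan API):

```python
from pathlib import PurePosixPath

def check_build_path(filename: str) -> str:
    # reject any ".." component outright instead of trying to reason
    # about how the remote would resolve it
    if ".." in PurePosixPath(filename).parts:
        raise ValueError(f"path {filename!r} must not contain '..'")
    return filename

check_build_path("top.v")           # fine
check_build_path("build/top.bin")   # fine
```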
<DaKnig>
lkcl_: I still dont get it.
<DaKnig>
what does it do there?
<DaKnig>
why do you call it like that?
<whitequark>
Yehowshua: kaze is weird, it doesn't allow signals larger than 128 bit
<whitequark>
that seems... impractical
<Yehowshua>
whitequark: are you trying to be funny?
<whitequark>
Yehowshua: no?
<Yehowshua>
Oh. Well, I mean you may very well have a machine that has an address space of 2^128
<whitequark>
what does this have to do with address spaces?
<Yehowshua>
Well I can't really think of any signals longer than 128 bits haha
<daveshah>
Among many other uses 256+ bit data paths are very common
<DaKnig>
Yehowshua: x86 has 512-bit vector registers
<whitequark>
^
<Yehowshua>
I stand corrected
<Yehowshua>
You think kaze is weird, check out js-hdl
<whitequark>
also, even if you only ever use 128-bit signals, a 128-bit rotate is expressed as a part select from a 256-bit intermediate
<whitequark>
well, it doesn't have to be expressed that way, but it's the obvious one
<whitequark>
yeah there's tons of uses for really wide signals
<d1b2>
<TiltMeSenpai> but I have Signals easily larger than 256 bits wide
<d1b2>
<TiltMeSenpai> normally as sample buffers
<whitequark>
i can see using signals that are thousands of bits wide
<whitequark>
for example, maybe your JTAG BSCAN register is one signal
<DaKnig>
whitequark: wouldnt the obvious one be Cat(sig[1:],sig[0])?
<DaKnig>
:)
<whitequark>
DaKnig: that's a const rotate
<d1b2>
<TiltMeSenpai> yeah if I want a 16 bit adc that's a 65536 bit wide signal or something
<whitequark>
i'm talking about a variable rotate
<DaKnig>
ah
<DaKnig>
change 1,0 to i,i-1 :)
<whitequark>
if i is a Value then that's a syntax error
<whitequark>
try it
<DaKnig>
ah right; that weirdness.
<whitequark>
it's not weird
<whitequark>
in general, if you do s[i:j], then the size of that value is j-i
<DaKnig>
ok
<whitequark>
nmigen requires all values to have a constant fixed width because it can't convert your design to a netlist otherwise
<whitequark>
you could argue that i-1-i is a constant, and that would be true
<DaKnig>
yet we have non homogenous arrays :)
<whitequark>
it would certainly be technically possible to support that and similar simple cases
<whitequark>
hm
<DaKnig>
yeah I see why this is a problem
<whitequark>
non-homogeneous arrays are the same kind of thing as a non-homogeneous addition
<lkcl_>
DaKnig: apologies i know you're speaking with wq, i have a meeting to prepare for in 10m. so you have a submodule (in this case core) that you want a completely different clock domain for. however you don't want to have to rename the clock in every single module and submodule, so you use DomainRenamer.
<whitequark>
or actually non-homogeneous mux would be a better example
<whitequark>
since an array is basically the same thing as a mux
<whitequark>
conceptually
<whitequark>
i agree that a language which requires arrays to be homogeneous would make sense, but then it should require operand size to match for muxes and assignments, too
<lkcl_>
this now creates an extra sync domain named "coresync". next step: you have to connect that to something. this is done at line 109. i chose to make coresync.clk *equal* to sync.clk.
<DaKnig>
lkcl_: so you rename the `sync` domain (by default) of that module to something else (which is internally still connected to the sync domain of submodules), then make a domain with that name and connect its clock to a wire (from pll/pin/w.e)... did I get that right?
<whitequark>
sounds right to me
<lkcl_>
DaKnig: yes.
<lkcl_>
remember also to connect up the reset signal.
<DaKnig>
whitequark: please dont take offense at my different view on the matter or if I say it in an offensive way; can you give me one good useful example where you would want a mux with variable width for its (non control) inputs?
<lkcl_>
otherwise when you convert to ilang / verilog, you will end up with a signal "coresync_rst" that is not connected to anything
<DaKnig>
that sounds totally pointless besides some tricks
<whitequark>
DaKnig: that's a perfectly reasonable question
<DaKnig>
lkcl_: doesnt it automatically connect to the reset signal of the board?
<whitequark>
and it's easy to give an example, anyway: Mux(s, anything, 0)
<whitequark>
wait
<whitequark>
by "variable" you meant "different", right?
<DaKnig>
0 is extended to said fixed size
<whitequark>
yeah
<DaKnig>
so this is a bad argument
<lkcl_>
DaKnig: no because you renamed it. DomainRenamer() renames *both* the clock *and* the reset signal
<DaKnig>
fixed width arrays allow me to use sv arrays , not switch statements on the sv level
<whitequark>
nmigen arrays could be translated to sv arrays, i think
<whitequark>
at least, i don't see why not
<whitequark>
the only reason they aren't is because it's not expressible in RTLIL
<DaKnig>
well if you can know the width of the longest variable, yes
<DaKnig>
RTLIL? you mean ilang?
<whitequark>
(width of the longest variable) nmigen knows the width of every array element, yes
<whitequark>
it wouldn't be able to translate indexing into an array otherwise
<whitequark>
since you might do something like `Signal(len(arr[idx]))`
<whitequark>
(ilang) those are the same thing. RTLIL is the proper name of the language, ilang is just how Yosys calls it in the commands
<whitequark>
which I think is weird and confusing
<whitequark>
as this conversation shows
<DaKnig>
RTLIL is a standard thing outside Yosys?
<whitequark>
no
<DaKnig>
ok
<whitequark>
it's entirely defined by Yosys' implementation
<DaKnig>
then it makes even less sense
<whitequark>
which is actually unfortunate
<whitequark>
well
<whitequark>
yeah
* whitequark
shrugs
<DaKnig>
you have a thing and you call it different names even if you are the only one using it
<whitequark>
I've no idea why it's like that
<awygle>
i think ilang is supposed to be the text representation of rtlil
<whitequark>
let's rename write_verilog to write_verilog_text then, because it doesn't dump an AST
<whitequark>
i mean, maybe, but that doesn't make sense to me still
<DaKnig>
lol a lang without a text representation
<whitequark>
there are more than a few, actually
<DaKnig>
I am sure people would make weird things
<awygle>
it's like if you named llvm's ir one way in the codebase and one way for the textual representation :shrug: i agree it does seem kinda dumb. i'm not arguing for it
<whitequark>
mit scratch? labview?
<awygle>
it might make more sense if it was RTL Intermediate Representation instead of Intermediate Language
<whitequark>
smalltalk to some extent
<DaKnig>
DomainRenamer is a module?
<whitequark>
nope, it's a wrapper, more or less
<DaKnig>
then I still dont get how it works
<DaKnig>
do you put an instantiated module inside it?
<DaKnig>
and it just renames and rewires the domains?
<whitequark>
more or less yeah
<cr1901_modern>
> also, even if you only ever use 128-bit signals, a 128-bit rotate is expressed as a part select from a 256-bit intermediate
<cr1901_modern>
A bit late, but... what do you mean by this?
<DaKnig>
its in the manual
<cr1901_modern>
what... the x86_64 manuals?
<whitequark>
cr1901_modern: suppose you have a signal "abcdefgh". you want to express a rotate but all you have is a mux. write it down as "abcdefghabcdefgh". now consider what happens if you select exactly 8 characters at any position
<DaKnig>
or I thought it was; basically if you want to shift `x` by `s` which is a signal in range(0,len(x)) you can `Cat(x,x).word_select(x,len(x))` or something like that
<cr1901_modern>
it's a rotate :o
<DaKnig>
rotate*
<DaKnig>
oops
<DaKnig>
welp I failed again
<DaKnig>
please ignore my last big message as it is completely wrong.
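whitequark's doubled-string picture of a rotate can be sketched in plain Python; the nmigen translation in the final comment is a rough equivalent, not verbatim from the conversation:

```python
def rotl(bits: str, s: int) -> str:
    # "write it down as abcdefghabcdefgh, now consider what happens if you
    # select exactly 8 characters at any position"
    doubled = bits + bits
    return doubled[s:s + len(bits)]

assert rotl("abcdefgh", 3) == "defghabc"
# In nmigen terms the same idea is roughly Cat(x, x).bit_select(s, len(x)),
# which is what the retracted message above was aiming at.
```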
<Yehowshua>
One thing I keep wondering to myself is "is nMigen actually faster than verilog"
<Yehowshua>
As in faster to write?
<DaKnig>
you can use python to generate hardware more freely
<whitequark>
i don't know, i can't bring myself to write any significant amount of code in verilog
<DaKnig>
than the VHDL/verilog generate statements
<DaKnig>
problem with verilog is its nasty
<Yehowshua>
I've worked on some fairly complex logic in nMigen
<DaKnig>
one cool thing you can much easier do in python is you can interface with files and user input, decide what to do and generate hardware accordingly
<Yehowshua>
Whenever I hit a snag, my first instinct is to blame the language
<DaKnig>
I make use of the python stuff when I have a complex function; I can just embed it as a LUT.
<DaKnig>
maybe not the best for timing or resource usage, but hey, much simpler than expressing it in Verilog
<Yehowshua>
Yeah, that seems a bit unusual
<Yehowshua>
Expressing logic tabularly
<DaKnig>
instead of writing code, you write code that writes code
<DaKnig>
is how I see this
<whitequark>
Yehowshua: Verilog has a dedicated feature for that
<Yehowshua>
Oh?
<Yehowshua>
Well, the reason I mention this at all is when describing nMigen to a verilog veteran, it's hard for me to communicate why I like nMigen so much
<Yehowshua>
Perhaps its not i particularly like nMigen, but rather i detest Verilog and VHDL far more
<DaKnig>
tell em it integrates the work from writing hardware to running tests and flashing to FPGA all without touching tcl
<whitequark>
i mean, i don't like Python all that much
<DaKnig>
you map ports in your source code; and test it. imagine that!
<Yehowshua>
Yeah I tried.
<Yehowshua>
I think its more that you don't know until you try
<whitequark>
i wouldn't normally pick Python as a language to embed something in; it just has massive network effects
<Yehowshua>
Like once you use it, then you just can't go back
<Yehowshua>
whitequark, admittedly Python is pretty trendy
<awygle>
honestly the primary appeal for me is that it's well thought through from beginning to end. it's been designed as a coherent whole for HDL design, unlike verilog which has accumulated stuff over the years
<DaKnig>
just knowing I dont have to use all the GUI in vivado; thats enough
<awygle>
and this is coming from a die-hard python hater, so
<Yehowshua>
Agreed. Vivado gives me nightmares
<Yehowshua>
I can't tell you how glad I am that XRAY finally works
<DaKnig>
vivado is slow and eats up my RAM
<DaKnig>
for no good reason
<Yehowshua>
I'd literally pay for PrjXRAY if I could
<DaKnig>
even when idling
<awygle>
i really want to work on nmigen-stdio :(
<DaKnig>
and I cant close it coz start times are horrible
<awygle>
Soon(TM)
<DaKnig>
awygle: what's that?
<Yehowshua>
Do I smell a valve ref?
<Yehowshua>
Valve will bring back the companion cube... Soon(TM)
<awygle>
DaKnig: it's kind of between the nmigen stdlib and nmigen-soc. it's intended to be a repository of useful IP cores without all the CPU glue (which lives in SoC)
<whitequark>
am i weird for actually liking vivado a bit
<awygle>
currently it's uh... A Bit Bare-Bones let's say
<whitequark>
i think everyone i talk to who uses vivado has nearly infinite resentment for it
<Yehowshua>
whitequark, you're too good for vivado to hate you
<awygle>
but it's the thing i most want to work on in the nmigen ecosystem (narrowly edging out simulator stuff)
<whitequark>
vivado is ... well, it's not as bad as ise, and not as bad as diamond, and not as bad as quartus, and the tcl stuff is actually decently documented
<DaKnig>
whitequark: slow, big, eclipse. what can you say to defend those three?
<Yehowshua>
Well, the first time I used vivado, it crashed my computer
<whitequark>
and not as bad as icecube, and not as bad as wincupl, and not as bad as... well, you get the idea
<awygle>
i prefer diamond to vivado, as a user. but your perspective is probably "how do i integrate this into nmigen"-focused instead of "i have to sit here and write verilog all day"-focused
<Yehowshua>
The second time, it actually bricked my Xilinx certified FPGA
<whitequark>
DaKnig: i don't use the vivado gui
<awygle>
yup that's what i figured lol
<whitequark>
i don't use the ise gui either
<whitequark>
which is probably why i'm still under the impression that ise works
<whitequark>
i mean
<Yehowshua>
The diamond GUI has actually been a good XP from my POV
<whitequark>
for me, "works" means "synthesizes the design". reportedly you can no longer run the linux version of ise because the libs it depends on no longer exist
<Yehowshua>
It's stable for one.
<awygle>
diamond GUI is pretty good except for all the buggy bits (looking at you Clarity)
<whitequark>
i don't think i ever opened the diamond gui
<tpw_rules>
whitequark: that is not true
<Yehowshua>
Yasss... Clarity is garbage
<awygle>
and Revealer
<whitequark>
tpw_rules: i remember digging into that claim and finding out that it's like partially true
<awygle>
not only is clarity garbage but so are most of the lattice ip cores :/
<Yehowshua>
QuickLogic took a hint and *forgot* to make a GUI
<whitequark>
maybe the ide worked but the floorplanner didn't? or the programmer? something like that
<whitequark>
Yehowshua: i haven't managed to install the quicklogic toolchain still
<whitequark>
it uses conda
<tpw_rules>
whitequark: i synthesized with the command line tools several times last night on ubuntu 18.04
<Yehowshua>
Oh god... Why?
<whitequark>
tpw_rules: argh, i made a typo
<whitequark>
tpw_rules: i meant the GUI of the linux version of ise
<whitequark>
the CLI still works fine, i test it from time to time with nmigen
<tpw_rules>
that seems to work a bit too. i've opened it and poked at settings a couple times
<whitequark>
Yehowshua: exactly
<whitequark>
awygle: does *any* FPGA vendor know how to write HDL
<whitequark>
i've seen things inside Xilinx IP that you wouldn't believe
<whitequark>
i mean you probably would
<DaKnig>
tbh Vivado does show the problematic path p well
<DaKnig>
in its reports and all this
<Yehowshua>
Jonathan Balkiner said something about an open FPGA on Skywater
<Yehowshua>
There will only be 300 or so
<awygle>
whitequark: the weird thing is that the lattice engineers i've worked with were very sharp
<awygle>
so somehow they just aren't the ones writing the IP cores
<whitequark>
awygle: yes, exactly
<awygle>
so it's not that lattice doesn't know how to write hdl so much as lattice doesn't know how to leverage their talents, i suppose
<whitequark>
i'm sure FPGA companies are full of smart engineers who can write good HDL because they ship working FPGAs
<Yehowshua>
awygle, which IP cores specifically?
<awygle>
Yehowshua: basically anything that touches the SERDES has been incredibly flakey
<whitequark>
well, mostly working
<awygle>
like, we couldn't use their built-in 8b10b support iirc?
<whitequark>
xilinx SERDES seems especially buggy for some reason
<awygle>
because the soft logic in the core was bad
<whitequark>
i haven't heard a single good thing about it
<DaKnig>
it has a builtin 8b10b TMDS encoding thing?!
<Yehowshua>
Yeah, my friend Michael complained a lot about shipping his own 8b10b on the ECP5
<whitequark>
Yehowshua: huh really? what were the bugs?
<whitequark>
I used hard 8b10b and it seemed to work fine
<Yehowshua>
Let me ask. Keep in mind, this was two years ago
<whitequark>
that kind of bug usually makes you remember it...
<awygle>
iirc the problems weren't in the hard core it was in the soft logic lattice shipped around it
<whitequark>
oh
<awygle>
for us at least
<awygle>
when were you using them, yumewatari?
<whitequark>
... you know, that's not the reason i never use vendor soft IP, but it still works out great
<whitequark>
yep
<awygle>
another super-cool project i'd love to work on
<awygle>
brb cloning myself eleven times
<Yehowshua>
I wish my life was a git repo SMH
<awygle>
git rebase -i twenty-five
<whitequark>
awygle: Degi is working on it now
* awygle
is turning 30 in january and is not particularly sanguine about it yet
<awygle>
yeah i know, i followed them on twitter for updates lol
<whitequark>
I'm pretty happy about that, I have plenty of things to do
<awygle>
at this rate it will be done almost exactly when i will have time to use it, so i'm fairly happy too
<Yehowshua>
awygle, I'm pretty sure you need a `git reset --hard`
<Yehowshua>
and then `git clean -fxd`
<DaKnig>
what's ClkSignal?
<Yehowshua>
At least I do
<awygle>
ClockSignal() is the clock signal of a domain
<awygle>
i think it's mostly accessible so you can pass it to Instance, uh, instances
<whitequark>
no
<DaKnig>
wot
<DaKnig>
that sounds weird
<awygle>
i lose :(
<whitequark>
i mean, it's useful for that, but that's not why it exists
<whitequark>
ClockSignal/ResetSignal are late bound
<whitequark>
you can use them while the domain doesn't even exist yet
<DaKnig>
I am confusion
<whitequark>
i need to leave for a few minutes, after that i can explain if no one else does
<Yehowshua>
I use ClockSignal() in `instance` sometimes - for attaching to pins
<Yehowshua>
Like if I have some pll
<DaKnig>
I am trying to use a PLL and I have three examples open- one in Verilog, one in oMigen and one in nmigen. of course I am very confused.
<DaKnig>
wait, this is still zero indexed right? using two different numbering schemes is confusing; the schematics use 1-based indexing (like what's conventional with numbering chip pins)
<whitequark>
it's indexed the same as the pmod standard
<whitequark>
well
<whitequark>
sorry, that was unclear
<DaKnig>
the pmod standard would have `654321 (top) cba987 (bottom)`
<DaKnig>
(preserving right to left numbering)
<DaKnig>
(ofc hex)
<whitequark>
the idea behind this approach is that you use physical indexing (i.e. chip balls, connector pins, whatever) in the board file, and use logical indexing (i.e. lsb, 1st, 2nd, 3rd, msb) when working with the signals request() returns
<whitequark>
so: the signals returned by request() are zero-indexed like usual, which shouldn't matter because by that point you're not working with the pmod connector, you're working with a video DAC
<whitequark>
and the numbers in Pins() are the same as the standard pmod numbering
<DaKnig>
how does it know the "standard pmod numbering"? its written somewhere? or does it decide that somehow from the space-separated pin list in the platform file?
<whitequark>
the latter
<DaKnig>
space separated -> .split() and index from 1 and up. got it.
<whitequark>
yup!
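A plain-Python sketch of the scheme just described (a hypothetical helper mirroring the idea, not nmigen's actual implementation): the board file holds a space-separated pin list, and connector positions are numbered from 1 like the pmod standard.

```python
def connector_mapping(pin_list):
    """Map 1-based connector pin numbers to physical package pins."""
    return {i: name
            for i, name in enumerate(pin_list.split(), start=1)
            if name != "-"}    # "-" marks power/ground/unused positions

# Hypothetical pmod pinout; positions 5/6 and 11/12 are GND/VCC.
pmod0 = connector_mapping("D4 E3 F4 F3 - - D3 E4 F5 E5 - -")
```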
<DaKnig>
so I add a resource representing a daughterboard/new thing connected to the main module, then request it and use that as a Record
<whitequark>
yup
<DaKnig>
because Resource inherits from Record
<DaKnig>
ok
<DaKnig>
a bit more roundabout than what you'd normally do with VHDL/Verilog... wait no, there you do the same in tcl
<whitequark>
yes, and what's worse, you have to redo the work if you ever use a different board
<whitequark>
since the intermediate step--pinout of the pmod itself--is missing
<DaKnig>
here you could ship the Resource as is and just re-route it to the right Connector(s)