<GenTooMan>
I found a yosys bug report from June, no wonder my Verilog stuff doesn't work. Sorry, had to be said. It optimizes out my pseudo-write code, and then the bug occurs because it expects ONLY a read-after-write, not just a read or a read-before-write.
<GenTooMan>
next to implement write-only memory...
<emeb>
I've got a bogus datasheet for "Nominal Semidestructor Write-Only Memory"
<pepijndevos[m]>
Which part of yosys detects bram? Is that done by the parser or by some extraction pass? I think the latter, right?
<daveshah>
pepijndevos[m]: The parser creates `$memrd`, `$memwr` and `$meminit`
<daveshah>
`memory_dff` folds `$dff`s into these
<daveshah>
`memory_collect` converts these into combined `$mem` multiport memories
<daveshah>
`memory_bram` maps `$mem` to bram according to arch-specific rules (usually followed by `techmap` to map to arch primitives exactly)
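As a concrete illustration of that pipeline (module and signal names below are invented for the example, not taken from this discussion), a pattern like the following produces `$memrd`/`$memwr` cells at parse time, and the registered read gives `memory_dff` a `$dff` to fold into a clocked read port before `memory_collect` and `memory_bram` run:

```verilog
// Minimal sketch of a RAM that can go through the
// $memrd/$memwr -> memory_dff -> memory_collect -> memory_bram flow.
module ram16x8 (
    input  wire       clk,
    input  wire       we,
    input  wire [3:0] addr,
    input  wire [7:0] din,
    output reg  [7:0] dout
);
    reg [7:0] mem [0:15];        // parsed into $memwr/$memrd cells

    always @(posedge clk) begin
        if (we)
            mem[addr] <= din;    // write port ($memwr)
        dout <= mem[addr];       // read behind a $dff; memory_dff
                                 // folds it into a clocked read port
    end
endmodule
```

In practice the arch-specific `synth_*` scripts run these passes in the order daveshah describes.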
<pepijndevos[m]>
Ah OK, but if ghdl just creates a bunch of DFFs, there is no pass that translates those into memory cells
<pepijndevos[m]>
Haven't tried yet, but I suspect ghdl does not yet have any special logic to infer memory.
<pepijndevos[m]>
Of course an alternative would be to blackbox the vendor primitive.
<pepijndevos[m]>
Or by "fold into" do you mean memory_dff will actually create new cells, rather than folding dffs into adjacent memory cells?
<pepijndevos[m]>
I'll look at the docs and ghdl tomorrow
<daveshah>
No, Yosys can't create memories out of DFFs and muxes
<daveshah>
`memory_dff` is only for folding dffs to create clocked read/write ports
<daveshah>
You will need to modify ghdl to create read and write port cells
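To make the distinction concrete (illustrative code, not from the log): `memory_dff` merges a `$dff` sitting on the data path of an existing memory port into that port, so in the sketch below `dout_async` stays a combinational `$memrd` while the register behind `dout_sync` becomes a clocked `$memrd` port. A design that arrives already flattened into individual DFFs plus read muxes offers no memory ports to fold into, which is why ghdl would have to emit the port cells itself.

```verilog
// Illustrative sketch of asynchronous vs. synchronous read ports.
module read_styles (
    input  wire       clk,
    input  wire       we,
    input  wire [3:0] waddr,
    input  wire [3:0] raddr,
    input  wire [7:0] din,
    output wire [7:0] dout_async,   // stays a combinational $memrd
    output reg  [7:0] dout_sync     // $dff folded into a clocked $memrd
);
    reg [7:0] mem [0:15];

    always @(posedge clk)
        if (we) mem[waddr] <= din;  // write port

    assign dout_async = mem[raddr]; // asynchronous read

    always @(posedge clk)
        dout_sync <= mem[raddr];    // synchronous read after memory_dff
endmodule
```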
<pepijndevos[m]>
Alright
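On the blackbox alternative pepijndevos mentioned: Yosys treats a module carrying the `(* blackbox *)` attribute (or one read with `read_verilog -lib`) as an opaque cell that later `techmap`/place-and-route steps resolve. The primitive name and port list below are placeholders, not a real vendor part:

```verilog
// Hedged sketch of blackboxing a vendor RAM primitive.
// VENDOR_RAM and its ports are invented for illustration.
(* blackbox *)
module VENDOR_RAM (
    input  wire       clk,
    input  wire       we,
    input  wire [3:0] addr,
    input  wire [7:0] din,
    output wire [7:0] dout
);
endmodule

module top (
    input  wire       clk,
    input  wire       we,
    input  wire [3:0] addr,
    input  wire [7:0] din,
    output wire [7:0] dout
);
    // Instantiated directly; Yosys keeps the cell opaque.
    VENDOR_RAM u_ram (
        .clk(clk), .we(we), .addr(addr), .din(din), .dout(dout)
    );
endmodule
```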
<shapr>
howdy pepijndevos[m] !
<shapr>
I've enjoyed your blog posts!
<somlo>
can yosys take advantage of multithreading (e.g., on a multi core machine)?
<ZirconiumX>
somlo: no
<somlo>
is it the nature of the workload that lacks parallelism, or just a thing nobody has had a chance to get around to implementing -- yet?
<ZirconiumX>
somlo: Partly A, partly that the algorithms Yosys uses are not well-suited to parallelism
<shapr>
I'd expect nextpnr and arachne would be GREAT with a bunch of cores
<ZirconiumX>
arachne is all but dead
* shapr
updates his cache
<somlo>
I dug around the git log of nextpnr and saw a bunch of commits mentioning multithreading, so I'm hopeful :)
<somlo>
for now I'm watching yosys use one of my 8 rv64gc qemu VCPUs (building an ecp5 bitstream for litex+rocket-rv64gc "natively")
<somlo>
and started wondering if there's any way to make it faster :)
<ZirconiumX>
Yosys does lots of little passes to optimise a design
<ZirconiumX>
So there's very little to gain by parallelising one specific pass
<shapr>
is nextpnr simulated annealing?
<ZirconiumX>
I think that's one of the algorithms it can use
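For context, the textbook form of simulated annealing (a general description, not nextpnr's actual placer internals): start from an initial placement, repeatedly propose a small random move such as swapping two cells, always accept moves that lower the cost estimate, and accept a cost increase of delta with probability exp(-delta/T); the temperature T is lowered over time so the placement gradually freezes into a low-cost configuration.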
<shapr>
ok now you got me interested
<ZirconiumX>
nextpnr was born because arachne was too hard-coded for iCE40
<ZirconiumX>
It couldn't handle ECP5 without major rework
<ZirconiumX>
Thus nextpnr was built as a framework
<shapr>
spiffy
<somlo>
I noticed (from a user perspective) how yosys does multiple "passes" or "stages" - I'm assuming those can't be pipelined either, one's got to finish before its complete output can be used by the next stage...
<ZirconiumX>
somlo: Correct
<somlo>
and if I had to guess, most of the stages are heavy on graph manipulation, so maybe even if one could throw parallelism at it, the overhead might not be worth the trouble
<shapr>
I wonder
<shapr>
I know darpa did a graph speedup challenge a few years back