<mithro> How does someone connect to the slack?
<sf-slack> <acomodi> https://github.com/SymbiFlow/vtr-verilog-to-routing/pull/9/commits/b61da9718d1f1034b19270d2eefd9f51e05df0ea: with this commit I have separated the chain and non-chain atom prepacking. There is some duplicated code which should perhaps be avoided, but for now Murax builds and picosoc is taking ages in the routing process (currently it is at the 7th iteration)
<sf-slack> <kgugala> @mithro just go to symbiflow.slack.com and register
<sf-slack> <kgugala> @acomodi does this fix the segfault issue?
<sf-slack> <acomodi> @kgugala I couldn't actually reproduce the segfault issue: I tried to build the `chain_packing` test with non-multiple blocks and it didn't hit any segfault
<sf-slack> <acomodi> Anyways, I believe that picosoc is probably hard to route within the ROI
<sf-slack> <acomodi> This is the current routing state: https://pastebin.com/ws9CKc5z
<tpb> Title: ---- ------ ------- ------- ------- ------- ----------------- -------- Iter - Pastebin.com (at pastebin.com)
<sf-slack> <acomodi> For picosoc
<sf-slack> <acomodi> Picosoc got to the end of routing, now there is another issue (the very last one I suppose): https://pastebin.com/jyUj8GJe
<tpb> Title: Traceback (most recent call last): File "/home/build/acomodi/symbiflow-arch-d - Pastebin.com (at pastebin.com)
<sf-slack> <mkurc> @litghost: I checked the possibility of having DRAMs with explicit clock inversion (by instantiating DRAM and DRAM_1 for inversion). It turns out that not all of the DRAM primitives have a "_1" counterpart, but all of them have an "IS_WCLK_INVERTED" parameter. Here is a synthesis log; I tried to instantiate all combinations, and these are the ones that succeeded: https://pastebin.com/raw/1LATrh9i
<sf-slack> <kgugala> @acomodi your error looks like there is something missing in the database
<sf-slack> <acomodi> Yes indeed I have filed an issue here: https://github.com/SymbiFlow/prjxray/issues/677
<tpb> Title: BRAM SDP_WRITE_WIDTH_36 not found when building bram_sdp_test · Issue #677 · SymbiFlow/prjxray · GitHub (at github.com)
<sf-slack> <acomodi> And for the brk hard blocks issue my first guess would be that routing went beyond the ROI
<litghost> Ah, the sdp_write_width is a new bit. Rerun the Bram fuzzers until a new db is pushed
<litghost> mkurc: Okay, let's add back the WCLK parameter and call the yosys side good to go
<litghost> As for the brk, that may be a ppip, I'll take a look
<litghost> acomodi: BRKH_INT_X5Y99 is indeed outside of the ROI
<sf-slack> <acomodi> Yes, it is just between clk region X0Y1 and X0Y2
<litghost> acomodi: Chances are it just wants a bounce
<litghost> acomodi: Have you visualized the output with fasm2pips?
<sf-slack> <acomodi> Not yet, I have another run which is about to finish soon and I'll give it a look
<litghost> acomodi: Did you rerun fuzzer 025?
<litghost> To pick up the SDP bit?
<sf-slack> <acomodi> litghost: No, unfortunately I started the run before your suggestion. I am also about to start another one with the SDP fix as soon as the fuzzer completes. It will take a while to route picosoc (~1 hour) since the routing takes 18 iterations
<sf-slack> <acomodi> BTW I am also adding an option to enable round_robin_packing
<litghost> acomodi: Try setting VPR_NUM_WORKERS=64
<litghost> acomodi: It's an environment variable
<litghost> Change 64 to a better #
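litghost's suggestion above can be applied like this. The value 64 is just his example; a number near your machine's core count is a reasonable guess:

```shell
# VPR_NUM_WORKERS controls how many parallel workers VPR's router uses.
# 64 is just the example from the discussion; tune it to your core count.
export VPR_NUM_WORKERS=64
```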
<litghost> acomodi: Also make sure you are running the release build
<sf-slack> <acomodi> litghost: Ok. What do you mean by release build?
<litghost> acomodi: Run "grep BUILD CMakeCache.txt" in the vtr build dir
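The check litghost describes is run from the VTR build directory; a sketch below fabricates a minimal CMakeCache.txt so the snippet is self-contained (in a real tree the file is generated by CMake, and the variable to look for is CMAKE_BUILD_TYPE):

```shell
# Sketch: inspect the CMake cache for the build type. A Release (or
# RelWithDebInfo) build of VPR routes far faster than a Debug build.
# Here we create a stand-in CMakeCache.txt; in practice just cd into
# your VTR build directory and run the grep.
mkdir -p /tmp/vtr-build-demo && cd /tmp/vtr-build-demo
echo 'CMAKE_BUILD_TYPE:STRING=Release' > CMakeCache.txt
grep BUILD CMakeCache.txt
```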
<sf-slack> <acomodi> litghost: All clear, thanks
<litghost> acomodi: I think with that latest change, picosoc is correctly placed and routed, plus or minus timing issues
<litghost> acomodi: One thing to examine is resource utilization between yosys+vpr and vivado
<litghost> acomodi: Also are you using the DRAM progmem.v or the BRAM progmem.v?
<litghost> acomodi: If we are still using the DRAM progmem.v that could explain why the routing is so hard
<sf-slack> <acomodi> litghost: yes, and I'll make sure that the segfault issue is completely solved
<sf-slack> <acomodi> litghost: so I have been using the latest master
<sf-slack> <mkurc> @litghost: Hi there, Right now the picosoc program is stored in DRAM
<sf-slack> <mkurc> @litghost: But as far as I remember Yosys did infer one BRAM for some other RAM memory in the design
<sf-slack> <acomodi> @litghost, @mkurc: Ok, then it makes perfect sense: there is a lot of logic used to store the progmem, so it is probably better to move it to BRAMs I guess
<sf-slack> <mkurc> @acomodi You can examine what resources are used via Yosys. Run the Yosys manually (in interactive mode), import the .eblif file and issue the "stat" command. You should get a list of primitives with counts
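mkurc's suggestion can also be done as a one-shot command instead of an interactive session. This is a sketch: the filename `design.eblif` is a placeholder, and you should check that your Yosys build's `read_blif` accepts the extended-blif output of your flow:

```shell
# Sketch: load the packed netlist into Yosys and print primitive counts.
# "design.eblif" is a placeholder for your actual .eblif file.
yosys -p 'read_blif design.eblif; stat'
```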
<sf-slack> <mkurc> @litghost: If it is a priority I can make the picosoc use BRAM(s) starting from tomorrow morning.
<litghost> mkurc: Probably a good plan. FYI after reverting the DRAM IS_WCLK_INVERTED change, I think we might be ready to cleanup and push the yosys changes upstream. I did a diff after merging from master, and it looked pretty good
<sf-slack> <mkurc> @litghost: Ok, I'll look into it too.
<sf-slack> <acomodi> litghost, @mkurc: this is the utilization https://pastebin.com/9EdhGE1R
<tpb> Title: === basys3_demo === Number of wires: 16140 Number of wire - Pastebin.com (at pastebin.com)
<litghost> 64 DRAMs seems like a low number. That should pack into 32 SLICEMs
<sf-slack> <mkurc> Yes... Maybe Yosys did actually infer the BRAM and packed the program into it. There is one BRAM there.
<sf-slack> <mkurc> Currently there are 759 32-bit words in the program memory
<sf-slack> <acomodi> `round_robin_packing` is now optional and can be set with a flag, I will add the option to the `xc7 arch-defs`
<sf-slack> <acomodi> litghost: segfault should be now handled with latest commit on PR#9
<elms> It's pretty clunky to work on a change that requires changes to both vtr and arch-defs. Makes me think a submodule may be better than conda if we foresee a lot more co-development.
<elms> I guess the downside is build time increase.
<elms> mithro: litghost: do you know when conda packages are rebuilt (eg vtr)?
<litghost> elms: Agreed. conda packages are rebuilt when you bump the conda-packages repo
<litghost> elms: For co-dev, I use the env var override for local development
<elms> litghost: how do you easily set that?
<litghost> export VPR=<path to vpr binary>
<litghost> same as pre-cmake system
<litghost> I generally have both YOSYS and VPR always set
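The override litghost describes is just a matter of exporting the binary paths, e.g. from ~/.bashrc. The paths below are hypothetical examples; point them at your own local builds:

```shell
# Point the symbiflow-arch-defs build at locally built binaries instead of
# the conda-provided ones. Paths are examples; adjust to your checkouts.
export VPR="$HOME/vtr-verilog-to-routing/build/vpr/vpr"
export GENFASM="$HOME/vtr-verilog-to-routing/build/utils/fasm/genfasm"
export YOSYS="$HOME/yosys/yosys"
```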
<elms> does that work for genfasm?
<litghost> yep
<litghost> all binaries from conda can be overridden using the env var
<elms> cool I thought I tried that but maybe I switched terminals
<litghost> I have it set in my bashrc
<litghost> Anyone have experience with 7-series tristate output and tristate feedback?
<sf-slack> <acomodi> litghost: Travis CI build has passed with PR#9 of VTR, tomorrow I'll try to solve the brk issue, but I guess it is good to go now