_whitelogger has joined #symbiflow
OmniMancer has joined #symbiflow
Bertl is now known as Bertl_zZ
lopsided98 has quit [Quit: No Ping reply in 180 seconds.]
lopsided98 has joined #symbiflow
kgugala has quit [Quit: WeeChat 1.9.1]
citypw has joined #symbiflow
<duck2> so i left `make buttons_basys3_bin` running overnight and woke up to frozen laptop and some corrupted files
<sf-slack1> <kgugala> it may consume a lot of RAM
<sf-slack1> <kgugala> how much memory do you have on your laptop?
<duck2> i think mem + swap were full so computer just went derp. how much RAM do i need to build the archs?
<duck2> 4gb :(
<sf-slack1> <kgugala> memory consumption depends on the project
<sf-slack1> <kgugala> but 4gb is way too low
<sf-slack1> <kgugala> we will optimize the flow to consume less memory
<sf-slack1> <kgugala> it's on the TODO list
<sf-slack1> <kgugala> duck2: do you want to take a look at that?
<duck2> i think the .xml files are generated but vpr makes everything break in seconds, and the computer acts weird when rebooted, unlike the other times i went oom
<duck2> i wonder if it tries to load a huge file into memory
<duck2> kgugala: the arch generation or vpr? arch generation also takes a lot of memory but it passes through when it's the only thing running. i also tried to run prjxray_routing_import.py with pypy and it was ok i think
<sf-slack1> <kgugala> vpr creates a lot of data structures in RAM
<sf-slack1> <kgugala> this could be the reason why your laptop cannot handle it
<duck2> i agree. i can try and profile it while restricting its memory. i know that pnr is a hard operation but it's the first time my laptop is humiliated like this and i'm offended :D
<sf-slack1> <kgugala> :slightly_smiling_face:
<daveshah> The PnR part shouldn't be that memory intensive
<daveshah> nextpnr can do PnR for an 85k logic element ECP5 in ~200MB
<daveshah> Admittedly the database build is quite memory intensive, but that's a one-off operation
<duck2> i thought that too, most memory intensive processes fill up my ram and freeze the gui but don't corrupt my /usr/bin on a reboot. this is interesting
<daveshah> I suspect that's more bad luck than anything VPR specific
<daveshah> Perhaps because it was doing disk operations at the same time as running out of RAM
<duck2> looks like it. i really shouldn't have run it twice i think. will fix debian and give it another shot
<litghost> duck2: Ya, the current XML read method in VPR consumes quite a bit of RAM. I typically measure ~7 GB. This can be reduced in the future, but it is what we have right now
Bertl_zZ is now known as Bertl
citypw has quit [Ping timeout: 272 seconds]
citypw has joined #symbiflow
<sf-slack1> <acomodi> SLICEM modes update: I have finally managed to get slicem carrychain to be packed. I'll clean up and open a PR with WIP as I still need to test that everything else works fine and is not affected by the changes
OmniMancer has quit [Quit: Leaving.]
<litghost> acomodi: Awesome, what ended up changing?
<sf-slack1> <acomodi> It's in the way `try_intra_lib_nets` computes `router_data`: every time there is a mode conflict between `lib_nets`, I mark as `illegal` the mode that cannot be selected by one of the children of a pb (in our case the `SLICEM_MODES` pb_type) and try to route again, this time without considering the `illegal` mode (in our case the illegal mode was `DRAMs`, since no {N}_LUT actually had to be set to DRAM mode, so in
<sf-slack1> the end the LUTs mode is selected)
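[editor's note: a hypothetical, heavily simplified sketch of the retry loop acomodi describes above. The names route_intra_lb, SLICEM_MODES and DRAMs echo the discussion, but the types and the toy router stub are invented here for illustration and are not VPR's actual packer API or data structures.]

// Sketch: mark the conflicting mode illegal, then re-run intra-lb routing.
// Build: g++ -std=c++17 mode_retry.cpp
#include <cstdio>
#include <optional>
#include <set>
#include <string>

struct ModeConflict {
    std::string pb_type;  // child pb where two nets disagree, e.g. "SLICEM_MODES"
    std::string mode;     // the mode that cannot actually be used, e.g. "DRAMs"
};

// Toy stand-in for the intra-lb router: keeps reporting a conflict on the
// DRAM mode until that mode has been marked illegal, then succeeds (LUT mode).
static std::optional<ModeConflict> route_intra_lb(const std::set<std::string>& illegal) {
    if (illegal.count("SLICEM_MODES:DRAMs") == 0) {
        return ModeConflict{"SLICEM_MODES", "DRAMs"};
    }
    return std::nullopt;
}

int main() {
    std::set<std::string> illegal_modes;
    for (int attempt = 0; attempt < 8; ++attempt) {
        auto conflict = route_intra_lb(illegal_modes);
        if (!conflict) {
            std::printf("routed after %d attempt(s), %zu mode(s) marked illegal\n",
                        attempt + 1, illegal_modes.size());
            return 0;
        }
        // Mark the unusable mode as illegal and try routing again without it.
        illegal_modes.insert(conflict->pb_type + ":" + conflict->mode);
    }
    std::fprintf(stderr, "giving up: could not resolve mode conflicts\n");
    return 1;
}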
<litghost> acomodi: Did you test packing a DRAM with a LUT with the new code?
<litghost> acomodi: Otherwise sounds good
<litghost> acomodi: One thing to start working on is a VPR only test case. As we are going to start upstreaming these changes, please add the smallest test case you can manage that tests this feature
<sf-slack1> <acomodi> litghost: Ok, so it will be necessary to use a small test architecture instead of a whole a7 right?
<litghost> acomodi: Ya, that's the idea. Hopefully your mind is in this space enough to be able to create a reduced test case
<litghost> acomodi: I'd put up the PR with the VPR changes when you can so we can start the review cycle, but I do want to start adding test cases for the VPR changes
<sf-slack1> <acomodi> litghost: Ok, sounds good, I'll start dealing with it then. Anyway, by packing DRAM with LUT you mean running one of the dram codes or having a SLICEM with both LUT and DRAMs?
<litghost> acomodi: Latter. You should be able to leverage the existing pack resource tests we have in xc7/tests/dram
<litghost> acomodi: Just add a new test case
<sf-slack1> <acomodi> litghost: All right
GuzTech has quit [Remote host closed the connection]
<duck2> hello from fresh install ^^ i plan to do the rest of the builds in a memory-limited container and hope only the container breaks on oom. then i should profile and see if there is a way to make it through except by downloading more ram or throwing swap at it. as for the XML arch generation and VPR, did anyone notice any obvious pain points of memory
<duck2> usage? so that i have somewhere to start
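[editor's note: one way to get an effect similar to duck2's memory-limited container, shown here only as an illustrative sketch: cap the child's address space with setrlimit before exec'ing the build step, so allocations fail with ENOMEM instead of pushing the whole machine into swap. The wrapper name limit_as.cpp is invented; this is Linux-only and not part of the SymbiFlow flow.]

// Usage: ./limit_as <bytes> <command> [args...]
// Build: g++ -std=c++17 limit_as.cpp -o limit_as
#include <sys/resource.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    if (argc < 3) {
        std::fprintf(stderr, "usage: %s <bytes> <command> [args...]\n", argv[0]);
        return 1;
    }
    rlimit rl;
    rl.rlim_cur = rl.rlim_max = std::strtoull(argv[1], nullptr, 10);
    // RLIMIT_AS caps total virtual memory for this process and its exec'd child.
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        std::perror("setrlimit");
        return 1;
    }
    execvp(argv[2], &argv[2]);
    std::perror("execvp");
    return 1;
}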
<litghost> duck2: VPR currently loads the entire XML rr graph into memory, which is relatively wasteful: https://github.com/SymbiFlow/vtr-verilog-to-routing/blob/master%2Bwip/vpr/src/route/rr_graph_reader.cpp
<tpb> Title: vtr-verilog-to-routing/rr_graph_reader.cpp at master+wip · SymbiFlow/vtr-verilog-to-routing · GitHub (at github.com)
<litghost> duck2: Switching to an incremental reader will likely save some memory
<litghost> I believe it was significant
<mithro> duck2: Plenty of room for memory optimization, but it is easier to optimize when you have a working solution - so correctness has been the priority
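[editor's note: the "incremental reader" litghost mentions is the streaming (SAX-style) pattern sketched below. This is not VPR's actual rr_graph_reader code; it is a minimal expat-based pass over an rr_graph XML that only counts <node> and <edge> elements, to show how the file can be walked without materializing the whole document tree in memory.]

// Build: g++ -std=c++17 rr_count.cpp -lexpat
#include <expat.h>
#include <cstdio>
#include <cstring>

struct Counts {
    unsigned long nodes = 0;
    unsigned long edges = 0;
};

// Called once per opening tag; only the current element is held in memory.
static void XMLCALL on_start(void* user, const XML_Char* name, const XML_Char**) {
    Counts* c = static_cast<Counts*>(user);
    if (std::strcmp(name, "node") == 0) ++c->nodes;
    else if (std::strcmp(name, "edge") == 0) ++c->edges;
}

int main(int argc, char** argv) {
    if (argc != 2) { std::fprintf(stderr, "usage: %s rr_graph.xml\n", argv[0]); return 1; }
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 1; }

    Counts counts;
    XML_Parser parser = XML_ParserCreate(nullptr);
    XML_SetUserData(parser, &counts);
    XML_SetElementHandler(parser, on_start, nullptr);

    char buf[1 << 16];
    size_t len;
    while ((len = std::fread(buf, 1, sizeof buf, f)) > 0) {
        if (XML_Parse(parser, buf, static_cast<int>(len), 0) == XML_STATUS_ERROR) {
            std::fprintf(stderr, "parse error at line %lu\n",
                         (unsigned long)XML_GetCurrentLineNumber(parser));
            return 1;
        }
    }
    XML_Parse(parser, buf, 0, 1);  // final chunk: signal end of document
    XML_ParserFree(parser);
    std::fclose(f);

    std::printf("nodes=%lu edges=%lu\n", counts.nodes, counts.edges);
    return 0;
}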
Bertl is now known as Bertl_oO
tpb has quit [Remote host closed the connection]
tpb has joined #symbiflow