<mithro> digshadow: So my database build failed on imuxlout
karol2 has joined #symbiflow
karol2 is now known as kgugala
<nats`> oky mithro got it :)
<mithro> digshadow: I'm blocked on building a new database because of the imuxlout issue
<digshadow> mithro: it's not a new issue, but someone is actively looking at it right now
<mithro> Do I just keep rerunning the fuzzer?
<kgugala> I'm looking at that
<kgugala> I think I'm getting closer, but there are still some issues that need to be resolved
<tpb> Title: [TCL] 072 Memory Leak Split job attempt - Pastebin.com (at pastebin.com)
<nats`> I'm trying something like that for the 072 fuzzer problem
<nats`> from what I saw, the problem comes from the get_nodes function
<nats`> my hope is that the temporary interpreter will end up cleaned along with everything inside it
<nats`> if not, we may need a sort of bash script on top to call the tcl with the right index values
<nats`> and sorry for the crappy tcl, but it's been a long time since I wrote a line of that... language
<nats`> we'll have an answer soon, but it's possible the main vivado interpreter doesn't clean everything up like it should
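Roughly, the pattern under test looks like this (a sketch reconstructed from the discussion, not the actual pastebin; the block count and alias wiring are assumptions). One caveat: an aliased command still executes in the master interpreter, so memory Vivado allocates under get_nodes may not belong to the slave at all.

    # Process the pips in blocks, each inside a throwaway slave
    # interpreter, hoping that deleting the slave frees everything
    # allocated while it ran.
    set nblocks 20
    set npips [llength [get_pips]]
    set block [expr {($npips + $nblocks - 1) / $nblocks}]
    for {set i 0} {$i < $nblocks} {incr i} {
        set slave [interp create]
        # Expose the Vivado commands inside the slave by aliasing them
        # to the master's implementations ({} names the master interp).
        interp alias $slave get_pips  {} get_pips
        interp alias $slave get_nodes {} get_nodes
        $slave eval [list set start [expr {$i * $block}]]
        $slave eval [list set stop  [expr {($i + 1) * $block - 1}]]
        $slave eval {
            foreach pip [lrange [get_pips] $start $stop] {
                set nodes [get_nodes -uphill -of_object $pip]
                # ... dump $nodes to the output file here ...
            }
        }
        # Delete the slave and, in theory, its whole context with it.
        interp delete $slave
    }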
<litghost> nats: Does that lower the memory usage as expected?
<nats`> litghost, uhhmm still have to wait
<nats`> it doesn't fall after each interpreter delete, but apparently the GC could take care of that later when memory is needed
<nats`> but I think it won't
<nats`> let's see
<nats`> in the worst case I'll split it at the shell script level
<litghost> How are you measuring peak memory usage? I've been using "/usr/bin/time -v <program>"
<nats`> I'm only looking at top
<nats`> is it enough ?
<nats`> by the way, the memory command of tcl is not available in the vivado shell
<nats`> I guess they didn't compile it with the right flag
<litghost> :(
<nats`> do you think my code is clean enough in tcl ?
<litghost> using /usr/bin/time provides a nice summary of memory usage and CPU time
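For reference, GNU time is the standalone binary (not the shell builtin) and writes its -v summary to stderr; something like:

    # -v prints a resource summary, including peak RSS, on stderr.
    /usr/bin/time -v vivado -mode batch -source split_job.tcl 2> time.log
    # The "Maximum resident set size (kbytes)" line is the peak usage.
    grep 'Maximum resident set size' time.log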
<litghost> tcl looks fine
<litghost> We should write a comment about why the extra complexity exists, but hold off until it's proven to work
<nats`> sure
<nats`> I did a lot of testing
<nats`> and it appears that calling get_nodes, no matter how you do it, eats memory until you close the vivado tcl shell
<litghost> Likely vivado is opening a bunch of internal data structures, and doesn't close them
<tpb> Title: [TCL] get_nodes leak ? - Pastebin.com (at pastebin.com)
<nats`> certainly
<litghost> For systems with plenty of memory, that is likely a good choice
<nats`> in that simple loop it still eats all the ram, even with an explicit unset
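Presumably that simple loop is along these lines (a reconstruction, not the pastebin):

    # Nothing is retained on the tcl side, yet the process still grows:
    # the allocation happens inside Vivado's core, out of tcl's reach.
    foreach pip [get_pips] {
        set nodes [get_nodes -uphill -of_object $pip]
        unset nodes
    }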
<nats`> what is a lot ? :D
<litghost> I have 128 GB, runs fine on a 50k part
<litghost> However there are 100k and 200k and higher parts
<litghost> At some point even my system will fall over
<nats`> sure
<nats`> what worries me is that if it doesn't work with a slave interpreter, there's some huge problem in their tcl interpreter
<nats`> because a slave interpreter is deleted along with all its context
<nats`> at least it should be
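In plain tcl that contract is easy to check, watching RSS in top while this runs (whether the allocator hands the pages back to the OS right away is, as noted above, another matter):

    # Allocate ~100 MB inside a slave, then delete it; the variable
    # disappears along with the interpreter's whole context.
    set slave [interp create]
    $slave eval { set big [string repeat "x" 100000000] }
    interp delete $slave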
<litghost> get_nodes is their interface, so there could actually be a non-tcl leak present in that interface
<nats`> but if I'm not wrong, tcl's GC implementation is left up to the vendor
<nats`> uhhmmm good point !
<litghost> We are also using a fairly old Vivado version, so it's possible this bug was already fixed in a newer version
<nats`> something usual with C wrappers
<nats`> I also found something interesting about old vivado
<tpb> Title: Memory Leak in Vivado TCL - Community Forums3rd Party Header & Footer (at forums.xilinx.com)
<nats`> they added a parameter to avoid piping all puts output through the vivado core
<nats`> it should explode soon
<nats`> ..
<nats`> bang
<nats`> it was auto killed by linux :D
<nats`> Block: 17
<nats`> StartI: 5844379 - StopI: 6188166
<nats`> inside interpreter 5844379 6188166
<nats`> WARNING: [Vivado 12-2548] 'get_pips' without an -of_object switch is potentially runtime- and memory-intensive. Please consider supplying an -of_object switch. You can also press CTRL-C from the command prompt or click cancel in the Vivado IDE at any point to interrupt the command.
<nats`> get_pips: Time (s): cpu = 00:00:09 ; elapsed = 00:00:09 . Memory (MB): peak = 16285.488 ; gain = 510.996 ; free physical = 307 ; free virtual = 459
<nats`> /home/nats/Xilinx/Vivado/2017.2/bin/loader: line 179: 10787 Killed "$RDI_PROG" "$@"
<nats`> so I guess a good mitigating solution would be an overlay, either another tcl script or a bash script, that starts the block processing at the right index
<nats`> a bash script seems consistent with the build process we use in the fuzzers
<nats`> uhhmmm I can test with 2017.4, I have it installed!
<nats`> nats@nats-MS-7A72:~/project/symbiflow/testing/072_ram_optim$ source /home/nats/Xilinx/Vivado/201
<nats`> (env) nats@nats-MS-7A72:~/project/symbiflow/testing/072_ram_optim$ source /home/nats/Xilinx/Vivado/2017.4/settings64.sh
<nats`> (env) nats@nats-MS-7A72:~/project/symbiflow/testing/072_ram_optim$ vivado -mode batch -source split_job.tcl
<nats`> 2016.4/ 2017.2/ 2017.4/
<nats`> let's try
<nats`> I may have a 2018 install too
<nats`> I should present the bill for the occupied hard drive to xilinx :)
<litghost> Anyways, I'm okay with a bash based approach
<litghost> You might want a two-phase approach, where you identify the number of pips and then delegate to each vivado instance
<litghost> Much like your interpreter approach
<nats`> yep :)
<nats`> and I was thinking of writing each block to a different file
<nats`> downhill_$index.txt
<litghost> ah sure, and then concat them
<nats`> they can easily be merged afterwards, and it would avoid generating tons of multi-GB text files on the hard drive
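At the shell level that could look something like this (a sketch: the block count is arbitrary, and it assumes split_job.tcl reads its block index from argv via -tclargs and writes downhill_$index.txt):

    #!/bin/bash
    # Each block runs in a fresh vivado process, so the OS reclaims all
    # of its memory on exit; the per-block files are merged at the end.
    NBLOCKS=20
    for index in $(seq 0 $((NBLOCKS - 1))); do
        vivado -mode batch -source split_job.tcl -tclargs "$index" || exit 1
    done
    for index in $(seq 0 $((NBLOCKS - 1))); do
        cat "downhill_${index}.txt"
    done > downhill.txt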
<litghost> FYI you could move ordered wires before https://github.com/SymbiFlow/prjxray/tree/master/fuzzers/073-get_counts and use that for the pip count
<tpb> Title: prjxray/fuzzers/073-get_counts at master · SymbiFlow/prjxray · GitHub (at github.com)
<nats`> uhhmm I suggest I do things step by step, because I'm really new to the project :)
<nats`> I don't want to make a mistake and break things :)
<litghost> sure
<nats`> by the way I just realized WARNING: [Vivado 12-2683] No nodes matched 'get_nodes -uphill -of_object INT_R_X1Y149/INT_R.WW4END_S0_0->>WW2BEG3'
<nats`> could it be a problem when you try to get_nodes on a failed match ?
<litghost> that is fine, some pips are disconnected
<nats`> you know, the usual free that isn't covered on an error return?
<nats`> oky time to go to bed for me, but I'll write the bash version tomorrow and test it before submitting it as a pull request
<nats`> and that will fix the 072-074 fuzzer builds
<nats`> I already wrote it but can't test it yet
<nats`> good night
<litghost> nats: Sounds good. As you go, it would be good to fix https://github.com/SymbiFlow/prjxray/issues/171
<tpb> Title: All output products should go into "build" dir · Issue #171 · SymbiFlow/prjxray · GitHub (at github.com)
<litghost> Especially if you are adding more intermediates
<nats`> litghost, that's what I'm fixing :)
<nats`> that's why I'm patching 072: I have patches for 071-074
<nats`> 071 is merged, but the others depend on 072 for testing
<nats`> just before going to bed: I think I could start different interpreters in parallel for 072, as a compromise to get more speed
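Reading "interpreters" as separate vivado processes (slave interps share one thread, so real parallelism needs processes), that compromise could be as simple as this illustrative fragment:

    # Launch a handful of blocks concurrently; memory per process stays
    # bounded by its block, at the cost of several vivados running at once.
    for index in 0 1 2 3; do
        vivado -mode batch -source split_job.tcl -tclargs "$index" &
    done
    wait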
<nats`> talk tomorrow :)
<nats`> good night