01:18 <mithro> digshadow: So my database build failed on imuxlout
01:44 nonlinear has joined #symbiflow
02:17 _whitelogger has joined #symbiflow
03:58 citypw has joined #symbiflow
08:54 bitmorse_ has joined #symbiflow
09:26 karol2 has joined #symbiflow
09:26 karol2 is now known as kgugala
09:47 <nats`> okay mithro, got it :)
10:17 citypw has quit [Ping timeout: 250 seconds]
10:26 bitmorse_ has quit [Remote host closed the connection]
10:27 bitmorse_ has joined #symbiflow
11:16 bitmorse_ has quit [Ping timeout: 268 seconds]
11:21 bitmorse_ has joined #symbiflow
11:27 bitmorse_ has quit [Ping timeout: 246 seconds]
11:27 bitmorse_ has joined #symbiflow
12:12 bitmorse_ has quit [Read error: Connection reset by peer]
12:12 bitmorse__ has joined #symbiflow
12:42 bitmorse__ has quit [Ping timeout: 240 seconds]
13:00 bitmorse__ has joined #symbiflow
13:12 bitmorse__ has quit [Ping timeout: 250 seconds]
13:25 citypw has joined #symbiflow
13:29 bitmorse__ has joined #symbiflow
13:42 bitmorse__ has quit [Ping timeout: 245 seconds]
13:58 bitmorse__ has joined #symbiflow
14:12 bitmorse__ has quit [Ping timeout: 252 seconds]
17:34 <mithro> digshadow: I'm blocked on building a new database because of the imuxlout issue
17:37 <digshadow> mithro: it's not a new issue, but someone is actively looking at it right now
17:38 <mithro> Do I just keep rerunning the fuzzer?
17:39 <kgugala> I'm looking at that
17:39 <kgugala> I think I'm getting closer, but there are still some issues that need to be resolved
22:00 <tpb> Title: [TCL] 072 Memory Leak Split job attempt - Pastebin.com (at pastebin.com)
22:01 <nats`> I'm trying something like that for the 072 fuzzer problem
22:01 <nats`> from what I saw, the problem comes from the get_nodes function
22:01 <nats`> my hope is that the temporary interpreter will end up cleaned, along with everything inside it
22:01 <nats`> if not, we may need to use a sort of bash script on top to call the tcl with the right index value
22:02 <nats`> and sorry for the crappy tcl, but it's been a long time since I wrote a line of that... language
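For reference, a minimal sketch of the temporary child-interpreter approach being described here, assuming the Vivado commands each block needs are aliased into the child; the chunk size, variable names, and processing body are illustrative, not the actual pastebin script:

    # Illustrative sketch: process pips in blocks, each inside a throwaway
    # child interpreter that is deleted afterwards, in the hope that its
    # whole context is freed with it.
    set pips [get_pips]
    set block_size 10000
    for {set start 0} {$start < [llength $pips]} {incr start $block_size} {
        # lrange clamps the end index, so the last block may simply be short.
        set stop [expr {$start + $block_size - 1}]
        set child [interp create]
        # The child has no Vivado commands of its own; alias in what it needs.
        interp alias $child get_nodes {} get_nodes
        $child eval [list set block [lrange $pips $start $stop]]
        $child eval {
            foreach pip $block {
                # per-pip processing would go here
                set nodes [get_nodes -downhill -of_object $pip]
            }
        }
        interp delete $child
    }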
22:06 <nats`> we will soon have an answer, but it may be that the main vivado interpreter doesn't clean everything up like it should
22:15 <litghost> nats: Does that lower the memory usage as expected?
22:20 <nats`> litghost, uhhmm, still have to wait
22:20 <nats`> it doesn't fall after each interpreter delete, but apparently the GC could take care of that later when memory is needed
22:20 <nats`> but I don't think it will
22:21 <nats`> in the worst case I'll split it at the shell script level
22:21 <litghost> How are you measuring peak memory usage? I've been using "/usr/bin/time -v <program>"
22:21 <nats`> I'm only looking at top
22:21 <nats`> is that enough?
22:21 <nats`> by the way, the tcl memory module is not available in the vivado shell
22:22 <nats`> I guess they didn't compile it with the right flag
22:22 <nats`> do you think my tcl code is clean enough?
22:22 <litghost> using /usr/bin/time provides a nice summary of memory usage and CPU time
22:22 <litghost> tcl looks fine
22:23 <litghost> We should write a comment about why the extra complexity exists, but hold off until it's proven to work
22:23 <nats`> I ran a lot of tests
22:23 <nats`> and it appears that calling get_nodes, no matter how you do it, eats memory until you close the vivado tcl shell
22:24 <litghost> Likely vivado is opening a bunch of internal data structures, and doesn't close them
22:24 <tpb> Title: [TCL] get_nodes leak ? - Pastebin.com (at pastebin.com)
22:24 <litghost> For systems with plenty of memory, that is likely a good choice
22:24 <nats`> in that simple loop it still eats all the ram, even with an explicit unset
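Roughly the shape of the loop being described, as an illustration rather than the actual pastebin: even with an explicit unset after every call, resident memory keeps growing until the vivado tcl shell exits.

    # Illustrative reproduction of the reported pattern: call get_nodes for
    # every pip and unset the result immediately; memory still climbs.
    set i 0
    foreach pip [get_pips] {
        set nodes [get_nodes -downhill -of_object $pip]
        unset nodes
        incr i
        if {$i % 100000 == 0} { puts "processed $i pips" }
    }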
22:24 <nats`> what is a lot? :D
22:25 <litghost> I have 128 GB, runs fine on a 50k part
22:25 <litghost> However there are 100k and 200k and higher parts
22:25 <litghost> At some point even my system will fall over
22:25 <nats`> what worries me is that if it doesn't work with a slave interpreter, there's a huge problem in their tcl interpreter
22:26 <nats`> because a slave interpreter is deleted along with all of its context
22:26 <nats`> at least it should be
22:26 <litghost> get_nodes is their interface; there could actually be a non-tcl leak present in that interface
22:26 <nats`> but if I'm not wrong, the GC implementation of tcl is left to the manufacturer
22:26 <nats`> uhhmm good point!
22:26 <litghost> We are also using a fairly old Vivado version, so it's possible this bug was already fixed in a newer version
22:26 <nats`> that's usual with C wrappers
22:27 <nats`> I also found something interesting about old vivado
22:27 <tpb> Title: Memory Leak in Vivado TCL - Community Forums (at forums.xilinx.com)
22:27 <nats`> they added a parameter to not pipe all puts through the vivado core
22:31 <nats`> it should explode soon
22:31 <nats`> it was auto-killed by linux :D
22:32 <nats`> StartI: 5844379 - StopI: 6188166
22:32 <nats`> inside interpreter 5844379 6188166
22:32 <nats`> WARNING: [Vivado 12-2548] 'get_pips' without an -of_object switch is potentially runtime- and memory-intensive. Please consider supplying an -of_object switch. You can also press CTRL-C from the command prompt or click cancel in the Vivado IDE at any point to interrupt the command.
22:32 <nats`> get_pips: Time (s): cpu = 00:00:09 ; elapsed = 00:00:09 . Memory (MB): peak = 16285.488 ; gain = 510.996 ; free physical = 307 ; free virtual = 459
22:32 <nats`> /home/nats/Xilinx/Vivado/2017.2/bin/loader: line 179: 10787 Killed "$RDI_PROG" "$@"
22:33 <nats`> so I guess a good mitigation would be to make an overlay, either with another tcl script or with a bash script, to start the block processing at the right index
22:34 <nats`> a bash script seems consistent with the build process we use in the fuzzers
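One possible shape for the tcl side of that overlay, assuming the wrapper passes a start index and block size via -tclargs; the script, file, and variable names here are illustrative, not the actual fuzzer code:

    # Hypothetical per-block script: each Vivado invocation processes one block.
    # A wrapper (bash or tcl) could loop over start indices and run e.g.:
    #   vivado -mode batch -source split_job.tcl -tclargs $start $block_size
    lassign $argv start block_size
    set pips [get_pips]
    # lrange clamps the end index, so the last block may simply be short.
    set stop [expr {$start + $block_size - 1}]

    set fp [open "downhill_${start}.txt" w]
    foreach pip [lrange $pips $start $stop] {
        # Disconnected pips just return an empty node list here.
        foreach node [get_nodes -downhill -of_object $pip] {
            puts $fp "$pip $node"
        }
    }
    close $fp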
22:34 <nats`> uhhmmm I can test with 2017.4, I have it installed!
22:36 <nats`> nats@nats-MS-7A72:~/project/symbiflow/testing/072_ram_optim$ source /home/nats/Xilinx/Vivado/201
22:36 <nats`> (env) nats@nats-MS-7A72:~/project/symbiflow/testing/072_ram_optim$ source /home/nats/Xilinx/Vivado/2017.4/settings64.sh
22:36 <nats`> (env) nats@nats-MS-7A72:~/project/symbiflow/testing/072_ram_optim$ vivado -mode batch -source split_job.tcl
22:36 <nats`> 2016.4/ 2017.2/ 2017.4/
22:36 <nats`> I may have a 2018 install too
22:37 <nats`> I should present the bill for the occupied hard drive to xilinx :)
22:38 <litghost> Anyways, I'm okay with a bash based approach
22:38 <litghost> You might want a two phase approach, where you identify the number of pips and then delegate to each vivado instance
22:39 <litghost> Much like your interpreter approach
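A sketch of what that first phase could look like, assuming the wrapper only needs the total pip count written somewhere it can read (the file name is illustrative):

    # Phase 1 sketch (hypothetical): dump the pip count so the wrapper script
    # can decide how many block-processing Vivado runs to launch.
    set fp [open "pip_count.txt" w]
    puts $fp [llength [get_pips]]
    close $fp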
22:39 <nats`> and I was thinking of writing each block to a different file
22:39 <nats`> downhill_index.txt
22:39 <nats`> downhill_$index.txt
22:39 <litghost> ah sure, and then concat them
22:40 <nats`> they can easily be merged afterwards, and it would avoid generating tons of multi-GB text files on the hard drive
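A minimal sketch of that merge step in tcl, once every per-block run has finished (file names are illustrative):

    # Concatenate the per-block outputs into a single file.
    set parts [lsort [glob downhill_*.txt]]
    set out [open "downhill.txt" w]
    foreach f $parts {
        set in [open $f r]
        puts -nonewline $out [read $in]
        close $in
    }
    close $out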
22:40 <tpb> Title: prjxray/fuzzers/073-get_counts at master · SymbiFlow/prjxray · GitHub (at github.com)
22:40 <nats`> uhhmm I suggest I do things step by step because I'm really new to the project :)
22:40 <nats`> I don't want to make a mistake and break things :)
22:41 <nats`> by the way, I just noticed: WARNING: [Vivado 12-2683] No nodes matched 'get_nodes -uphill -of_object INT_R_X1Y149/INT_R.WW4END_S0_0->>WW2BEG3'
22:41 <nats`> could it be a problem when you try to get_nodes on a failed match?
22:41 <litghost> that is fine, some pips are disconnected
22:41 <nats`> you know, the usual free that isn't covered on a return path?
22:43 <nats`> okay, time to go to bed for me, but I'll write the bash version tomorrow and test it before submitting a pull request
22:44 <nats`> and that will also fix the 072-074 fuzzer builds
22:44 <nats`> I already wrote it but can't test it yet
22:49 <tpb> Title: All output products should go into "build" dir · Issue #171 · SymbiFlow/prjxray · GitHub (at github.com)
22:49 <litghost> Especially if you are adding more intermediates
22:57 <nats`> litghost, that's what I'm fixing :)
22:57 <nats`> that's why I'm patching 072 first, because I have patches for 071-074
22:57 <nats`> 071 is merged but the others are dependent on 072 for testing
22:58 <nats`> just before going to bed: I think I could start different interpreters in parallel for 072 as a compromise to get more speed
22:58 <nats`> talk tomorrow :)
23:53 tpb has quit [Remote host closed the connection]
23:53 tpb has joined #symbiflow