cr1901_modern has quit [Read error: Connection reset by peer]
cr1901_modern has joined #nmigen
revolve has quit [Read error: Connection reset by peer]
revolve has joined #nmigen
Degi_ has joined #nmigen
Degi has quit [Ping timeout: 268 seconds]
Degi_ is now known as Degi
cr1901_modern1 has joined #nmigen
cr1901_modern has quit [Ping timeout: 276 seconds]
cr1901_modern1 has quit [Quit: Leaving.]
cr1901_modern has joined #nmigen
<d1b2>
<4o> can a pysim simulation object be deep-copied?
<d1b2>
<4o> context: i'm running hundreds of simulations in parallel. the longest part of a sim run appears to be the simulation ctor call. wonder if there are ways to optimize this part, e.g. create one simulation then copy it the required number of times
<whitequark[m]>
yes
<whitequark[m]>
wait, hm
<whitequark[m]>
sorry, think i planned that but never actually implemented it
<d1b2>
<4o> ok, thanks
<d1b2>
<4o> well i can give a piece of feedback as well. spent 2 hours writing a test wrapper that starts n ^ 6 simulations in separate threads for our server. can't imagine how long it would take to do this with hdl + bash + tcl + modelsim. sincere thanks for nmigen+pysim
<whitequark>
thank you :)
<d1b2>
<Darius> what about forking the process after ctor?
<d1b2>
<Darius> I mean, sure.. it's horrible
<d1b2>
<Darius> but..!
<d1b2>
<4o> each thread creates its own simulation and runs it
<d1b2>
<4o> well otherwise multiple stimuli processes will interfere with one simulation and the whole thing will fail
<d1b2>
<Darius> but if you forked it would duplicate the work and then you can tweak each one and then run them on other CPUs
<d1b2>
<Darius> in separate processes
<d1b2>
<4o> is there a python tool that will fork memory as well as execution queue?
<d1b2>
<4o> i'm using threading.Thread and it looks like objects created before forking stay shared
<d1b2>
<Darius> I mean you can call os.fork()
<d1b2>
<Darius> honestly I dunno how much work it would be or what potential breakage you would see though
<d1b2>
<Darius> yes I would expect all the threads would be duplicated to each fork
<d1b2>
<4o> ok, thanks. i'll have a look at it
<d1b2>
<Darius> it would definitely be a kludge 😅
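[editor's note: a minimal, self-contained sketch of the os.fork() pattern Darius describes above — pay the expensive setup once, fork, tweak each child, run. The `expensive_setup` dict is a hypothetical stand-in for pysim's Simulator construction, not nMigen API; note os.fork() is POSIX-only, and forking a process that already has threads is exactly the kludge being warned about.]

```python
import os

# Hypothetical stand-in for the slow Simulator constructor;
# in the real use case this would be nmigen's pysim setup.
def expensive_setup():
    return {"design": "counter", "seed": 0}

sim_state = expensive_setup()  # paid once, before forking

children = []
for i in range(4):
    pid = os.fork()
    if pid == 0:
        # Child: gets a copy-on-write snapshot of sim_state.
        sim_state["seed"] = i   # tweak this child's copy only
        # ... run this child's simulation here ...
        os._exit(0)             # never fall back into the parent's loop
    children.append(pid)

# Parent: reap all children.
for pid in children:
    os.waitpid(pid, 0)
```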
chipmuenk has joined #nmigen
<Sarayan>
4o: any reason not to use cxxsim? It's fast to start and probably way more amenable to instancing
<mindw0rk>
whois Darius
<d1b2>
<Darius> hands mindw0rk a /
<mindw0rk>
d1b2, thanks I needed that :)
pftbest has quit [Remote host closed the connection]
pftbest has joined #nmigen
<d1b2>
<4o> @Sarayan never tried it. Is it faster than pysim?
<d1b2>
<4o> As far as I understood, it simulates post-yosys netlists, right?
Sarayan has quit [Ping timeout: 260 seconds]
revolve has quit [Ping timeout: 252 seconds]
<d1b2>
<Olivier Galibert> It's 10-50 times faster than pysim, it goes through yosys (it's a yosys backend) and generates efficient c++
<d1b2>
<Olivier Galibert> so a minus, you lose python at sim time, and a plus, you lose python at sim time
revolve has joined #nmigen
<d1b2>
<4o> Ok, I'll have a look at it as well
<d1b2>
<DX-MON> 4o//Darius: in this instance, I'd suggest multiprocessing is the Python library you're after, rather than having to manually deal with os.fork()
<d1b2>
<DX-MON> allows you to spawn a bunch of sub-processes (as opposed to threads, which involve no os.fork(), which is why state is shared when using threading.Thread)
Bertl_zZ is now known as Bertl
<d1b2>
<Darius> I wondered about multiprocessing but I am not sure you can say "fork but don't exec" with it
jfng has joined #nmigen
<Chips4Makers[m]>
lkclkbeckmann : I will port PDKMaster @ co. to Sky130 indeed. It will be up to efabless to decide if they will accept design not using their harness on their Google sponsored shuttle.
pftbest has quit [Remote host closed the connection]
pftbest has joined #nmigen
revolve has quit [Read error: Connection reset by peer]
revolve has joined #nmigen
bvernoux has joined #nmigen
roamingr1 has quit [Ping timeout: 260 seconds]
feldim2425 has quit [Quit: ZNC 1.8.x-git-91-b00cc309 - https://znc.in]
feldim2425 has joined #nmigen
<lkcl>
whoops space missing there. kbeckmann ^
<lkcl>
Chips4Makers[m]: i've put in an application, we see how far it gets
Bertl is now known as Bertl_oO
<d1b2>
<DX-MON> Multiprocessing works by forking the Python process and continuing execution of some chosen child process path, meaning it's made for this task
pftbest has quit [Remote host closed the connection]
pftbest has joined #nmigen
pftbest has quit [Ping timeout: 260 seconds]
pftbest has joined #nmigen
lambda has joined #nmigen
lambda has quit [Client Quit]
lambda has joined #nmigen
Bertl_oO is now known as Bertl
lambda has quit [Client Quit]
lambda has joined #nmigen
lambda has quit [Client Quit]
roamingr1 has joined #nmigen
roamingr1 has quit [Ping timeout: 240 seconds]
roamingr1 has joined #nmigen
lambda has joined #nmigen
pftbest has quit [Read error: Connection reset by peer]
pftbest has joined #nmigen
revolve has quit [Read error: Connection reset by peer]
revolve has joined #nmigen
hellomeandyou has joined #nmigen
hellomeandyou has quit [Client Quit]
Yehowshua has joined #nmigen
Bertl is now known as Bertl_oO
Yehowshua has quit [Quit: Connection closed]
bvernoux has quit [Quit: Leaving]
Sarayan has joined #nmigen
Yehowshua has joined #nmigen
chipmuenk has quit [Quit: chipmuenk]
Yehowshua has quit [Quit: Ping timeout (120 seconds)]
Yehowshua has joined #nmigen
Yehowshua has quit [Quit: Ping timeout (120 seconds)]
Yehowshua has joined #nmigen
Yehowshua has quit [Quit: Ping timeout (120 seconds)]
Yehowshua has joined #nmigen
revolve has quit [Read error: Connection reset by peer]
revolve has joined #nmigen
nelgau has quit [Ping timeout: 265 seconds]
lf has quit [Ping timeout: 250 seconds]
lf has joined #nmigen
Yehowshua has quit [Quit: Ping timeout (120 seconds)]
<d1b2>
<Darius> yeah but you don't want to exec a child
<d1b2>
<Darius> you just want to do the setup, fork, tweak each child, run simulation
<d1b2>
<Darius> the fork just avoids the setup cost for each copy of the simulation (that is what I understood the problem to be)
<lkcl>
whitequark: ah, got a fascinating one i am completely unsure how to deal with, or if it is even a problem
<lkcl>
some code that was translated from VHDL by hand had an if/else statement
<lkcl>
the m.If was at the correct indentation level
<lkcl>
but the m.Else() had been added at the level of the statements *inside* the m.If() block
<whitequark[m]>
this is a bug
<lkcl>
of course, because contexts are not python if/else statements, there was no error reported
<whitequark[m]>
there should be a diagnostic for that case, rather than it being silently accepted
<lkcl>
ahh :)
<tpw_rules>
didn't i report and get that bug fixed already?
<tpw_rules>
many months ago but still
<lkcl>
it's quite fascinating
<whitequark[m]>
tpw_rules: i vaguely recall something like that
<lkcl>
i love language translators and the whole idea of using context managers is very cool. i have no idea how this would be fixed :)
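[editor's note: a pure-Python mock — not nMigen's actual Module internals — showing why the misplaced m.Else() above is silently accepted: `with` blocks nest arbitrarily, so an Else opened *inside* the If body is perfectly valid Python, and a diagnostic would have to come from the builder noticing an If context is still open when Else() is entered.]

```python
from contextlib import contextmanager

# Toy stand-in for nmigen's Module; the real builder differs.
class MockModule:
    def __init__(self):
        self.events = []

    @contextmanager
    def If(self, cond):
        self.events.append(("If", cond))
        yield
        self.events.append(("EndIf",))

    @contextmanager
    def Else(self):
        self.events.append(("Else",))
        yield
        self.events.append(("EndElse",))

m = MockModule()

# Correct form: Else is a sibling of If.
with m.If("cond"):
    pass
with m.Else():
    pass

# Buggy form from the hand-translated VHDL: Else opened *inside*
# the If body. Python raises no error for this nesting, so the
# mistake goes unnoticed unless the builder itself checks for it.
with m.If("cond"):
    with m.Else():
        pass
```

In the buggy case the recorded events end with `If, Else, EndElse, EndIf` — the Else closes before its If, which is the shape a diagnostic could detect.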