<pepijndevos_>
I'm thinking... what if I don't bake the source code into the Docker image? Just the vendor installation. Then I don't have to rebuild it at all and can just pull it, run fuzzing, generate python package
<pepijndevos_>
you can just mount the source dir in the image right...
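<pepijndevos_>
something like this, I guess (image name, mount point and entry point all made up — just the bind-mount idea):

```shell
# Mount the current checkout into the container instead of baking it in.
# "apicula-vendor", /src and fuzz.py are hypothetical placeholders.
docker run --rm \
  -v "$PWD":/src \
  -w /src \
  apicula-vendor:latest \
  python fuzz.py
```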
<pepijndevos_>
experimenting...
<pepijndevos_>
shit I need to set up docker properly...
<pepijndevos_>
waaah, in what folder is the action running??
<pepijndevos_>
python: can't open file 'setup.py': [Errno 2] No such file or directory
<omnitechnomancer>
add pwd?
<pepijndevos_>
I'm just an idiot who forgets to commit things
<omnitechnomancer>
ah
<pepijndevos_>
if: startsWith(github.ref, 'refs/tags') just never seems to trigger... wtf
<omnitechnomancer>
hmmmm
<pepijndevos_>
Okay found it... I was triggering on commits, but pushing tags is actually a separate thing, so now it runs on both, which is a bit stupid but okay
<omnitechnomancer>
tags are not commits
<pepijndevos_>
yea okay, but now if you push a commit and a tag, CI runs twice, once for the commit and once for the tag, and only the tag run pushes to PyPI. Oh well...
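<pepijndevos_>
roughly this shape, for the record (job and step names are made up, only the trigger/`if` logic is the point):

```yaml
# Hypothetical workflow sketch: fire on both commits and tags,
# but gate the upload step on the ref being a tag.
on:
  push:
    branches: ['*']
    tags: ['*']

jobs:
  fuzz:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... run fuzzing, build the python package ...
      - name: Publish to PyPI
        if: startsWith(github.ref, 'refs/tags/')
        run: twine upload dist/*
```

pushing a commit and a tag together still runs the workflow twice, once per ref; the `if` just keeps the commit run from uploading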
<pepijndevos_>
the good news is it works, and is half an hour faster than before, and now I can start doing what I was planning on doing before I got into this yak shaving: nextpnr stuff
<pepijndevos_>
Although my brain is a bit fried by now, so probably not getting much else done today
<pepijndevos_>
What is slightly concerning is... GW1N-1 finishes in 15 min, GW1N-9 in 1 h, imagine what GW2A-55 will do... gonna need that 6 hour runtime probably...
<Lofty>
pepijndevos_: what's the runtime measuring?
<pepijndevos_>
Lofty, just how long fuzzing takes on CI.
<Lofty>
Time to get optimising then~
<trabucayre>
python is good, but python is slow ;-)
<pepijndevos_>
Nah, it's completely dominated by vendor pnr runtime
<trabucayre>
why not use apicula instead of the vendor tool? :)
<trabucayre>
(ouroboros)
<Lofty>
Okay but seriously, there's a lot that can be computed ahead of time
<pepijndevos_>
The majority of fuzzing runtime is spent on clock routing, all the other stuff finishes in minutes. Clock routing is fairly arbitrary and uncontrollable, so for each column and each clock and each quadrant you need to figure out how it connects.
<pepijndevos_>
Since you can't constrain or place a pip to my knowledge, you can't test multiple clock routes in one run because they end up influencing each other
<pepijndevos_>
So you basically end up sweeping a single DFF across a row, and then repeat for every clock and every quadrant. That's like 20*8*4 runs for branches alone. Plus like a hundred runs to figure out the quadrants in the first place, and a dozen runs to figure out the center muxes.
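<pepijndevos_>
back of the envelope (all the counts are rough, from memory):

```python
# Rough run-count estimate; all numbers are the approximate ones above.
columns, clocks, quadrants = 20, 8, 4

branch_runs = columns * clocks * quadrants  # one DFF sweep per clock per quadrant
quadrant_runs = 100                         # figuring out the quadrants themselves
center_mux_runs = 12                        # figuring out the center muxes

total = branch_runs + quadrant_runs + center_mux_runs
print(branch_runs, total)  # 640 752
```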
<pepijndevos_>
The only option I see to optimize that is to binary-search the quadrants, but that makes it sequential instead of parallel.
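<pepijndevos_>
i.e. something like this, where `run_probe(col)` stands in for a full vendor PnR run that reports which quadrant a column's clock lands in (hypothetical, and each call has to wait for the previous one):

```python
# Binary-search the column where the quadrant changes.
# ~log2(20) = 5 probes instead of a 20-column sweep, but strictly sequential.
def find_boundary(lo, hi, run_probe):
    """Find the first column in (lo, hi] whose quadrant differs from column lo."""
    left_quadrant = run_probe(lo)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if run_probe(mid) == left_quadrant:
            lo = mid
        else:
            hi = mid
    return hi
```

whereas the linear sweep can just throw every column at the vendor tool in parallel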
<pepijndevos_>
Or hardcode a bunch of stuff...
<pepijndevos_>
I mean, all of it can be computed ahead of time, if the db is deterministic. But yea that's basically hardcoding stuff, which will probably come back to bite you when you add new devices.
<pepijndevos_>
On my machine it actually completes much faster because it has a lot of cores. On CI it appears to run with 2 cores, so takes much longer.
<pepijndevos_>
Hmmmmmm, once I have the quadrants, I could actually cut fuzzing time for the branches in 4 by simultaneously fuzzing one DFF in each quadrant. Kinda risky if it decides to screw with which clock is used or something dumb.
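<pepijndevos_>
in other words, zip the columns across quadrants so each run carries one DFF per quadrant (column lists made up, and this doesn't handle the tool deciding to reroute clocks between quadrants):

```python
# Batch one column from each quadrant into a single PnR run.
# Assumes quadrant membership is already fuzzed; the lists are illustrative.
from itertools import zip_longest

quadrant_columns = {
    "TL": [1, 2, 3, 4, 5],
    "TR": [6, 7, 8, 9, 10],
    "BL": [11, 12, 13, 14, 15],
    "BR": [16, 17, 18, 19, 20],
}

# One run per zipped "row": run count drops from the sum of the list
# lengths to the max of the list lengths (20 -> 5 here).
runs = [
    [col for col in group if col is not None]
    for group in zip_longest(*quadrant_columns.values())
]
print(len(runs))  # 5
```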
<omnitechnomancer>
yea vendor PNR is quite slow
<pepijndevos_>
In my case I constrain virtually everything though... so I mean, in theory it just needs to convert my constraints to a bitstream, but... obviously not
<omnitechnomancer>
Well I mean the Anlogic tools are not the fastest even feeding them post pnr input and asking for a bitstream