tpb has quit [Remote host closed the connection]
tpb has joined #symbiflow
Degi_ has joined #symbiflow
borisnotes has quit [Ping timeout: 258 seconds]
Degi has quit [Ping timeout: 250 seconds]
Degi_ is now known as Degi
gsmecher has quit [Ping timeout: 258 seconds]
futarisIRCcloud has quit [Quit: Connection closed for inactivity]
Degi has quit [Ping timeout: 240 seconds]
Degi has joined #symbiflow
az0re has quit [Remote host closed the connection]
citypw has joined #symbiflow
zkms has quit [Quit: zkms]
zkms has joined #symbiflow
Bertl is now known as Bertl_zZ
FFY00 has quit [Remote host closed the connection]
FFY00 has joined #symbiflow
FFY00 has quit [Remote host closed the connection]
FFY00 has joined #symbiflow
tcal has quit [Ping timeout: 265 seconds]
tcal has joined #symbiflow
futarisIRCcloud has joined #symbiflow
_whitelogger has joined #symbiflow
az0re has joined #symbiflow
PiaMuehlen has joined #symbiflow
PiaMuehlen has quit [Client Quit]
robert2 has joined #symbiflow
robert2 has left #symbiflow [#symbiflow]
rw1nkler has joined #symbiflow
<_whitenotifier-9> [symbiflow-arch-defs] rw1nkler opened issue #1442: OpenTitan support - https://git.io/JfLtA
wavedrom has quit [Ping timeout: 240 seconds]
<_whitenotifier-9> [edalize] acomodi opened issue #37: Rebase symbiflow branch on current upstream master - https://git.io/JfLmK
OmniMancer has joined #symbiflow
borisnotes has joined #symbiflow
epony has quit [Read error: Connection reset by peer]
epony has joined #symbiflow
Bertl_zZ is now known as Bertl
<_whitenotifier-9> [symbiflow-arch-defs] kkumar23 opened issue #1443: Branch : Quicklogic : Install at custom location - https://git.io/JfLCK
<mithro> acomodi: When you get a moment, I would like to discuss stats and FPGA Tool Perf
<sf-slack> <acomodi> @mithro: Sure
<sf-slack> <acomodi> mithro: now is good for me
<ZirconiumX> And I'm around if it matters any, mithro
<tpb> Title: SymbiFlow FPGA Tool Performance (Xilinx Performance) - Google Docs (at docs.google.com)
<mithro> ZirconiumX: You do seem better at stats than myself, so would appreciate feedback too
<ZirconiumX> It's less about that, more about applying the little bit of existing knowledge I have on these things :P
<sf-slack> <acomodi> mithro: as far as I understand, we would need to add tests in tool-perf to iteratively check an algorithm (in this case the PnR tool) to verify the assumptions
<sf-slack> <acomodi> for assumption 1: run a design X times and verify that the X results are the same (or at least fall within an acceptable interval)
<sf-slack> <acomodi> for assumption 2: run a design X times and for each run change the seed and verify that the output changes as well
<sf-slack> <acomodi> I assume that the output in this case would be CPD, correct?
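A minimal sketch of how these two checks might look as harness code, assuming a hypothetical run_pnr(design, seed) helper that runs the flow and returns the reported critical path delay (CPD); a real harness in fpga-tool-perf would parse the tool's timing report instead:

    import statistics

    def check_determinism(run_pnr, design, runs=10, rel_tol=1e-9):
        # Assumption 1: identical input (same design, same seed) should give
        # identical, or near-identical, CPD across repeated runs.
        cpds = [run_pnr(design, seed=1) for _ in range(runs)]
        spread = max(cpds) - min(cpds)
        return spread <= rel_tol * statistics.mean(cpds), cpds

    def check_seed_sensitivity(run_pnr, design, seeds=range(1, 11)):
        # Assumption 2: changing the seed should change the result.
        cpds = [run_pnr(design, seed=s) for s in seeds]
        return len(set(cpds)) > 1, cpds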
<ZirconiumX> mithro: it's "MARLANN" as in "multiply-accumulate and rectified-linear accelerator for neural networks"
borisnotes has quit [Ping timeout: 260 seconds]
gsmecher has joined #symbiflow
<mithro> acomodi: So for assumption 1 -- we need to run the tool X times with identical input and check we get identical output
<mithro> acomodi: For assumption 2 -- I think we need to run a whole bunch of runs until we have a high level of confidence
<mithro> Test 3 is the big one
<ZirconiumX> mithro: Can I suggest an alternative alternative test 3?
<mithro> ZirconiumX: Sure
<mithro> ZirconiumX: the discussion section is trying to explain my high level thinking around why test 3 is a reasonable thing to think about
<ZirconiumX> Assumption 3: Place-and-route results have approximately normally distributed maximum frequencies.
<ZirconiumX> (this holds true in my experience)
<ZirconiumX> Alt. Alt. Test 3: Determine the mean of the normal distribution of maximum frequencies. This will tell you how likely it is that a tool will produce a design that satisfies the timing constraints.
<ZirconiumX> My assumption here is that the seed is effectively a random input, and related seeds do not produce related results.
<ZirconiumX> Which seems a much safer assumption than the thinking you outline
<ZirconiumX> mithro: ^
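If the normality assumption holds (it is questioned further down), the "how likely is a timing-clean result" question reduces to estimating the distribution's parameters from a batch of seeded runs. A small sketch using only the standard library, with illustrative sample numbers:

    from statistics import NormalDist, mean, stdev

    def p_meets_timing(fmax_samples_mhz, target_mhz):
        # Fit Normal(mu, sigma) to the observed Fmax values and return the
        # probability that a fresh run meets the target frequency.
        dist = NormalDist(mean(fmax_samples_mhz), stdev(fmax_samples_mhz))
        return 1.0 - dist.cdf(target_mhz)

    # Example with made-up numbers:
    # p_meets_timing([98.2, 101.5, 99.7, 103.1, 100.4], 100.0)  # ~0.62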
rw1nkler has quit [Quit: WeeChat 1.9.1]
wavedrom has joined #symbiflow
borisnotes has joined #symbiflow
<mithro> ZirconiumX: I believe you that there is a distribution of maximum frequencies - I'm not sure that I believe it is normally distributed
<mithro> I wouldn't be surprised to see binomial type distributions
<ZirconiumX> I think Fmax should be treated as continuous
<ZirconiumX> Because there's no inherent yes/no here
<ZirconiumX> So binomial is definitely the wrong one here
citypw has quit [Ping timeout: 240 seconds]
<mithro> ZirconiumX: I've definitely seen designs where the optimizer gets stuck with fmax distributed around two separate fmax values
<ZirconiumX> That would be bimodal, not binomial :P
<tpb> Title: ABC Fmax distributions - Google Sheets (at docs.google.com)
<ZirconiumX> Here's some data I have to hand
<ZirconiumX> You can definitely see normals in there
<ZirconiumX> Perhaps it's more like a mixture model
<mithro> ZirconiumX: yes, I have definitely seen normal distributions around an fmax
<litghost> None of those look like Gaussian distributions? I'd expect a normality test to return poor results on all of them?
tcal_ has joined #symbiflow
<ZirconiumX> I'm not entirely sure how to do a normality test here
<ZirconiumX> And sadly I can't overlay plots
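For what it's worth, one common way to do this is the Shapiro-Wilk test. A minimal sketch, assuming the per-seed Fmax values are available as a plain list; a low p-value would support litghost's point and would also be consistent with a bimodal/mixture shape rather than a single normal:

    from scipy import stats

    def looks_normal(fmax_samples, alpha=0.05):
        # Shapiro-Wilk: the null hypothesis is that the data are normally
        # distributed; a small p-value rejects normality.
        stat, p = stats.shapiro(fmax_samples)
        return p > alpha, p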
tcal_ has quit [Remote host closed the connection]
tcal_ has joined #symbiflow
burgerchamp has joined #symbiflow
burgerchamp has quit [Remote host closed the connection]
<hackerfoo> ZirconiumX: Colab (Jupyter + Python) is much better at generating complex plots: https://colab.research.google.com/drive/1fZ2j40GhPJHU35KogyPTcomyff2Fxmnl
<ZirconiumX> Ah, I see
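As an example of the kind of overlay that is awkward in Sheets but straightforward in Colab, a short matplotlib sketch, assuming each tool/configuration's Fmax samples are collected in a dict (names here are illustrative, not from the linked notebook):

    import matplotlib.pyplot as plt

    def overlay_fmax_histograms(runs):
        # runs: {"label": [fmax values in MHz], ...}
        for label, fmax in runs.items():
            plt.hist(fmax, bins=20, alpha=0.5, label=label)
        plt.xlabel("Fmax (MHz)")
        plt.ylabel("Run count")
        plt.legend()
        plt.show()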
<hackerfoo> Unrelated: I'm seeing about 800GB memory usage for 80 runs of VPR for the Titan tests, so plan on 10G per run for tests, CI, etc.
<hackerfoo> I had to switch to m1-megamem-96 to make effective use of the cores.
<litghost> hackerfoo: Most of that memory is just the gaussian blur test
<litghost> hackerfoo: Are you running with titan nightly or weekly?
<hackerfoo> It's not; e.g. LU230 is at 17GB right now.
<hackerfoo> I'm running nightly, but with various flag settings.
<hackerfoo> There isn't just one offender, there are many designs that are well over 10GB.
<hackerfoo> The ones that use the most memory also tend to run longer.
tcal_ has quit [Ping timeout: 265 seconds]
<hackerfoo> I think using megamem (there's only one) is a good idea, because ultramem is too much, and with highmem you can only safely use about half the cores.
<hackerfoo> I'm still only running 80 threads per machine, but I'm going to do 95 (leaving one for other tasks) next time.
<hackerfoo> For 950 threads total.
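A back-of-the-envelope check of why highmem ends up memory-bound while megamem does not, assuming ~10 GB peak per VPR run and the GCP memory figures for these machine types at the time (the sizes below are assumptions worth re-checking against current specs):

    # Hypothetical capacity check; machine memory sizes are assumed values.
    MEM_GB = {"n1-highmem-96": 624, "m1-megamem-96": 1433.6}
    PER_RUN_GB = 10
    VCPUS = 96

    for machine, mem in MEM_GB.items():
        runs_by_mem = int(mem // PER_RUN_GB)
        usable = min(runs_by_mem, VCPUS)
        print(f"{machine}: memory fits ~{runs_by_mem} runs, "
              f"so ~{usable}/{VCPUS} vCPUs are usable")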
syed has joined #symbiflow
syed has quit [Quit: Leaving...]
OmniMancer has quit [Quit: Leaving.]
adjtm_ has joined #symbiflow
borisnotes has quit [Read error: Connection reset by peer]
borisnotes has joined #symbiflow
QDX45 has quit [Remote host closed the connection]
QDX45 has joined #symbiflow
adjtm has quit [Ping timeout: 256 seconds]
LisaMarie has joined #symbiflow
LisaMarie has quit [Client Quit]
adjtm_ has quit [Remote host closed the connection]
adjtm has joined #symbiflow
tcal_ has joined #symbiflow
tcal_ has quit [Remote host closed the connection]
tcal_ has joined #symbiflow
tcal_ has quit [Ping timeout: 240 seconds]
tcal_ has joined #symbiflow
tcal_ has quit [Ping timeout: 272 seconds]
AkiraMay has joined #symbiflow
AkiraMay has quit [Client Quit]
tcal_ has joined #symbiflow
* hackerfoo uploaded an image: Screen Shot 2020-04-24 at 2.56.03 PM.png (83KB) < http://sandbox.hackerfoo.com:8008/_matrix/media/v1/download/sandbox.hackerfoo.com/RAlketALySXsyDObkjGmFWFE >
<hackerfoo> Nice level load at ~80% (that's ten machines). The dip is from switching machine types.
tcal_ has quit [Remote host closed the connection]
boris_ has joined #symbiflow
borisnotes has quit [Ping timeout: 250 seconds]
boris_ has quit [Ping timeout: 240 seconds]