<pie_>
yet another reason functional programming is great \o/ :p
<pie_>
(cant really memoize if you dont have pure functions because output will depend on not only the function parameters yeah?)
* pie_
thinks he noobed this before in python ^
<jn__>
rust kind of has a sane version of assignment-inside-if() with if let
<rqou>
I've done `if ((ret = foobar()))` many times
<rqou>
am i evil?
<qu1j0t3>
pie_: I'm pretty convinced parsimony is a helpful engineering principle yeah
<rqou>
also, azonenberg: I've found a huge quality problem with the xc2par flow for now
<rqou>
we're still not aggressive enough at inferring TFFs
<rqou>
apparently that ends up saving a ton of space
<lain>
rqou: that's about the only thing I consider relatively sane, re: assignment in if()
<lain>
anything more complex than assigning and checking a retval is a recipe for disaster imo
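The assign-and-test idiom being discussed (C's `if ((ret = foobar()))`, Rust's `if let`) has a direct analogue in Python 3.8+'s walrus operator; a minimal sketch (the function and names here are mine, for illustration):

```python
# Python 3.8+ "walrus" operator: bind and test in one expression,
# analogous to C's `if ((ret = foobar()))` or Rust's `if let`.
import re

def first_number(text):
    # `m` is bound and checked in the same expression.
    if (m := re.search(r"\d+", text)) is not None:
        return int(m.group())
    return None

print(first_number("queue has 42 frames"))  # -> 42
print(first_number("no digits here"))       # -> None
```

As with the C idiom, keeping it to a single assign-and-check is the sane subset; nesting walrus expressions invites the same disasters.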
<whitequark>
pie_: you can totally memoize pure functions in python
<whitequark>
you can't do it *automatically* but memoization is applicable everywhere. c++ uses it a lot
<rqou>
yeah, I've done it many times with a magic decorator
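The point above — memoization works fine in Python even without automatic support — can be sketched two ways: the stdlib decorator, and a hand-rolled "magic decorator" of the kind mentioned (the code here is my illustration, not anyone's actual decorator):

```python
# Two ways to memoize a pure function in Python.
from functools import lru_cache, wraps

@lru_cache(maxsize=None)        # stdlib memoization
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def memoize(fn):                # hand-rolled "magic decorator"
    cache = {}
    @wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

@memoize
def fib2(n):
    return n if n < 2 else fib2(n - 1) + fib2(n - 2)

# Both run in linear time thanks to the cache; naive recursion would be
# exponential and fib(90) would never finish.
print(fib(90), fib2(90))
```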
<whitequark>
also even haskell has unsafePerformIO :p
<rqou>
(Berkeley really likes this trick)
<pie_>
whitequark, i mean i did it wrong
<Bike>
i remember seeing a cool memoization thing for fibonacci, and then they dropped it and did linear algebra instead and it was faster and took no memory
<rqou>
heh I've seen that trick
<qu1j0t3>
it's in SICP
<qu1j0t3>
Bike: non-naive Fib is O(1) space anyway. the algebra trick just takes time complexity to O(log n) instead of O(n). iirc
<rqou>
ok, maybe that's why I've seen it
<Bike>
sounds right
<rqou>
Berkeley's curriculum is SICP-derived
<Bike>
since you do the binary method on the exponentiation of the matrix or something
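The trick being described — square-and-multiply on the matrix `[[1,1],[1,0]]`, whose n-th power holds `F(n)` — fits in a few lines of Python; this is a sketch of the standard O(log n) method, not anyone's specific code:

```python
# O(log n) Fibonacci via binary exponentiation of [[1, 1], [1, 0]]:
# [[1,1],[1,0]]**n == [[F(n+1), F(n)], [F(n), F(n-1)]].

def mat_mul(a, b):
    return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
            (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))

def fib_mat(n):
    result = ((1, 0), (0, 1))   # identity matrix
    base = ((1, 1), (1, 0))
    while n:                    # square-and-multiply over the bits of n
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]         # F(n)

print([fib_mat(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

O(1) space, O(log n) matrix multiplications — matching qu1j0t3's complexity claim, as long as you count bignum multiplications as constant-time.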
<cr1901_modern>
qu1j0t3: non-naive Fib is O(1) time too
<qu1j0t3>
yes
azonenberg_work has joined ##openfpga
<cr1901_modern>
there's a closed-form solution involving phi
<qu1j0t3>
heh
<qu1j0t3>
yes, but i mean specifically the recurrence impl.
<Bike>
it's closed form but you still have to exponentiate and now there's floats
<Bike>
i forget the O for addition chain exponentiation. is there one even
<qu1j0t3>
i refer to that as O(n)
<Bike>
maybe just log with a smaller constant
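The closed form involving phi mentioned above is Binet's formula, and the float problem is real: in double precision the rounded result is only exact for small n. A sketch:

```python
# Binet's formula: F(n) = (phi**n - psi**n) / sqrt(5), phi = (1+sqrt(5))/2.
# The psi**n term shrinks toward zero, so rounding absorbs it -- but
# double-precision error in phi**n eventually flips the answer for large n.
import math

def fib_binet(n):
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    return round(phi ** n / sqrt5)

print([fib_binet(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```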
<rqou>
oh hey awygle, did you take the "old-school bh version" of 61A or the sneklang version?
* oeuf
wakes up
<oeuf>
floats!
oeuf is now known as egg|z|egg
<rqou>
lol
* rqou
pets egg|z|egg
<qu1j0t3>
if an egg|z|egg floats, it's, ... well, it's not good.
<Bike>
i know finding an optimal addition chain is np complete or at least a generalized version is np complete so Who Knows, but fuck that part
<Bike>
mhm looked up a paper and it goes right into genetic algorithms
<qu1j0t3>
lol
<rqou>
<egg|z|egg> is a float of 0.9999999998 due to rounding bugs :P
<Bike>
oh i just realized that's a bra ket joke
<egg|z|egg>
nah it's just that there is a smol probability that I'm not actually asleep
<egg|z|egg>
(don't look at the fact that I'm typing you'll collapse me)
<Bike>
i wonder if tetration is as bullshit to optimize
<Bike>
i guess it is since in general you get ackermann which is bullshit
<azonenberg_work>
awygle: sooo it looks like doing this switch with an xc7k70t would be really tight
<azonenberg_work>
you'd need super tiny queues
<azonenberg_work>
Here's some tentative architectural numbers
<azonenberg_work>
1G ports: small input queue (4x small BRAM / 8KB) is 5.4 MTU-sized frames
<azonenberg_work>
that should be plenty since they get emptied fast
<azonenberg_work>
10G ports: large input queue (16x small BRAM / 32 KB) is 21.8 MTU-sized frames
<azonenberg_work>
on the output side, 10G ports get 2 BRAMs (basically enough to buffer a frame or two and maybe do really basic qos, but they empty at line rate so should never fill up)
<azonenberg_work>
1G exit queues are larger, 8 BRAMs (10.9 frames) since we can push data to them at 10 Gbps
<azonenberg_work>
and we dont want them to overflow too easily
<azonenberg_work>
I'm gonna do some simulations later on to adjust those queue numbers
<azonenberg_work>
But this comes out to 160 BRAMs for input queues and 200 for exit
<azonenberg_work>
The xc7k70t only has 270 blocks and that's before adding in the 64 BRAMs for the mac table
<azonenberg_work>
the xc7k160t, with 650 blocks, would still be 65% full
<azonenberg_work>
I'll try and see if i can shrink the queues a bit once i've finished figuring out how to handle broad/multicasts
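The queue-sizing numbers above check out arithmetically, assuming 1500-byte MTU frames and 2 KB of usable data per "small BRAM" (both figures are my assumptions, inferred from the 4-BRAM/8-KB line):

```python
# Sanity-check the tentative queue numbers: frames per queue at 1500 B/frame,
# assuming a "small BRAM" holds 2 KB of data (assumed, not from a datasheet).
MTU = 1500
SMALL_BRAM_BYTES = 2048

def frames(n_brams):
    return n_brams * SMALL_BRAM_BYTES / MTU

print(round(frames(4), 1))    # 1G input queue,  4 BRAMs /  8 KB -> ~5.5 frames
print(round(frames(16), 1))   # 10G input queue, 16 BRAMs / 32 KB -> ~21.8 frames
print(round(frames(8), 1))    # 1G exit queue,   8 BRAMs / 16 KB -> ~10.9 frames

total = 160 + 200 + 64        # input + exit queues + MAC table
print(total)                  # 424 BRAMs, vs 270 on the xc7k70t
print(round(total / 650 * 100))  # -> 65(%) of the xc7k160t's 650 blocks
```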
azonenberg_work has quit [Ping timeout: 256 seconds]
<azonenberg>
also, if microsoft buys them i'll probably move back to self-hosting :p
<daveshah>
Always have kept a gitlab server going partly for that reason
<azonenberg>
daveshah: i moved to self hosting after google code kicked the bucket
<azonenberg>
vowed to never use a third party host again
<azonenberg>
then kinda got pushed into using github by external forces
<azonenberg>
(also i had been using redmine internally which was a pain to maintain)
<daveshah>
I started using GitLab myself because a VPS was cheaper than a subscription to any service (considering other stuff like web hosting and email too) and that actually mattered when I was younger
<daveshah>
It worked very well for a few group projects
<azonenberg>
my nonpublic stuff is mostly hosted internally
<azonenberg>
with no web presence whatsoever
<azonenberg>
just git clone file:///nfs4/share/repos/project/
<daveshah>
Yeah, I've never worked on anything private enough to worry about that. Either Github/GitLab private project or the aforementioned server
<daveshah>
Which is now actually a Hetzner dedi
<azonenberg>
i also have a bunch of vps's i plan to shutter once i'm settled at the new place
<azonenberg>
i'm hosting in house on a bare metal system
<azonenberg>
email needs to migrate off godaddy, i havent figured out a plan for that
<azonenberg>
i've been using them way too long and just havent had the time to sit down and do something about it
<azonenberg>
The only infrastructure i plan to keep offsite is a dedicated system somewhere for offsite backup
<azonenberg>
although i may change hosts for that
<rqou>
azonenberg: so, every lut input has _18_ choices
<rqou>
any guesses as to how the bits are encoded?
<rqou>
(no, i haven't started this part of the fuzzing yet)
<azonenberg>
Sixteen routing channels plus constant 0/1
<rqou>
what
<rqou>
no
<azonenberg>
18 routing channels?
<rqou>
yeah
<azonenberg>
Plus 0/1? or are those not available in fabric
<rqou>
not available
<rqou>
why do you need that? it's a lut, silly
<rqou>
36 local tracks (26 inputs + 10 feedback) where each input can get 18
<rqou>
*each lut input
<rqou>
but inputs a/b/c/d have a different selection of 18
<azonenberg>
one hot makes the most sense
<azonenberg>
but i guess we'll find out soon
<rqou>
that'd be huge
<daveshah>
I think some kind of dual one-hot is quite common
<daveshah>
That's what the ECP5 looks like
<rqou>
18*4*40=2880 out of 7168 bits per column
<azonenberg>
I could be wrong
<rqou>
dual one-hot?
<azonenberg>
but i thought that xilinx had one bit per pip
<azonenberg>
and it was just a question of which bit = which pip
<daveshah>
rqou: two one hot muxes cascaded
<azonenberg>
textbook one hot
<azonenberg>
oh, that would make sense
<rqou>
ah ok
<azonenberg>
i think coolrunner might do that in the larger parts
<azonenberg>
possibly even three
<rqou>
i've never heard that called "dual one-hot"
<rqou>
i would have expected that to mean one-cold
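The "dual one-hot" scheme described above — two cascaded one-hot muxes rather than one wide one-hot word — can be sketched like this. The 3x6 split of the 18 choices is purely illustrative (not a claim about ECP5 or any real part), and the decode functions are hypothetical:

```python
# Dual one-hot mux selection: a one-hot first stage picks a group, a
# one-hot second stage picks within the group. For 18 choices, a
# hypothetical 3x6 split needs 3 + 6 = 9 config bits instead of 18.

def decode_one_hot(bits):
    # A valid one-hot word has exactly one bit set; its index is the selection.
    assert bits.count(1) == 1, "not one-hot"
    return bits.index(1)

def decode_dual_one_hot(stage1, stage2, group_size=6):
    # Selected track = group * group_size + pick-within-group.
    return decode_one_hot(stage1) * group_size + decode_one_hot(stage2)

# Select track 13 of 18: group 2 (of 3), element 1 (of 6) -> 2*6 + 1 = 13
print(decode_dual_one_hot([0, 0, 1], [0, 1, 0, 0, 0, 0]))  # -> 13
```

One-cold would be the same structure with the bit sense inverted — all ones except the selected position.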