<gruetzkopf>
also not a good combo: strapping pins and auto-dir level shifters
noobineer has joined ##openfpga
<pie_>
i think auto level shifters got dissed here a couple days ago
<pie_>
awygle, wow you have such handsome cats
<pie_>
also, how could you not make a Kicat pun
<Zorix>
computer/technical people do love their cats
noobineer has quit [Ping timeout: 240 seconds]
GenTooMan has quit [Quit: Leaving]
<awygle>
pie_: they're good boys <3
<rqou>
huh, i just noticed that rust doesn't support messing with the floating point environment
<rqou>
(i don't need this for my use case, but still)
<rqou>
this makes rust pretty unusable for "scientific" stuff where this matters
<whitequark>
lol
<sorear>
*llvm* still doesn't support fpenv
<whitequark>
neither does clang
<whitequark>
i mean
<whitequark>
you can do it but the optimizer doesn't know about it
<sorear>
#pragma STDC FENV_ACCESS on is still broken in clang after N years
<rqou>
see, another point in favor of gcc :P
<sorear>
messing with fenv.h outside of a pragma block is explicitly UB
<sorear>
since clang aborts on the pragma, it conforms :p
<whitequark>
oh
<whitequark>
lol
<rqou>
lolol
<rqou>
<troll>use g77</troll>
<rqou>
but actually though
<whitequark>
the problem is that LLVM IR doesn't model fenv
<whitequark>
there's been some discussion on llvm-dev about implementing it but it's a very invasive change
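For reference, a minimal C sketch of the fenv.h access being discussed: change the rounding mode and test exception flags under #pragma STDC FENV_ACCESS ON. The values printed are purely illustrative; the complaint above is that clang/LLVM's optimizer does not model this state, so e.g. constant folding can ignore the mode change.

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON   /* the pragma clang reportedly rejects */

    int main(void)
    {
        fesetround(FE_UPWARD);             /* round toward +infinity */
        printf("%.20f\n", 1.0 / 3.0);      /* may be constant-folded with the default mode anyway */

        feclearexcept(FE_ALL_EXCEPT);
        volatile double big = 1e308;
        double y = big * 10.0;             /* overflow at run time */
        (void)y;
        printf("overflow raised: %d\n", fetestexcept(FE_OVERFLOW) != 0);

        fesetround(FE_TONEAREST);          /* restore the default rounding mode */
        return 0;
    }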
pie__ has joined ##openfpga
<whitequark>
i agree that it's a significant issue
<rqou>
oh hm it's just called f77
pie_ has quit [Read error: Connection reset by peer]
<rqou>
they didn't add a frivolous 'g' to this one
<sorear>
meanwhile Java and JavaScript don't support fenv.h at all and Kahan is *mad*
<rqou>
so serious question, what do "scientific" people do? use gcc/f77?
<rqou>
ping uovo
<whitequark>
numba lol
<rqou>
"NumPy-aware optimizing compiler"
<sorear>
it's kind of surprising just how much of the linux ecosystem indirectly depends on LAPACK
<rqou>
does python have fenv?
<rqou>
sorear: example?
<whitequark>
python doesn't
<whitequark>
but numpy uses it anyway
<rqou>
wat
<rqou>
so it can subtly break things?
<whitequark>
I'm not actually sure
<whitequark>
yyyyyep
<rqou>
meh, rust should just state that they always enable "fun safe math optimizations" and get away with anything lol :P :P
<whitequark>
oh nvm
<whitequark>
it only checks for exception flags
<whitequark>
so no, it doesn't let you mess with fenv
<whitequark>
guess it isn't so essential to scientific computing after all
<whitequark>
people on stackoverflow suggest using ctypes to directly poke fenv
<rqou>
lool
<whitequark>
alternatively
<whitequark>
they suggest using gmp
<rqou>
eww
<whitequark>
which... is *a* solution.
<whitequark>
technically speaking.
<rqou>
so whitequark, what if i purposely want to cause a certain russian-made windows debugger that is popular in the "warez/crackz scene" to crash hard? :P
<rqou>
(which is actually hilariously enough a borland delphi/c++ runtime bug)
<whitequark>
... which debugger
<whitequark>
i have no idea
<rqou>
ollydbg
<whitequark>
oh, it's written in delphi? lol
<whitequark>
wait
<whitequark>
it's russian?
<rqou>
is it?
<whitequark>
i have no idea
<rqou>
the author's name is Oleh Yuschuk
<awygle>
wtf is a "floating point environment"
<awygle>
I'm scared
<rqou>
anyways, the bug was that if you tried to convert an x87 float80 to a string but the mantissa was all ones
<rqou>
and the exponent was in a certain range
<rqou>
the runtime assumed that this would not overflow, but it actually does
<rqou>
and exceptions are enabled and not handled, so the debugger crashes
<rqou>
themida used to use this back in the day
<whitequark>
"Oleh"?
<rqou>
as anti-debug
<whitequark>
that's not really a russian name
<whitequark>
slavic yes
<rqou>
hmm, maybe i'm wrong
<rqou>
apparently he also lives in germany
<whitequark>
oh
<rqou>
for maximum bonus points though, this ollydbg crash can actually crash an entire stack of debuggers
<whitequark>
that's just a weird romanization
<rqou>
if you e.g. load ollydbg in ollydbg to try and debug why it crashed
<whitequark>
lol
<rqou>
the outer ollydbg will try to print the x87 registers and itself crash
<rqou>
whitequark: ok, i just checked and supposedly it's a bug in the borland c++ runtime, not delphi
<rqou>
you can test your own code with the value 0x403D FFFFFFFF FFFFFFFF
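For anyone who wants to reproduce that, here is a hypothetical C test harness for the value quoted above. It assumes an x86 target where long double is the 80-bit x87 format stored little-endian (8 significand bytes followed by the 16-bit sign/exponent word); on other targets the layout differs and this sketch does not apply.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        union {
            long double ld;
            unsigned char bytes[sizeof(long double)];
        } u;
        memset(&u, 0, sizeof u);

        uint64_t significand = 0xFFFFFFFFFFFFFFFFULL; /* mantissa all ones */
        uint16_t sign_exp    = 0x403D;                /* exponent in the problematic range */
        memcpy(u.bytes, &significand, sizeof significand);
        memcpy(u.bytes + 8, &sign_exp, sizeof sign_exp);

        /* A correct float-to-string conversion just prints a value around 9.2e18;
         * per the discussion above, the Borland runtime assumes the conversion
         * cannot overflow and dies on the resulting unhandled FP exception. */
        printf("%.20Lg\n", u.ld);
        return 0;
    }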
<rqou>
also apparently spotify uses (or used to use) this as anti-debug as well
<rqou>
also a huge wtf: a lot of the old websites i used to reference back in my previous life as a "1337 h4x0r" have since disappeared
<rqou>
no wonder i see people complaining about how short internet memory timespans are
balrog has quit [Ping timeout: 264 seconds]
diadatp has joined ##openfpga
balrog has joined ##openfpga
pie__ has quit [Read error: Connection reset by peer]
<rqou>
do you want me to run your proposed designs past my father (who has shipped quite a few different telco products)?
<awygle>
first thought: "oh that's much better" second thought: "he just hid all the rats"
<azonenberg>
rqou: this line card is pretty generic, it's just 8x DP83867 + magnetics + 2x4 RJ45
<azonenberg>
going out to a samtec q-strip with 8x SGMII plus MDIO, I2C management, and power
<azonenberg>
awygle: lol of course i hid the rats, if i turned them on you wouldnt even see the components
<rqou>
eh, my father has been burned by a design like that before :P
<azonenberg>
rqou: oh?
<azonenberg>
what gotchas am i missing? i crunched numbers for the power consumption already
<azonenberg>
the whole switch fabric will all be on the fpga
<rqou>
in his case it was "we want to add XYZ queueing/management/etc. near the line card, but the line card is dumb with no FPGA on it"
<awygle>
azonenberg: have you done the mechanical design to come up with that form factor?
<azonenberg>
So, in my case the line card doesnt have any switching whatsoever
<awygle>
or just picked something that looked good enough lol
<rqou>
in his case the line card doesn't do switching either
<azonenberg>
awygle: Width is defined by the size of 2x4 RJ45 + mounting holes
<rqou>
but a bunch of random glue happened on the line card
<azonenberg>
Depth is flexible and one of the reasons i havent put on the backplane connection yet
<awygle>
ah
<azonenberg>
I was going to hold off on fabbing it, and not actually do the final tapeout until i had the brain done
<azonenberg>
So i could make them the same depth
<azonenberg>
Exact geometry of the backplane is tbd
<awygle>
man, those magnetics make it ugly
<rqou>
basically there was a "GigE subscriber" line card that had lots of stuff including an fpga on it, and a "GigE trunk" line card that was dumb and had only a phy
<azonenberg>
But by doing the layout now i can get a minimum depth
<rqou>
and then customers started requesting features that required glue on the "trunk" line cards
<azonenberg>
Yes, i dont see a real alternative though
<azonenberg>
rqou: i see
<awygle>
if i looked at that i would start looking for magjacks
<azonenberg>
awygle: i did
<awygle>
do they do 2x4 magjacks?
<rqou>
and then they had to make a "GigE subscriber/trunk" card that worked in both positions
<awygle>
or 1x4?
<azonenberg>
i looked into it
<rqou>
because the "GigE subscriber" card didn't work in a "trunk" slot because of stupid reasons
<azonenberg>
they do 2x4 magjacks but none with the separate center taps that are needed for the dp83867
<azonenberg>
they'd work fine with the ksz9031 but that's not sgmii
<awygle>
ah
<azonenberg>
the dp83867 has different common mode voltages on some pairs in 10/100 mode
<azonenberg>
So it'll fry if you use those jacks
<azonenberg>
I spent a while looking and was unable to find any compatible massively multiport magjacks, best i found was a 1x2
<awygle>
that makes sense, i was speccing ksz9031 when i was looking
<azonenberg>
yeah that was my original plan
<azonenberg>
but it would have ~doubled the trace count on the backplane and prevented me from using a fgg484 fpga
<azonenberg>
fgg676 cannot do 10g, so i'd have had to go to fbg676
<azonenberg>
And then i would have been looking at both a more expensive fpga and probably another pcb layer
<awygle>
yup
<awygle>
i had done ~no math at that point
<azonenberg>
oh and, magjack is significantly pricier than rj45 + discrete magnetics in multiport
<azonenberg>
yeah i spent a while on the engineering for this and the original design with the artix was done with tradeoffs that were valid at that time
<azonenberg>
i.e. before kintex was specced to do 10g in fbg484
<azonenberg>
That changed everything
<rqou>
azonenberg: is switching supposed to live on the backplane or is the backplane dumb?
<rqou>
also, how scalable is the backplane design supposed to be (in terms of bandwidth)
<azonenberg>
rqou: backplane is going to be either completely passive, dumb buffers, or repeaters with CDR
<azonenberg>
Depending on not-yet-performed SI analysis
<azonenberg>
No switching
<azonenberg>
All switching is on the line card
<azonenberg>
on the brain card*
<awygle>
heyy i performed one :p
<awygle>
you just don't believe it
<azonenberg>
awygle: you didnt do jitter
<awygle>
true
<azonenberg>
just amplitude
<rqou>
awygle: want to math "rgmii on ice40" for me?
<azonenberg>
rqou: anyway the brain card will have a single fpga, with the entire switch fabric living there
<awygle>
but i contend that as long as you don't add a bunch of unnecessary dispersion it'll be fine
<awygle>
rqou: what math do you need?
<azonenberg>
awygle: which is why i plan to do a passive backplane, but have power available should i need to add buffers in a respin
<azonenberg>
rqou: The backplane will just be wires between q-strips
<azonenberg>
unless i add buffers
<rqou>
wait, with multiple drops?
<azonenberg>
Then the line cards are just phys and magnetics plus PoL power supply
<azonenberg>
rqou: no, point to point
<awygle>
fat Q in, skinny Qs out
<azonenberg>
TBD connector on host to QTH-040-DP on each line card
<rqou>
ah ok
<azonenberg>
total of 24 SGMII TX pairs, 24 RX pairs, power, ground, one MDIO bus to each line card, one reset to each line card
<rqou>
i immediately see a problem if you want "telco-grade"
<azonenberg>
Clock generated on backplane, no global clock source
<azonenberg>
oh?
<rqou>
hot failover
<rqou>
of the master
<rqou>
since you only have one :P
<rqou>
but usually datacenters don't do it this way and rely much more on redundant paths or just outright redundant machines instead
<azonenberg>
Redundancy within one switch is not a design goal
<azonenberg>
If i needed it, i'd have redundant switch paths
<azonenberg>
awygle: I plan to do clock recovery via oversampling on the fpga
<azonenberg>
So just one diff pair each way a la 1000baseX
<awygle>
oh
<azonenberg>
The dp83867 does not have a tx clock input, it always does clock recovery
<awygle>
i misread total
<awygle>
i get it
<azonenberg>
it can supply a rx clock if you ask for one
<awygle>
i thought 24 lanes per line card
<awygle>
hence, three
<azonenberg>
I do not plan to exercise that option at this time
<azonenberg>
oh, no
<azonenberg>
8 lanes in and 8 out per line card
<awygle>
yup
<azonenberg>
I'm using qth-040-dp because it just *barely* wont fit in qth-020
<azonenberg>
it has 10 diff pairs on each side of the connector
<azonenberg>
Which is not enough for power, ground, mdio, and i2c plus 8 sgmii
<azonenberg>
So now i have 20 diffpairs and i'm using like 14 or something
<azonenberg>
(on each side)
<azonenberg>
The pairs of a qth-dp can be used as 50 ohm single ended signals afaik
<awygle>
yes
<azonenberg>
even if there's a bit of crosstalk, it's i2c and mdio
<azonenberg>
not exactly fast
<awygle>
(i have done this)
<azonenberg>
afaik a qth-dp is just a depopulated qth
<awygle>
yes
<azonenberg>
So it should have the same single ended impedance
<awygle>
which is why we always just used the qth and grounded a bunch of grounds :p
<azonenberg>
they just removed other signals to provide better coupling within pairs
<azonenberg>
Yeah i've done that too
<azonenberg>
i wanted to try the qth-dp option here
<awygle>
yeah s'fair
<azonenberg>
anyway I am not providing interrupt lines or gpios or any other phy signals
<azonenberg>
one mdio bus with phys addressed 0...7
<azonenberg>
one hard reset for all phys, if you want to reset any one you have to do it over mdio
<azonenberg>
one i2c bus for sensors
<azonenberg>
then 8 lanes of sgmii
<awygle>
how many power/ground conductors?
<azonenberg>
As of now my line card has five sensors... one INA226 on each of the 2.5, 1.8, 1.0V power rails, one PCB temperature sensor, and one el cheapo ADC to monitor the thermal diode on the LTC3374 smps
<awygle>
oh you're running an LTC per line card? not central power?
<azonenberg>
1.0V power on the backplane would have massive drop
<azonenberg>
I step the 12V system power down to 5 on the brain card
<awygle>
i guess staying stable over that much load swing might be hard
<azonenberg>
then 5V on the backplane and a LTC3374 on each card to generate the supply voltages
<azonenberg>
oh and then a 3.3V LDO for powering the sensors themselves, plus i think the indicator LEDs
<awygle>
merr rip linear
<awygle>
okay so how many 5V power pins / ground pins?
<azonenberg>
My calculations say each line card is going to be (worst case with everything maxed out) 0.9A @ 1V, 0.61A @ 1.8V, 0.72A @ 2.5V, 0.08A @ 3.3V
<azonenberg>
This comes out to about 4W per card
<azonenberg>
Or a tad under 1A @ 5V after conversion losses
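Quick arithmetic check on those figures (the ~85% converter efficiency is an assumed number, not from the discussion): 0.9·1.0 + 0.61·1.8 + 0.72·2.5 + 0.08·3.3 ≈ 4.06 W per card, and 4.06 W / 0.85 ≈ 4.8 W drawn from the 5 V rail, i.e. roughly 0.95 A.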
<awygle>
wait a sec qth-040 doesn't exist
<awygle>
030, 060, 090
<azonenberg>
So four pins seems like plenty
<azonenberg>
awygle: that's the normal, not the diffpair version
<azonenberg>
qth-040-01-l-d-dp-a
<awygle>
oh ok
<azonenberg>
it's a depopulated qth-060
<awygle>
i didn't realize it was a separate family
<azonenberg>
awygle: right now my pinout has 1-2-3-4 as 5V power, 5-6-7-8 as power return (in addition to the signal ground plane which provides plenty of grounding capacity)
<awygle>
gotta love power blades
<azonenberg>
then on the same half of the connector i have mdio, reset, i2c, power good out from the LTC, enables for all three rails (sequencing is done by the FPGA, the line card boots up in sleep)
<azonenberg>
plus i pinned out jtag on the PHYs just in case i wanted to use it for some kind of bist
<azonenberg>
Then i put all the high speed stuff on the other half of the connector
<azonenberg>
As far from the high current power bus as i could go
<azonenberg>
awygle: sound like a solid design? anything you'd change?
<azonenberg>
(as a reminder the choice of 8 ports per line card was to provide the standard complement of 24 ports with exactly one oshpark order)
<awygle>
sounds pretty good. i'd probably run 5V on the other blade and put the signals routed with the 5V reference plane on that half, if that makes sense
<azonenberg>
Hmm, i'd have to tweak my qth-dp footprint to have the blades be separate pin numbers
<azonenberg>
But i can do that
<awygle>
lemme double check the samtec SI notes and see if that's actually a good idea lol
<awygle>
i think it is though
<azonenberg>
it shouldnt change anything if the sgmii stuff is still ref'd to ground
<azonenberg>
the i2c/mdio is slow enough it shouldn't care what it's ref'd to
<azonenberg>
all i need to do is put some caps from 5V to 3.3V (the vccio rail for all the slow logic) and 1.8V (which is i think what the jtag runs at, if we use it) to provide a nice AC return path
<azonenberg>
for EMC reasons, it shouldnt matter for SI
<azonenberg>
I also have to figure out the grounding strategy for the rj45 shield
<azonenberg>
right now i have that flagged as a todo
<azonenberg>
awygle: also the 2 kV cap to ground on the media-side center tap
<awygle>
well yeah, definitely, but you can avoid the capacitor hop since half your signals are ref'd to 5V on the board by necessity
<azonenberg>
why?
<awygle>
unless you're doing double ground planes internally i guess
<awygle>
i was assuming SPGS
<azonenberg>
the 5V plane on the pcb should only exist right around the q-strip and psu area
<azonenberg>
the rest of the plane is going to be 1.8 (primary digital io rail) with islands of 2.5 and 1.0 for analog/core rails
<azonenberg>
So there is no good path from 5V to the other power rails w/o going through a cap from $rail to ground, then all the way over to the smps, then back up to 5V
<azonenberg>
Hence the caps
<awygle>
assuming L2 is ground, whatever's on L4 will ref to L3
<awygle>
primarily
<azonenberg>
yes but L3 will not be primarily 5V :)
<awygle>
that's fair i guess
<awygle>
still probably better to hop from 5V to 1.8V than from ground to 5V to 1.8V
<awygle>
or whatever
<awygle>
but it should be ~irrelevant
<azonenberg>
yes exactly
<azonenberg>
its just an emc tweak
<azonenberg>
for i2c/jtag/mdio bit rates the SI impact should be \epsilon
<awygle>
is sgmii ac or dc coupled?
<azonenberg>
AC, there's 0.1 uf caps on every line
<azonenberg>
i'm putting all of the caps on the module, for both directions
<awygle>
k
<azonenberg>
on the line card*
<azonenberg>
hmmmm
<azonenberg>
how horrible an idea is it for an MLCC to straddle a baseT diffpair for another interface? :p
<awygle>
jiiiiiiiiiiiiiii
* azonenberg
reroutes pair
<awygle>
:p
bitd has joined ##openfpga
<awygle>
it's actually probably not that bad, tbh
<awygle>
unless there's something nasty in the cap
<awygle>
like USB
<awygle>
(not how usb works)
<azonenberg>
This is the 2kV ESD (?) cap from the center tap resistors to ground on the TX side of this port
<azonenberg>
so in normal operation should see essentially zero voltage
<awygle>
hm. yeah actually that should be fine, you'll see a little bump in your impedance and that's all
<awygle>
still not Best Practices though :p
<azonenberg>
i already rerouted it
* awygle
idly muses about whether that'd show up on a TDR
<azonenberg>
depends on how good your tdr is?
<awygle>
and how good the rest of your SUT is
<awygle>
and whether you go to heroic lengths to compensate the impedance with ground plane cutouts (which you could, if you hated yourself)
<azonenberg>
lol
<azonenberg>
i'm half tempted to do that on this board because why not
<azonenberg>
under the sgmii coupling caps
<awygle>
oh yeah that's not unreasonable
<awygle>
but it's a different situation
<azonenberg>
it's also massive overkill for 1.25 Gbps
<awygle>
yeah but you don't like my SI :p
eduardo__ has joined ##openfpga
eduardo_ has quit [Ping timeout: 264 seconds]
<azonenberg>
awygle: i didnt say i didn't like it
<azonenberg>
i said i was unconvinced
<awygle>
i know, i'm just giving you shit
<awygle>
i look forward to running SGMII over three of your backplanes and having it work
<azonenberg>
Lol
<awygle>
just to be totally clear, i don't take your lack of convincement personally, although it is very amusing to me to pretend that i do
<azonenberg>
yeah, and i'm not saying it wont work
<azonenberg>
Just saying until i see jitter analysis i am not willing to bet the success of my project on it
<azonenberg>
And thus, while i am planning to deploy a passive backplane initially, i'm providing a fallback capability for active buffers if needed
<awygle>
btw where on the datasheet are you deriving the need for the center tap?
<azonenberg>
"Each center tap of the magnetics should be independently decoupled to ground"
<awygle>
ah thanks
<azonenberg>
So, not in the actual chip datasheet
<azonenberg>
but a good thing to read
diadatp has quit [Ping timeout: 256 seconds]
Bike has quit [Quit: Lost terminal]
pie_ has quit [Ping timeout: 240 seconds]
<awygle>
hm. does xilinx publish ibis (or ibis-ami) models for their transceivers?
<awygle>
answer - yes.
<awygle>
but they're in the "lounge" whatever that means
<awygle>
and it looks like they have a strict "no casuals" policy
<azonenberg>
awygle: interesting
<azonenberg>
i can probably get them, though probably couldn't share
<azonenberg>
having company names to throw at things sometimes helps
<awygle>
yeah i think there was a confidentiality clause in the license agreement i skimmed
<awygle>
i wonder how relevant it would be given that the CDR is in fabric(?)
<awygle>
i could run something with the TI files and "eh, it's a receiver" on the other end lol
<awygle>
but it's not really worth it, just a thought i had
<azonenberg>
yeah gimme a sec
diadatp has joined ##openfpga
<azonenberg>
So, it looks like the virtex6 ibis-ami models are public?
<azonenberg>
that's likely close-ish
<azonenberg>
also have you tried using ibiswriter?
<awygle>
mmmnope
<awygle>
oh you can generate your own ibis model based on your i/o configuration or something/
<awygle>
that's p. cool
<azonenberg>
Yeah
<azonenberg>
Not sure if it can do ibis-ami though
<awygle>
i wonder if pybert can take regular ibis
<awygle>
i think it can
<awygle>
hm, maybe not
<azonenberg>
virtex-6 ibis-ami can be downloaded with a click-through license
<azonenberg>
So if you have an ibis-ami simulator handy, i doubt the virtex-6 GTX has worse performance than the kintex-7
<awygle>
yeah but we're not using the gtx for sgmii
<awygle>
and even if they gave us ibis files for regular i/o, the cdr is in fabric so idk how useful it'd be
<awygle>
i've never tried to do something like that
<awygle>
i'll ask $BETTER_AT_THIS_THAN_ME at work on monday
<azonenberg>
well if it's regular io
<azonenberg>
you can use ibiswriter
<azonenberg>
to generate a normal ibis model
<azonenberg>
i thought you were interested in 10g for some reason
* awygle
increasingly wants to design a serdes or something else analog-y
<azonenberg>
So, cnlohr bitbanged 10baseT with an attiny
<azonenberg>
I bitbanged 100baseTX with an FPGA and a dozen resistors
<azonenberg>
the next trophy in line is obvious
<awygle>
haha
<awygle>
fair
<awygle>
i really meant on an ASIC though
<awygle>
but i'll think about it
<azonenberg>
I will gladly buy a case of $DRINK_OF_CHOICE for any hardware hacker 1337 enough to bitbang 1000baseT
Lord_Nightmare has quit [Excess Flood]
<azonenberg>
Related, i wonder if i could add an equalizer and/or adaptive thresholding to TRAGICLASER
<azonenberg>
And how many more pins i'd need to do it
Lord_Nightmare has joined ##openfpga
<azonenberg>
Adaptive thresholding is probably doable by having the vref voltage divider have several resistors, and drive them via GPIOs
<azonenberg>
so pin X -> r1 -> vref -> r2 -> ground
<azonenberg>
pin Y -> r1a -> vref -> r2 -> ground
<azonenberg>
etc
<azonenberg>
this would use more pins and more resistors, but not use more LVDS channels or increase capacitive loading on the signal
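A back-of-the-envelope model of that divider scheme, with made-up values (3.3 V I/O, every resistor 10k; none of these numbers come from the discussion), just to show that driving or floating each pin yields several distinct vref thresholds:

    #include <stdio.h>

    int main(void)
    {
        const double vio = 3.3, r1 = 10e3, r1a = 10e3, r2 = 10e3;  /* assumed values */
        const double r[2] = { r1, r1a };
        /* drive state per pin: 1 = driven high, 0 = driven low, -1 = left Hi-Z */
        for (int x = -1; x <= 1; x++) {
            for (int y = -1; y <= 1; y++) {
                int drive[2] = { x, y };
                double g = 1.0 / r2, i = 0.0;   /* r2 always ties vref to ground */
                for (int k = 0; k < 2; k++) {
                    if (drive[k] < 0) continue;          /* Hi-Z branch contributes nothing */
                    g += 1.0 / r[k];                     /* branch conductance into the vref node */
                    if (drive[k] == 1) i += vio / r[k];  /* current injected by a pin driven high */
                }
                printf("X=%2d Y=%2d -> vref = %.2f V\n", x, y, i / g);  /* nodal analysis: sum(I)/sum(G) */
            }
        }
        return 0;
    }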
<azonenberg>
Equalization in bitbang is HARD because, unless you allow external analog components, you need a several-bit ADC
<azonenberg>
And since you have to sample pretty fast you have to use a flash ADC since none of the other topologies are fast enough
<azonenberg>
Which means 2^n reference voltages and, more importantly, the capacitance of 2^n LVDS input buffers on your signal
<azonenberg>
I don't know if there's a way to do it with good performance if the ROE don't allow anything but an FPGA and passives
<awygle>
yeah
<azonenberg>
if you could have an external opamp or two, all bets are off - and i might try doing that for fun
<azonenberg>
But the whole point of tragiclaser was to undercut the cost of an asic phy
<awygle>
i feel like allowing a 2N3904 or two isn't unreasonable
<azonenberg>
(as well as to learn more)
<awygle>
maybe something slightly faster
<azonenberg>
can those do hundreds of MHz b/w?
<azonenberg>
exactly
<awygle>
ft is 270 MHz
<awygle>
at 100 MHz
<azonenberg>
thats fast enough for 100baseT, i think? it's 125 Mbd but the fundamental of the MLT-3 is only 31.25 MHz
<azonenberg>
Anyway while i would love to try my hand at making a more sophisticated serdes, maybe even something with DFE
<azonenberg>
it would have to be a separate project with expanded rules
<azonenberg>
that allow, as you mentioned, at least a few external BJTs or something
<azonenberg>
or even opamps/adcs/dacs
<awygle>
yeah you could probably swing 100baseT on a 3904
<awygle>
slightly iffy, you'd have to be slightly lever
<awygle>
*clever
<azonenberg>
actually
<azonenberg>
i think you could improvise a delta-sigma dac
<azonenberg>
for adaptive thresholding
<azonenberg>
with an FPGA GPIO, big capacitor, and lowpass filter
<azonenberg>
then just adjust the dac output to tweak the decision points in the Y axis
<azonenberg>
getting noise low enough to get good results might be a little tricky but you shouldnt have to change thresholds often, you might even just do it as part of a training sequence when the link comes up (i.e. adjusting for different lengths of cable)
<azonenberg>
So if you just LPF the heck out of the dac it could work
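A minimal C model of that accumulator-style first-order delta-sigma idea, just to show the duty-cycle math (the 16-bit width and the 25% code are arbitrary choices, not from the discussion); in the real thing the carry-out drives the GPIO and the external RC does the lowpass filtering:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t acc = 0;
        const uint16_t code = 0x4000;        /* target = 0x4000/0x10000 = 25% of VCCIO after the RC */
        unsigned ones = 0, n = 65536;
        for (unsigned i = 0; i < n; i++) {
            uint32_t sum = (uint32_t)acc + code;
            acc = (uint16_t)sum;             /* keep the low 16 bits */
            ones += sum >> 16;               /* carry-out is the bit driven onto the pin */
        }
        printf("duty cycle = %.4f\n", (double)ones / n);   /* prints 0.2500 */
        return 0;
    }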
<awygle>
did i link that thing in here?
<azonenberg>
Equalization would be trickier
<azonenberg>
what thing?
<azonenberg>
i know i've seen people do d-s with fpga ios for audio
<azonenberg>
But i dont know if you could get low enough noise to use it in a serdes
<azonenberg>
I think if you traded slew rate for noise, you could
<azonenberg>
Equalization would be trickier unless you had either a multibit adc or an external analog mixer and gain stage
<azonenberg>
Cool, but way too slow for our needs
<awygle>
yeah, clever though
<azonenberg>
Yeah
<azonenberg>
my adc was a 500 Msa/s 1.5 bit flash adc :p
<azonenberg>
veeeery different set of parameters
<awygle>
you can spend like, 30 cents per unit and get transistors that will handle gigabit
diadatp has quit [Ping timeout: 240 seconds]
<azonenberg>
hmm, that makes a discrete CTLE plausible then?
<awygle>
don't see why not
<awygle>
i haven't studied those in detail though
<azonenberg>
also, in general you're supposed to have voids in the plane layers under magnetics
<azonenberg>
i guess to prevent unwanted coupling
<awygle>
yup
<azonenberg>
Thing is, i have signals going under the magnetics :p
<awygle>
it will pull your impedance
<awygle>
probably
<azonenberg>
i'm gonna try and see if i can reroute them
<awygle>
the major problem with gigabit of course is the bidirectional signalling
<azonenberg>
yeah i think that makes it impossible w/o external analog mixing
<azonenberg>
to do echo cancellation
<awygle>
mhm
<awygle>
oh fuck it's after midnight. i need to sleep lol
rohitksingh has quit [Quit: Leaving.]
rohitksingh has joined ##openfpga
<azonenberg>
lol just noticed?
<azonenberg>
So i've managed to route all but one interface away from the magnetics
<azonenberg>
this one i cant figure out how to avoid
mumptai has joined ##openfpga
diadatp has joined ##openfpga
user10032 has joined ##openfpga
diadatp has quit [Ping timeout: 268 seconds]
diadatp has joined ##openfpga
diadatp has quit [Ping timeout: 256 seconds]
rohitksingh has quit [Quit: Leaving.]
<rqou>
azonenberg: ping?
<azonenberg>
ack
<rqou>
so i actually did go and ask my father what he thinks you should have on your line cards
<rqou>
*) reserve a 9th sgmii pair, not for going to a front panel port but just in case you ever need to add a lot of smarts to a line card
<rqou>
you can use this extra pair as a high-bandwidth interface to any extra fpgas/CPUs that may show up on a line card
<rqou>
*) a set of open-collector wires shared between all line cards
<rqou>
used just in case you ever need a global signal and for some reason you can't get it signaled over the existing i2c/mdio
<rqou>
and finally *) some way for line cards to know what physical slot they are plugged in to
<rqou>
how much of this do you agree with?
<azonenberg>
1) there are two extra diffpairs each way on the high-speed end of the connector
<azonenberg>
that are unallocated
<azonenberg>
100 ohm impedance, and i guess i can pin them out on the backplane
<azonenberg>
But i have no plans to use it for the time being
<azonenberg>
2) I can't see that being needed, i already have a global reset
<azonenberg>
and a bunch of point to point status
<azonenberg>
like PGOOD, power rail enables per line card, etc
<azonenberg>
3) what would the point of this be?
<azonenberg>
the host FPGA can always tell them over I2C at boot time
<rqou>
2) my father said "add it anyways, it's saved their designs multiple times"
<rqou>
3) that is good enough
<azonenberg>
Since the links are all point to point
<azonenberg>
and it knows which interface the message went out on
<rqou>
yeah, that's good enough
<azonenberg>
I guess i can add a few reserved signals to the pinout
<rqou>
basically make sure at least one allows line cards to signal each other
<rqou>
not just line cards to signal the master which then has to signal the other cards
<azonenberg>
I don't see that ever being used
<azonenberg>
Because my architecture has the line cards being dumb, with all intelligence being in the master
<azonenberg>
they're basically "PHY on a stick"
<rqou>
my father basically said that all of their designs that started out that way eventually ended up with not-dumb line cards :P
<azonenberg>
the only reason i even have line cards instead of a monolithic design is that i can't fit a 19" wide PCB in my toaster oven
<azonenberg>
This architecture lets me have individual boards be sanely sized
<azonenberg>
LATENTORANGE, LATENTYELLOW, etc will be higher end designs that may well have more complex architecture and smarter line cards
<azonenberg>
LATENTRED is purely a layer 2 edge switch
<rqou>
hmm, so it sounds like you don't actually intend to take advantage of the modularity very much
<azonenberg>
No, i don't
<rqou>
then maybe you don't have to do anything
<azonenberg>
That was never a goal
<rqou>
just consider these suggestions for the higher-end designs?
<azonenberg>
Tell your dad I'm trying to replace my cisco 2970G's with something that has 10G uplinks and is missing all of the fancy features they have that i don't use :p
<rqou>
lol ok
<rqou>
he's used to building stuff for telcos
<rqou>
which tend to have tons of random features
<azonenberg>
yeah this is not a core switch, it's an access layer
<azonenberg>
LATENTORANGE will be based on a large ultrascale/ultrascale+
<azonenberg>
and have several dozen 10G ports
<rqou>
telco "access layer" stuff still tends to have dumb features
<azonenberg>
That will probably be layer 3 capable and have a lot more features
<rqou>
how else can they e.g. throttle everybody to _just_ the amount they paid for? :P :P :P
<azonenberg>
lol
<azonenberg>
meanwhile i want an ISP that gives me an uncapped pipe and bills me 95th percentile
<azonenberg>
like a datacenter
<azonenberg>
Anyway LATENTRED will have 802.1q, ssh management via a physically dedicated interface (no bridging from management port to fabric, by design), rs232 management for IP config and initial bringup
<azonenberg>
maybe 802.3ad eventually
diadatp has joined ##openfpga
<azonenberg>
ability to turn interfaces on and off, view stats, force interfaces to a given speed
<rqou>
no in-band magic management vlans for you?
<azonenberg>
Full duplex only, half duplex is intentionally unsupported
<azonenberg>
it wont even advertise half capable
<azonenberg>
Then a few fun debug capabilities once i get time to implement them
<azonenberg>
not just port mirroring
rohitksingh has joined ##openfpga
<azonenberg>
i want the ability to take data coming in a port, add a header to each frame with a cycle-accurate timestamp and a few other bits of metadata, and forward it out inside a UDP packet / TCP segment or something along those lines
<azonenberg>
This capture will be done either at layer 2 like a normal switch, or at the MAC layer including preamble and FCS
<azonenberg>
i may eventually support raw line coding captures as well, so you can see the raw 8b10b symbols etc
<azonenberg>
straight off the clock recovery block
scrts has quit [Ping timeout: 240 seconds]
scrts has joined ##openfpga
rohitksingh has quit [Read error: Connection reset by peer]
rohitksingh has joined ##openfpga
diadatp has quit [Ping timeout: 248 seconds]
Bike has joined ##openfpga
bitd has quit [Remote host closed the connection]
rohitksingh1 has joined ##openfpga
rohitksingh has quit [Read error: Connection reset by peer]
bitd has joined ##openfpga
rohitksingh1 has quit [Quit: Leaving.]
<rqou>
whitequark: is there a logging framework for rust that allows multiple loggers?
rohitksingh has joined ##openfpga
rohitksingh has quit [Read error: Connection reset by peer]
Lord_Nightmare has quit [Ping timeout: 248 seconds]
Lord_Nightmare has joined ##openfpga
mumptai has quit [Quit: Verlassend]
diadatp has joined ##openfpga
diadatp has quit [Ping timeout: 256 seconds]
rohitksingh has joined ##openfpga
bitd has quit [Ping timeout: 265 seconds]
mumptai has joined ##openfpga
rohitksingh has quit [Quit: Leaving.]
rohitksingh has joined ##openfpga
diadatp has joined ##openfpga
<whitequark>
rqou: slog
rohitksingh has quit [Quit: Leaving.]
diadatp has quit [Ping timeout: 248 seconds]
bitd has joined ##openfpga
diadatp has joined ##openfpga
pie_ has joined ##openfpga
mithro has quit [*.net *.split]
diadatp has quit [Ping timeout: 248 seconds]
<pie_>
gee thanks <azonenberg> So, not in the actual chip datasheet
diadatp has joined ##openfpga
diadatp has quit [Ping timeout: 256 seconds]
diadatp has joined ##openfpga
mithro has joined ##openfpga
anuejn has quit [Ping timeout: 255 seconds]
AlexDaniel` has quit [Ping timeout: 240 seconds]
jfng has quit [Ping timeout: 260 seconds]
indefini has quit [Ping timeout: 258 seconds]
nrossi has quit [Ping timeout: 258 seconds]
sielicki has quit [Ping timeout: 255 seconds]
pointfree1 has quit [Ping timeout: 276 seconds]
rohitksingh has joined ##openfpga
rohitksingh has quit [Client Quit]
rohitksingh has joined ##openfpga
pie_ has quit [Ping timeout: 248 seconds]
rohitksingh has quit [Quit: Leaving.]
pie_ has joined ##openfpga
<whitequark>
cr1901_modern: ping
pie__ has joined ##openfpga
pie_ has quit [Ping timeout: 255 seconds]
<cr1901_modern>
whitequark: pong, but I'm heading out in like 5 mins
ym has joined ##openfpga
* cr1901_modern
has to go, I'll check logs when I'm done driving