<balrog>
efnet, oftc, and espernet also got hit, at least
<balrog>
(in other words, someone hired a botnet)
<azonenberg>
Lol yeah
soylentyellow has quit [Read error: Connection reset by peer]
pie_ has quit [Ping timeout: 248 seconds]
pie_ has joined ##openfpga
pie_ has quit [Ping timeout: 240 seconds]
pie_ has joined ##openfpga
<eduardo__>
Anyone here interested in going to 34C3 and not having a ticket?
<eduardo__>
whitequark: It might be a necessity for me to open a company in HK. I will need to find out all the details. Great to know that there is an insider.
<rqou>
um, i'm technically an HK "insider" (but i don't know anything about opening a company)
pie_ has quit [Ping timeout: 256 seconds]
<rqou>
offtopic: i've been researching high vacuum systems and i'm surprised how simple they are to build in theory
eduardo_ has joined ##openfpga
eduardo__ has quit [Ping timeout: 240 seconds]
<rqou>
offtopic: whitequark: i just tried your "gpioke" tool on some of my machines and it failed on all three of them
<whitequark>
rqou: failed how
<whitequark>
where are the logs? "failed" says nothing
<whitequark>
libgen doesn't have many ieee standards afaik
<whitequark>
let me see
<whitequark>
what do you want exactly
<felix_>
sci-hub?
<qu1j0t3>
probably won't work for $$$$ standards documents.
<pie_>
"free IEEE standards"
<pie_>
" I can't figure out how"
<qu1j0t3>
oh, FREE.
<qu1j0t3>
i missed that word.
<pie_>
oh god i just made python take a huge valueerror dump on my screen
<qu1j0t3>
not hard to do
<qu1j0t3>
using python is dump city
wpwrak has joined ##openfpga
wpwrak has quit [Remote host closed the connection]
wpwrak has joined ##openfpga
<whitequark>
felix_: scihub is only for scientific papers
<felix_>
it had all ieee standards i wanted to have a look at
soylentyellow has joined ##openfpga
<pie_>
i meant to say libgen
<qu1j0t3>
you did say libgen.
<pie_>
oh ok
<pie_>
the advantage to slow code is it makes you think more while trying to figure out what's broken
<pie_>
the advantage is also the disadvantage...slow test
<pie_>
(i should write more unit tests)
soylentyellow has quit [Ping timeout: 255 seconds]
<pie_>
man i just hit some *really* weird bug
digshadow has quit [Ping timeout: 255 seconds]
soylentyellow has joined ##openfpga
soylentyellow has quit [Ping timeout: 265 seconds]
pie_ has quit [Remote host closed the connection]
pie_ has joined ##openfpga
soylentyellow has joined ##openfpga
Dolu has joined ##openfpga
<awygle>
Do 1g SFPs speak 1000BASE-X or (S/R)GMII or both?
<pie_>
i just got bit in the butt by python dynamic typing
cr1901_modern has quit [Read error: Connection reset by peer]
cr1901_modern has joined ##openfpga
pie_ has quit [Ping timeout: 265 seconds]
digshadow has joined ##openfpga
pie_ has joined ##openfpga
pie_ has quit [Quit: Leaving]
pie_ has joined ##openfpga
<pie_>
can anyone help me with some python? https://pastebin.com/6n2mf5Gm i need to run about 700,000 entries on this and it drastically slows down around 70000 (i have enough ram, each entry is a tiny object)
m_w has quit [Quit: leaving]
m_w has joined ##openfpga
<awygle>
pie_: dictionary resizing?
<pie_>
ah sorry, kinda lost track, someone gave me a pointer: the issue was using tuple() instead of list()
<pie_>
i was recreating tuples a lot and list appending worked much better
<pie_>
went from will never finish in this universe to practically instant
<awygle>
That was gonna be my next suggestion, object creation
<pie_>
i did find it weird because dict insertion is *supposed* to be O(1)
<pie_>
and these aren't complicated objects anyway, they're practically just a bunch of pointers
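The tuple-vs-list slowdown pie_ hit can be sketched as follows. This is an illustrative reconstruction, not pie_'s actual code: rebuilding a tuple on every insert copies every existing element, so n inserts cost O(n^2), while list.append is amortized O(1).

```python
import timeit

def build_tuple(n):
    """n inserts by tuple concatenation: each step allocates a brand-new
    tuple and copies all prior elements, so total work is O(n^2)."""
    acc = ()
    for i in range(n):
        acc = acc + (i,)
    return acc

def build_list(n):
    """n inserts by list.append: amortized O(1) per append, O(n) total."""
    acc = []
    for i in range(n):
        acc.append(i)
    return acc

n = 10_000
t_tuple = timeit.timeit(lambda: build_tuple(n), number=1)
t_list = timeit.timeit(lambda: build_list(n), number=1)
print(f"tuple rebuild: {t_tuple:.4f}s, list append: {t_list:.4f}s")
```

The gap widens quadratically with n, which matches "fine at small sizes, falls off a cliff around 70,000 entries".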
<azonenberg>
awygle: SFPs are a pure layer 1 optical to electrical converter
<azonenberg>
The interface is one differential pair each way
<azonenberg>
Although it's theoretically possible to run SGMII over them, all standard ethernet equipment expects 1000BASE-X line coding
<azonenberg>
Which is basically just 8b/10b coded GMII data serialized out to 1.25 Gbps
<azonenberg>
then tx_en and tx_er are coded as special 8b/10b control characters
<azonenberg>
SGMII is very similar to 1000BASE-X but with a couple of key changes... it always runs at 1.25 Gbps but allows 10/100 mode by repeating bytes 100/10 times, and it has a source-synchronous clock next to the data that you can use instead of CDR
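The rate-adaptation trick azonenberg describes can be sketched in a few lines. This is purely illustrative (not a PHY model, and it ignores 8b/10b coding and control characters): the SGMII line always runs at gigabit rate, and slower MAC speeds are carried by repeating each data byte 10 or 100 times.

```python
# Repetitions per byte for each MAC speed in Mb/s, per the description above:
# the serial line rate stays constant and slower speeds repeat bytes.
REPEAT = {1000: 1, 100: 10, 10: 100}

def sgmii_adapt(payload: bytes, speed_mbps: int) -> bytes:
    """Expand MAC-side bytes to line-side bytes by repetition."""
    r = REPEAT[speed_mbps]
    return bytes(b for byte in payload for b in [byte] * r)

frame = b"\x55\xd5"  # two arbitrary bytes, purely for illustration
print(len(sgmii_adapt(frame, 1000)))  # 2   (no repetition at gigabit)
print(len(sgmii_adapt(frame, 100)))   # 20  (each byte sent 10 times)
print(len(sgmii_adapt(frame, 10)))    # 200 (each byte sent 100 times)
```

The receiver decimates by the same factor, so the MAC sees the original byte stream at the negotiated speed.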
<azonenberg>
[R]GMII are parallel interfaces that are typically used to talk to copper PHY chips, however i am aware of at least one PHY made by Microsemi that bridges [R]GMII to SGMII/1000BASE-X
<azonenberg>
Which can be used to drive a SFP from a chip without GTPs
<awygle>
azonenberg: double checking, SGMII is 625MHz DDR, so an i/o being able to do SGMII doesn't necessarily mean that it can do 1000BASE-X, yes?
<azonenberg>
eh, they're essentially the same thing
<azonenberg>
625 DDR / 1250 SDR
<azonenberg>
The real difference is that SGMII has a reference clock supplied (you can use it or recover a clock from the data, your choice - but TX must provide it)
<azonenberg>
1000base-x does not have a refclk and requires clock recovery
<azonenberg>
So you need to be able to recover a clock to use it
<awygle>
I mean one toggles twice as fast as the other, right? So it requires twice as fast of an I/O
<awygle>
?
<azonenberg>
Nope
<azonenberg>
625 MHz DDR = 1250 MT/s
<azonenberg>
1250 MHz SDR = 1250 MT/s
<azonenberg>
The only difference is the clocking structure at the input
<awygle>
I feel like I'm missing something basic here
<azonenberg>
The speed that matters in the IO isnt how fast the clock toggles
<azonenberg>
it's how fast the data toggles
<azonenberg>
If you have a clock at half the speed and you use both edges
<azonenberg>
the edges per second is the same
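The arithmetic azonenberg is walking through can be written out directly: what stresses the I/O is the data rate in transfers per second, and DDR at half the clock gives the same number as SDR at the full clock.

```python
def transfer_rate_mts(clock_mhz: float, ddr: bool) -> float:
    """Mega-transfers per second: DDR clocks data on both clock edges,
    SDR on one, so DDR moves two transfers per clock cycle."""
    return clock_mhz * (2 if ddr else 1)

print(transfer_rate_mts(625, ddr=True))    # 1250.0 -> SGMII-style 625 MHz DDR
print(transfer_rate_mts(1250, ddr=False))  # 1250.0 -> 1000BASE-X-style SDR
```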
<awygle>
Ah ah OK I get where I went wrong now
<awygle>
The DDR flip flop is kind of a clock doubler. If you squint.
<azonenberg>
Literally the only difference between SDR data and DDR at half the clock rate
<azonenberg>
is the speed of the reference clock
<azonenberg>
in fact, CoolRunner has a clock divider on the global clock tree that can be used to reduce power
<awygle>
I am now up to speed. That was dumb of me.
<azonenberg>
Make every internal flipflop DDR and halve the global clock speed
<azonenberg>
you just halved the power consumption of the clock tree
<azonenberg>
while maintaining the same system performance
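The CoolRunner power argument follows from the usual first-order model of dynamic power, P = C · V² · f: halving the clock frequency halves clock-tree dynamic power while DDR flops preserve throughput. A back-of-envelope sketch with made-up illustrative numbers (not actual CoolRunner figures):

```python
def dynamic_power_mw(c_nf: float, v_volts: float, f_mhz: float) -> float:
    """First-order dynamic power P = C * V^2 * f.
    C in nF and f in MHz cancel to give milliwatts."""
    return c_nf * v_volts ** 2 * f_mhz

# Hypothetical clock tree: 1 nF switched capacitance at 1.8 V.
full = dynamic_power_mw(1.0, 1.8, 100)  # SDR flops, 100 MHz clock
half = dynamic_power_mw(1.0, 1.8, 50)   # DDR flops, 50 MHz, same throughput
print(full, half)  # half is exactly full / 2
```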
<azonenberg>
They're one of the few FPGA/CPLD devices with internal DDR FFs (as opposed to only having DDR FFs in the IO buffers)
<azonenberg>
Going back to SGMII vs 1000baseX, from the TX side, the jitter requirements on 1000baseX are slightly tighter
<azonenberg>
So it's possible to meet SGMII spec and not quite 1000baseX
<awygle>
So if your I/O can handle 1000BASE-X/SGMII, the only reason to use an SGMII PHY is to use 1000BASE-T
<azonenberg>
Correct
<awygle>
Arright copy. Thanks.
<azonenberg>
Actually, let me rephrase
<azonenberg>
the only reason is to use BASE-T in general
<azonenberg>
Since SGMII can operate in 10/100 mode
<awygle>
Right
<azonenberg>
but 1000base-X is forced gig/full
<azonenberg>
1000base-X has autonegotiation but it's pretty useless, i dont know why its there
<azonenberg>
there are essentially no parameters to negotiate
<azonenberg>
i think you might be able to do half duplex but gig/half over two fibers is insane so nobody does it and i doubt most hardware supports it
<azonenberg>
in 10G iirc they removed autoneg from the optical link layer
<azonenberg>
And kept it in copper obvs since copper can negotiate multiple speeds
<awygle>
Sure
<awygle>
Any thoughts on 2.5G or 5G?
<azonenberg>
Halfway measures that might be nice to have but honestly i'd just roll out 10G
<azonenberg>
i suspect the copper PHYs will have the same fundamental flaws as 10G
<azonenberg>
massive DSP so power hungry with high latency
<azonenberg>
And probably will take a long time to reach commodity "you can buy PHYs in qty 1 on digikey with no NDA" status
<azonenberg>
meanwhile SFPs are available in qty 1 off the shelf
<azonenberg>
and SFP cages/connectors are cheap
<azonenberg>
I'm looking past 2.5 and 5 to the point that the next ethernet standard i'm seriously considering is 40G :p
<azonenberg>
When i renovate this house i'm putting four fibers to every room, to be used as multiple 1/10G lanes initially but 40G is very much on the table longer term
<azonenberg>
well four fiber pairs
<awygle>
That's basically what I concluded as well, thanks. Too bad 25G is so hard to do.
<azonenberg>
Also hmm
<azonenberg>
For only a little bit more per cable ($30 extra for a 20m cable) I could run twelve instead of eight fibers
<azonenberg>
that's enough for two duplex 10G links on top of a 40G link
<azonenberg>
I might do that to a few rooms so i have extra bandwidth to say my desk
<azonenberg>
a few dedicated p-p 10G links and then 40G to a switch
<awygle>
I can never quite talk myself into buying a rack...
pie_ has quit [Ping timeout: 248 seconds]
soylentyellow has quit [Ping timeout: 255 seconds]
<awygle>
I can't find SGMII in 802.3-2015... Hm.
<awygle>
Oh because it's a Cisco standard. Weird.
<azonenberg>
Yes SGMII is basically cisco's way of hooking phys up to their asics
<azonenberg>
but now other folks make it
<azonenberg>
And RGMII is i think a broadcom + marvell standard?
<azonenberg>
its an open industry spec freely available, but not in 802.3
<awygle>
There's a ti SGMII PHY looks like. Little more expensive than the KSZ9031...
lexano has quit [Quit: Leaving]
<azonenberg>
Yeah honestly i wouldnt use anything but RGMII in a new design
<azonenberg>
unless i was trying to put a zillion phys on something like a switch
<awygle>
I was doing math for a notional 20 port switch
<azonenberg>
So, my plan is a 24+2 port switch
<azonenberg>
using a 7a200t in ffg1156
<azonenberg>
16 GTPs, half used as eight 1000base-X SFPs
<azonenberg>
and the other half used as two XAUI links to 10gbps SFP+s
<azonenberg>
Then another 16 RGMII ports using KSZ9031s
<azonenberg>
I need the 200t in ffg1156 to get enough transceivers (I wanted a lot of 1g optic ports as well as the 10g stuff)
<azonenberg>
at which point i have 500 GPIOs
<azonenberg>
and i can easily afford a lot of RGMII lanes
<awygle>
I was looking at 20+2+2 in a cyclone 10 GX