<GitHub159>
buildbot-config/master 569fe8d whitequark: Use notice on IRC.
bb-m-labs has joined #m-labs
<sb0>
apparently the onchipUIS guys are doing a USB 3.0 PHY
<sb0>
if that works, there would be a radical solution to the GTX issues
<cr1901_modern>
I wonder how much more difficult it is than a 2.0 PHY? In a paper about bit stuffing, I read that a 2.0 PHY can be done in 1-2 KLOC; the hard part is digesting the spec.
<sb0>
and the "we want a hard CPU" people could also get one that doesn't stink
<sb0>
cr1901_modern, USB 3.0 is 5Gbps, so you have more analog issues
<sb0>
and KLOCs of what?
<cr1901_modern>
sb0: Erm, I amend my prev comment. The paper was most likely discussing only the Serial Interface Engine part of the PHY
<cr1901_modern>
and lines of verilog code
<cr1901_modern>
USB's physical layer is differential pairs, correct? So the analog bits (i.e. the part that onchip is making) are "just" sophisticated differential transceivers?
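As context for the bit-stuffing paper mentioned above: USB 2.0 inserts a 0 after six consecutive 1 bits so that NRZI clock recovery always sees a transition. A minimal Python sketch of that rule, illustrative only:

    # USB 2.0 bit stuffing: after six consecutive 1s, a 0 is stuffed
    # so the receiver's clock recovery always sees a transition.
    def bit_stuff(bits):
        out = []
        ones = 0
        for b in bits:
            out.append(b)
            if b == 1:
                ones += 1
                if ones == 6:
                    out.append(0)  # stuffed bit
                    ones = 0
            else:
                ones = 0
        return out

    assert bit_stuff([1] * 7) == [1] * 6 + [0, 1]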
<GitHub58>
conda-recipes/master 56c3545 Robert Jordens: jesd204b: bump
<rjo>
whitequark: i was thinking we could push that branching into the gateware. the gateware would serialize the rtio events received from the register interface and either send them downstream (drtio) or record them locally in the DMA sequence.
<rjo>
it needs to do that serialization anyway for DRTIO.
<whitequark>
rjo: perfect
<whitequark>
and it needs bidirectional DMA anyway
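A hypothetical software model of the branching rjo describes, with invented names and an invented record layout; the point is only that events are serialized once and then routed either downstream (DRTIO) or into a local DMA recording:

    import struct

    def serialize_event(channel, timestamp, address, data):
        # one fixed-size record per RTIO event (layout is made up)
        return struct.pack("<IQII", channel, timestamp, address, data)

    def submit(event_bytes, drtio_link=None, dma_buffer=None):
        if drtio_link is not None:
            drtio_link.send(event_bytes)    # send downstream (DRTIO)
        else:
            dma_buffer.extend(event_bytes)  # record locally for DMA replay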
<rjo>
whitequark: when we have the rust coroutines exposed as a better "with parallel", do you handle locks on e.g. the RTIO register API in software?
<whitequark>
rjo: i'm not sure i see the analogy between rust coroutines and "with parallel"
<rjo>
whitequark: iirc some time ago we wanted to (re-) implement "with parallel" so that each stmt is executed in a coroutine, i.e. without the starvation issue that the current implementation can suffer from.
<rjo>
was that all in my head?
<whitequark>
rjo: imo that plan is somewhere between "unnecessarily complicated" and "unrealistic"
<GitHub73>
artiq/phaser2 342b9e9 Robert Jordens: phaser: cap phy data width to 64 temporarily
<GitHub73>
artiq/phaser2 7664b22 Robert Jordens: phaser/conda: bump jesd204b
<GitHub73>
[artiq] jordens pushed 2 new commits to phaser2: https://git.io/vX5Ka
<whitequark>
I *think* the new LLVM coroutine support (which isn't even in 3.9 yet) will make that possible
<whitequark>
but it's very complicated, and it's not yet clear how it interacts with our memory management scheme
<sb0>
I don't think it's very complicated, but it's certainly slow
<sb0>
for each event the CPU would have to iterate over a list and select the soonest one
<sb0>
plus the context switches
<rjo>
i am ok with not doing it. i think the starvation issue is less likely to be an actual problem.
<whitequark>
each event?
<whitequark>
why would you need to do it for each event?!
<whitequark>
oh wait, I realize why
<sb0>
well you can have heuristics to not do it for each event
<whitequark>
yeah, this is even worse than I thought
<sb0>
but I think they'll still be slow
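A sketch of the per-event cost sb0 is pointing at: a coroutine-based "with parallel" must pick, for every event, the branch with the soonest timeline cursor, then switch into it. Even with a heap standing in for the linear scan, that is per-event bookkeeping plus a context switch. Python illustration, names invented:

    import heapq

    def run_parallel(branches):
        # branches: generators that yield their next event timestamp
        heap = []
        for i, b in enumerate(branches):
            t = next(b, None)
            if t is not None:
                heapq.heappush(heap, (t, i, b))
        while heap:
            t, i, b = heapq.heappop(heap)  # soonest event wins
            t_next = next(b, None)         # "context switch" into the branch
            if t_next is not None:
                heapq.heappush(heap, (t_next, i, b))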
rohitksingh has joined #m-labs
rohitksingh has quit [Quit: Leaving.]
<sb0>
sfps are fun, they really output all sorts of random data when the link is dying
<sb0>
this is of course spiced up by the xilinx cdrs that can't tell you when they're locked
<sb0>
and the garbage data output lasts for a good second
<sb0>
sfp and/or gtx, I don't know what the culprit is...
<sb0>
let me see if "no run length error for the past couple milliseconds" is a reliable test
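A sketch of that test in Python, with invented interfaces: declare the link good only after N consecutive error-free milliseconds, resetting the count on any run-length error:

    GOOD_MS = 5  # arbitrary threshold for "past couple milliseconds"

    def link_watchdog(error_seen_this_ms):
        # error_seen_this_ms: one boolean flag per elapsed millisecond
        clean_ms = 0
        for err in error_seen_this_ms:
            clean_ms = 0 if err else clean_ms + 1
            yield clean_ms >= GOOD_MS  # True once the link looks stable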
<rjo>
sure. those are limiting amplifiers. they crank up the gain until they think they see something.
<rjo>
the SFPs at least.
<rjo>
whitequark: can i pass a TList(TInt32) to a syscall by reference?
<rjo>
whitequark: oh. struct artiq_list?
<whitequark>
yep
<rjo>
which was removed...
<whitequark>
hrm, just recreate it
<rjo>
sure.
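A hedged reconstruction of what the removed struct looked like, expressed with ctypes for illustration: a TList(TInt32) lowers to a (length, element pointer) pair, and a syscall taking the list by reference receives a pointer to that struct. The field order here is from memory, not verified against the old runtime:

    import ctypes

    class artiq_list(ctypes.Structure):
        # length first, then a pointer to the elements (assumed layout)
        _fields_ = [("length", ctypes.c_int32),
                    ("elements", ctypes.POINTER(ctypes.c_int32))]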
FabM has quit [Quit: ChatZilla 0.9.93 [Firefox 45.5.0/20161115214506]]