<sb0>
whitequark, btw what power supplies and atx connectors did you order for sayma?
<sb0>
and are they in the lab already?
<whitequark>
sb0: oh I decided to just go to SSP and grab the video card power adaptors alongside the media converter
<whitequark>
that's the absolute easiest variant, no fancy expensive power supply to order
<sb0>
okay. there is a 12V 5A power supply on the table, should be enough for 3 cards without RTM
<whitequark>
those video card power adaptors attach to 4-pin Molex or SATA power connectors on the other side
<whitequark>
so any regular ATX PSU works
<whitequark>
I don't want to bother wiring anything, I'll just grab one alongside
<sb0>
you'll have to wire stuff to make the PSU start
<whitequark>
well I suppose I'll manage to use a whole one jumper
<sb0>
and are those PSUs happy with a load on 12V only?
<whitequark>
sure
<sb0>
well that's pretty far from normal operation
<whitequark>
they're typically rated far higher than 5A@12V anyway, since 12V is the rail video cards and hard drives suck most power from
<whitequark>
if you insist I can go the other way, but this seems safer to me; these PSUs are probably some of the most widely tested ones
<GitHub138>
[artiq] whitequark commented on issue #685: @dhslichter You wanted an order of magnitude faster? Well, how about ~2.2 MB/s? The throughput graph looks like this now. I believe it's limited by TCP Slow Start at the very beginning:... https://github.com/m-labs/artiq/issues/685#issuecomment-326161749
<whitequark>
sb0: ^
<sb0>
ok, well I'm fine with ATX if it really works correctly
<whitequark>
those ATX supplies are really very resilient, you can do crazy shit with them like connecting two in series and powering a load like a Tesla coil, they eat it and feel fine
<sb0>
I'm just worried about some cheap design going out of regulation due to the unusually unbalanced load and causing some damage, especially when combined with the high current capability you mention
<whitequark>
well I'm not going to buy the cheapest supply SSP can offer, right?
<sb0>
rjo, I'll let you refactor the AD9154 and AD9154JESD classes that are currently in phaser.py and I believe need some cleanup and reorganization, and put the equivalent code in sayma_amc_standalone.py
<GitHub153>
[misoc] sbourdeauducq pushed 1 new commit to master: https://git.io/v5lnK
<GitHub153>
misoc/master 30e5e79 Sebastien Bourdeauducq: cpu_interface: export CSR size instead of number of buswords in CSV
<GitHub65>
artiq/sinara 9edff2c Sebastien Bourdeauducq: remote_csr: interpret length as CSR size, not number of bus words
<sb0>
_florent_, you should be able to put the clocking code here, simply referencing csr::converter_spi (see ad* files in the same folder), and it should traverse the bridge etc.
<whitequark>
sb0: ah sorry looks like I won't make it to SSP in time today, it closes at 1900...
<whitequark>
it's 0200 over in the US though so I suppose next morning is just as good
<key2>
hey, what would be an easy way to have multiple FPGA bitstreams in a SPI NOR flash, and choose from the FPGA which one to reboot into?
<cr1901_modern>
key2: ice40 has a COLDBOOT/WARMBOOT primitive. Maybe that could give you ideas
<cr1901_modern>
(well it's called SB_WARMBOOT*)
<key2>
yeah but I'm on Xilinx 7 series
<cr1901_modern>
key2: Oh, I don't know the primitives for Xilinx, but they must exist
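(On 7-series the equivalent mechanism is the ICAPE2 primitive: shift in the IPROG command sequence from UG470, with WBSTAR pointing at the flash address of the bitstream to boot into. Below is a minimal, untested Migen sketch; the command words and the per-byte bit swap are from UG470/XAPP1247, while the module name, FSM, and port names are purely illustrative.)

    from migen import *

    def _icap_bitswap(word):
        # The ICAP expects the bits within each byte reversed
        # (UG470, "Bit Swapping").
        result = 0
        for n in range(4):
            byte = (word >> (8 * n)) & 0xFF
            result |= int("{:08b}".format(byte)[::-1], 2) << (8 * n)
        return result

    class ICAPReboot(Module):
        def __init__(self, bitstream_addr):
            self.trigger = Signal()  # pulse to reboot into the selected image

            # IPROG sequence from UG470: sync up, load WBSTAR with the flash
            # address of the target bitstream, then issue the IPROG command.
            words = [_icap_bitswap(w) for w in [
                0xFFFFFFFF,      # dummy word
                0xAA995566,      # sync word
                0x20000000,      # NOOP
                0x30020001,      # type 1 write, 1 word, to WBSTAR
                bitstream_addr,  # warm boot start address in flash
                0x30008001,      # type 1 write, 1 word, to CMD
                0x0000000F,      # IPROG command
                0x20000000,      # NOOP
            ]]

            icap_i = Signal(32)
            icap_csib = Signal(reset=1)
            index = Signal(max=len(words) + 1)
            running = Signal()

            # Present one command word per cycle with CSIB asserted (low);
            # once IPROG is clocked in, the FPGA restarts configuration.
            self.sync += [
                If(~running,
                    icap_csib.eq(1),
                    If(self.trigger,
                        running.eq(1),
                        index.eq(0)
                    )
                ).Else(
                    If(index == len(words),
                        icap_csib.eq(1),
                        running.eq(0)
                    ).Else(
                        icap_csib.eq(0),
                        icap_i.eq(Array(words)[index]),
                        index.eq(index + 1)
                    )
                )
            ]

            self.specials += Instance("ICAPE2",
                p_ICAP_WIDTH="X32",
                i_CLK=ClockSignal(),
                i_CSIB=icap_csib,
                i_RDWRB=0,   # always writing
                i_I=icap_i
            )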
<GitHub19>
[smoltcp] whitequark pushed 2 new commits to master: https://git.io/v58vF
<GitHub19>
smoltcp/master 7e2dc1a whitequark: Allow querying the size of the TCP transmit and receive buffers....
<GitHub19>
smoltcp/master b6f7529 whitequark: TCP socket debug messages "sending <flags>" should be at DEBUG level....
<travis-ci>
m-labs/smoltcp#201 (master - 7e2dc1a : whitequark): The build passed.
<GitHub132>
[smoltcp] whitequark pushed 2 new commits to master: https://git.io/v583X
<GitHub132>
smoltcp/master fac42e9 whitequark: Fix an inaccurate comment.
<GitHub132>
smoltcp/master 01021e4 whitequark: Fix an unused import warning.
<GitHub68>
[artiq] cjbe commented on issue #685: Ooops - that build (which conda install gave me) was from the sinara branch, hence does not have the latest fixes. Despite that, using 3.0.dev+1280.g20f43d57 I see the same problem. ... https://github.com/m-labs/artiq/issues/685#issuecomment-326318264
<GitHub38>
[artiq] whitequark commented on issue #685: @jordens No timeouts (retransmissions on the transmit side and out-of-order packets on the receive side go into the log at WARN level, easily noticeable). The quiet periods are all on the host side. One of them is the compiler latency, the other I'm not sure. https://github.com/m-labs/artiq/issues/685#issuecomment-326329190
<rjo>
whitequark: could you send me the packet dump?
<GitHub22>
[artiq] whitequark commented on issue #685: @jordens Actually, I'm not sure why the graph above is so pessimistic. Here I ran three kernels in rapid succession. It takes about 9ms from the end of one kernel to the start of another. No 200 ms quiet periods:... https://github.com/m-labs/artiq/issues/685#issuecomment-326334617
<whitequark>
rjo: updated
<whitequark>
rjo: looks like cjbe's machine does not do fast retransmit.
<whitequark>
I send four duplicate ACKs and all I get back is exponential timeout
<rjo>
whitequark: i have never seen that.
<whitequark>
rjo: look at his dump
<rjo>
ah. i missed that
<rjo>
whitequark: packet 113 is a fast retransmit.
<rjo>
it falls apart after that.
<GitHub126>
[smoltcp] steffengy commented on issue #36: To clarify, since it wasn't really clearly worded:... https://git.io/v58RY
<GitHub163>
[smoltcp] whitequark commented on issue #36: Yeah, I understood. I haven't had time to flesh out the ideas I had, hence the silence. https://git.io/v58RB
<whitequark>
rjo: why though?
<whitequark>
why doesn't e.g. packet 170 trigger fast retransmit
<rjo>
whitequark: do you also get those out-of-order packets on your machine (121, 138, 139 etc)?
<rjo>
whitequark: i wonder what the proper behavior is if you are only strictly handling packets in order. don't you need to quench the window or something like that?
<rjo>
since with duplicate acks (for fast retransmit) you can at most signal the loss of "the next" packet, the other side still needs to painfully figure out that you have not accepted any of the following packets.
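(The sender-side rule under discussion is the one from RFC 5681: an ACK that doesn't advance the window is a duplicate, and three duplicates in a row trigger retransmission of the segment at the left edge of the window without waiting for the RTO. A toy sketch of just that logic, ignoring SACK and window updates; all names are illustrative:)

    DUP_ACK_THRESHOLD = 3  # per RFC 5681

    class FastRetransmitSender:
        def __init__(self, retransmit):
            self.snd_una = 0       # left edge of window: oldest unacked seq
            self.dup_acks = 0
            self.retransmit = retransmit  # callback: resend segment at seq

        def on_ack(self, ack_seq):
            if ack_seq > self.snd_una:
                # New data acknowledged: window advances, counter resets.
                self.snd_una = ack_seq
                self.dup_acks = 0
            elif ack_seq == self.snd_una:
                # Duplicate ACK: the receiver is still missing snd_una.
                self.dup_acks += 1
                if self.dup_acks == DUP_ACK_THRESHOLD:
                    # Fast retransmit: resend the presumed-lost segment
                    # instead of waiting for the retransmission timeout.
                    self.retransmit(self.snd_una)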
<whitequark>
I don't signal SACK so what's the difference?
<whitequark>
if it doesn't retransmit "the next" packet we will get nowhere
<rjo>
so it tries to do two things at once: feed you lost packets at the low end of the window and continue feeding at the "far" end of the window.
<whitequark>
hmm
<rjo>
but that is just a wild guess and i am shaky on tcp.
<GitHub168>
[smoltcp] whitequark pushed 1 new commit to master: https://git.io/v58uz
<GitHub168>
smoltcp/master 017210e whitequark: Exhaustion of transmit buffers should not be a reportable error.
<travis-ci>
m-labs/smoltcp#203 (master - 017210e : whitequark): The build passed.
<GitHub162>
[artiq] whitequark commented on issue #685: > Note that this is only host to core device though, core device to host is still 300 kB/s. [...] I think it might be an artifact in the test.... https://github.com/m-labs/artiq/issues/685#issuecomment-326354879
<whitequark>
rjo: so... the only remaining issue AFAICT is this lack of fast retransmit
<whitequark>
I might have to rethink window management
<whitequark>
the fault injector implements a typical token bucket rate limiter, to approximate our situation with buffers.
<whitequark>
--rx-rate-limit sets the bucket size in packets.
<whitequark>
--shaping-interval sets the interval at which the bucket is refilled (in full)
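(The limiter as described behaves roughly like the following sketch; the actual fault injector lives in smoltcp, and these names are illustrative:)

    import time

    class TokenBucket:
        """Admit at most `size` packets per `interval`; the bucket is
        refilled in full at each interval, not continuously."""
        def __init__(self, size, interval):
            self.size = size          # --rx-rate-limit: bucket size, packets
            self.interval = interval  # --shaping-interval: refill period, s
            self.tokens = size
            self.last_refill = time.monotonic()

        def admit(self):
            now = time.monotonic()
            if now - self.last_refill >= self.interval:
                self.tokens = self.size  # refill in full
                self.last_refill = now
            if self.tokens > 0:
                self.tokens -= 1
                return True   # deliver this packet
            return False      # bucket empty: drop it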
<whitequark>
if the interval is 5ms+ then TCP works just fine, specifically there are fast retransmits and regular retransmits, but no delay over 200ms is ever observed
<whitequark>
if the interval is 1ms then there's this exponential behavior
<whitequark>
rjo: ok, indeed it looks like I can reproduce this problem in a controlled environment
<GitHub171>
[artiq] gkasprow commented on issue #813: @sbourdeauducq This mode would be very slow because only SPIx1 can be used. I tried this approach but didn't manage to configure any of the FPGAs. That's why I resoldered resistors and the RTM FPGA is loaded directly by the AMC FPGA. Another reason could be the long unterminated stub of the config clock. I didn't investigate it further because you requested direct configuration in slave mode and I added this Xilinx co
<whitequark>
it appears that limiting the size of the receive window to mss * number of packet buffers actually *causes* this
<whitequark>
because the sender doesn't get quite enough challenge acks to trigger fast retransmit
<whitequark>
but if I raise it, it still shows behavior that includes exponential backoff, just locally
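(Illustrating the failure mode with made-up numbers; the buffer count here is an assumption:)

    MSS = 1460              # bytes per segment
    PACKET_BUFFERS = 4      # assumed number of rx packet buffers

    rx_window = MSS * PACKET_BUFFERS   # 5840 bytes advertised

    # With the window clamped this tightly, if the first in-flight segment
    # is lost, at most PACKET_BUFFERS - 1 further segments fit inside the
    # window, so the sender sees at most that many duplicate ACKs -- right
    # at or below the fast-retransmit threshold of three -- and falls back
    # to the RTO with exponential backoff instead.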
<whitequark>
i wonder why lwip didn't trigger this...
<GitHub57>
[smoltcp] batonius opened issue #37: The `ping` example is currently broken. https://git.io/v543l
<GitHub165>
[smoltcp] batonius opened pull request #38: Refactor Socket::process into Socket::would_accept and Socket::process_accepted (master...would_accept) https://git.io/v54WG