nicoo has quit [Remote host closed the connection]
nicoo has joined #glasgow
mwk has quit [Ping timeout: 260 seconds]
GNUmoon has joined #glasgow
samlittlewood has quit [Quit: samlittlewood]
samlittlewood has joined #glasgow
Eli2_ has joined #glasgow
Eli2 has quit [Ping timeout: 265 seconds]
GNUmoon has quit [Ping timeout: 240 seconds]
jn__ has joined #glasgow
egg|anbo|egg has joined #glasgow
egg|anbo|egg has quit [Remote host closed the connection]
egg|anbo|egg has joined #glasgow
richbridger has joined #glasgow
mwk has joined #glasgow
GNUmoon has joined #glasgow
egg|anbo|egg has quit [Remote host closed the connection]
pie_ has quit [Quit: pie_]
pie_ has joined #glasgow
Eli2_ has quit [Remote host closed the connection]
nyanotech has quit [Ping timeout: 250 seconds]
nyanotech has joined #glasgow
nyanotech has quit [Ping timeout: 245 seconds]
fridtjof[m] has joined #glasgow
nyanotech has joined #glasgow
<fridtjof[m]>
to hell with this idle kicking thing
<fridtjof[m]>
there's apparently a bug (might file an issue later) where your Matrix client (or maybe only some clients, not sure) will just drop rooms you're no longer part of from the room list
<fridtjof[m]>
There's a "Historical" section (which does contain two rooms i just left on my own), but it's so unreliable that I had to remember the channels I was in off the top of my head. It kind of felt like it was actively gaslighting me
<whitequark[m]>
Element?
<fridtjof[m]>
yeah
<whitequark[m]>
yes, it does have some bugs that feel like it's gaslighting you
<whitequark[m]>
i'm very unhappy about it
<agg>
the irc bridge kicking thing is really annoying, not least because it's not clear how good it is about the "30 days idle" thing
<whitequark[m]>
the developers do seem to care at least
<agg>
but it seems like a complicated political issue more than anything
<whitequark[m]>
yes
<agg>
i don't fully understand freenode's policy given the prevalence of discord/mattermost/etc bots anyway, though the matrix "one connection per bridged user" thing is nicer to interact with
<agg>
but we have so many people kicked from the rust-embedded channel who don't even care about irc anyway, which is annoying
<whitequark[m]>
they seem to dislike connection storms when the bridge flops
<agg>
as though freenode never has netsplits
<fridtjof[m]>
Yeah, the puzzling part is that it does not seem to properly account for read status when kicking people. I regularly read this channel, yet I got kicked anyway
<fridtjof[m]>
From skimming related issues, there might have been performance issues that kept them from fixing this...?
<agg>
huh, i hope that gets deployed then, looks like it would help
<agg>
i thought it already used presence to detect idleness rather than just whether you'd sent messages, jeez
<fridtjof[m]>
at least it does seem to keep history, and "only" lets the room fall off the list
<russss>
yeah I feel like these are Matrix's issues rather than Freenode's
<sknebel>
FWIW, Freenode does allow smaller bridges that behave (=are stable) to turn that setting off
<agg>
maybe we should self-host a bridge for the one channel then, even in the worst case flop it would only affect us and total user count would be low, hum
<fridtjof[m]>
I think most of the annoyances associated with IRC autokick could be alleviated simply by fixing element's UX around kicking/leaving and historical rooms
<agg>
or just remove them from the irc side and re-connect them when they say something, but that's where it hits the policy issue perhaps
<russss>
fwiw freenode has no problem with us (irccloud) keeping users connected while they're active or paying. We only auto-disconnect users if they're non-paying after 2 hours of inactivity (based on whether they have the app open, not whether they're talking).
<russss>
so this is why I say I suspect it's Matrix's problem, I don't see why they shouldn't keep people connected if they're reading the channel
<russss>
I also don't feel like Matrix's gateway is substantially less reliable than irccloud these days.
Foxyloxy has quit [Quit: Leaving]
<sorear>
how is the bridge these days on randomly losing messages or delaying them for over an hour
FFY00_ has quit [Read error: Connection reset by peer]
FFY00_ has joined #glasgow
<whitequark[m]>
try it
bvernoux has joined #glasgow
egg|anbo|egg has joined #glasgow
egg|anbo|egg has quit [Remote host closed the connection]
egg|anbo|egg has joined #glasgow
Foxyloxy has joined #glasgow
ali_as has quit [Ping timeout: 240 seconds]
<gruetzkopf>
i should (but freenode is one of the more reliable networks for my bouncer connection)
<sorear>
if it's relatively reliable then I won't get much information from brief casual use
tflummer3 has joined #glasgow
analprolapse_ has joined #glasgow
smeding_ has joined #glasgow
svenpeter has quit [Ping timeout: 276 seconds]
analprolapse has quit [Ping timeout: 276 seconds]
emily has quit [Ping timeout: 276 seconds]
midnight has quit [Ping timeout: 276 seconds]
tflummer has quit [Ping timeout: 276 seconds]
smeding has quit [Ping timeout: 276 seconds]
analprolapse_ is now known as analprolapse
tflummer3 is now known as tflummer
midnight has joined #glasgow
svenpeter has joined #glasgow
emily has joined #glasgow
smeding_ is now known as smeding
jstein has joined #glasgow
Eli2 has joined #glasgow
bvernoux1 has joined #glasgow
bvernoux has quit [Ping timeout: 252 seconds]
<fest>
hmm, I've read in the IRC logs that it should be easy to saturate the USB port. it appears I'm overrunning the FIFO in no time, so I wonder how exactly I'm supposed to hold it correctly
<whitequark[m]>
what are you doing?
<fest>
so far I've got an iface.read(339*256*2) in the applet's run(), and ~6MiB/s worth of FIFO writes in the design. in the design I'm bringing the FIFO's w_rdy signal out to a pin, which I observe with an external logic analyzer. I'm seeing it go low after ~4k bytes (in_fifo depth)
<fest>
the data I receive until the FIFO overflows is correct (every line from the sensor starts with a known pattern)
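A minimal sketch of the gateware side being described, assuming the nMigen-based Glasgow APIs of the time; the counter stands in for the real sensor datapath, and the subtarget and signal names are hypothetical:

    from nmigen import *

    class SensorSubtarget(Elaboratable):
        def __init__(self, in_fifo):
            self.in_fifo  = in_fifo
            # mirror of ~w_rdy, routed to a spare pin for an external logic analyzer
            self.overflow = Signal()

        def elaborate(self, platform):
            m = Module()
            data = Signal(8)                    # stand-in for the sensor byte stream
            m.d.sync += data.eq(data + 1)
            m.d.comb += [
                self.in_fifo.w_data.eq(data),
                self.in_fifo.w_en.eq(1),        # a real design gates this on data-valid
                self.overflow.eq(~self.in_fifo.w_rdy),
            ]
            return m

Once w_rdy drops, further writes are silently discarded, which matches data going bad after ~4k bytes.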
<whitequark[m]>
are you calling iface.read in a loop?
<fest>
so far I've got just one iface.read call, and in the logic analyzer I'm triggering on the very start of execution
<fest>
oh,
<fest>
mm, no, I wouldn't get the right amount of data if iface.read returned early
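For reference, a sketch of the two host-side patterns under discussion, assuming the demultiplexer API of the time; one large read blocks until every byte has arrived, while a loop of smaller reads drains data as it streams in (the LINE/LINES figures come from the messages above):

    # inside the applet class
    LINE  = 339 * 2   # bytes per sensor line
    LINES = 256       # lines per frame

    async def run(self, device, args):
        iface = await device.demultiplexer.claim_interface(
            self, self.mux_interface, args)
        buf = bytearray()
        while len(buf) < self.LINE * self.LINES:   # loop of smaller reads,
            buf += await iface.read(self.LINE)     # vs. one read(LINE * LINES)
        return buf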
<whitequark[m]>
you're hitting a buffering issue, most likely
<fest>
last time I worked with the FX2 I was seeing such issues when not enough transfers had been enqueued
<whitequark[m]>
use the `-vvv` CLI option to see how the transfers are queued
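That is, something along these lines (assuming the verbosity flag goes before the subcommand, as with the rest of the Glasgow CLI; the applet name is a placeholder):

    glasgow -vvv run <applet> ...

The debug output then shows each USB transfer as it is submitted and completed, which makes it easy to spot the queue running dry.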
<fest>
hmm, it appears I'm receiving data in small chunks, so I probably have to look at how flushing works
<fest>
yep, disabling auto_flush helped
<fest>
thanks for the pointer!
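For the record, a sketch of where that knob lives, assuming get_in_fifo() took an auto_flush argument at the time; with auto-flush disabled the gateware only forwards full USB packets instead of flushing partial ones, trading latency for throughput:

    def build(self, target, args):
        self.mux_interface = iface = target.multiplexer.claim_interface(self, args)
        iface.add_subtarget(SensorSubtarget(
            in_fifo=iface.get_in_fifo(auto_flush=False),
        ))

SensorSubtarget here is the hypothetical gateware from the earlier sketch.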
bvernoux1 has quit [Read error: Connection reset by peer]