GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #glasgow
PyroPeter_ has joined #glasgow
PyroPeter has quit [Ping timeout: 276 seconds]
PyroPeter_ is now known as PyroPeter
egg|anbo|egg has joined #glasgow
egg|anbo|egg has quit [Remote host closed the connection]
electronic_eel_ has joined #glasgow
electronic_eel has quit [Ping timeout: 276 seconds]
<d1b2>
<rwhitby> Got the control.fusb302 applet to the point where a sink connection is made, and it responds to the SourceCaps message. Now I need to work out how to handle the INT_N interrupt line (which is connected to A2) in the gateware and make it call my handle_irq() routine in the applet.
<d1b2>
<rwhitby> I have 24ms to detect the interrupt, read the source caps message out of the fusb302 fifo via I2C and then send a request message through the fusb302 fifo via I2C.
<d1b2>
<rwhitby> (if I don't respond in time, the source sends a hard reset as shown above)
<d1b2>
<rwhitby> So, what's the best way in glasgow to turn a falling edge on A2 into execution of a function in python?
<whitequark[m]>
that can be surprisingly complex
<whitequark[m]>
do you need to respond to that falling edge in bounded time?
<d1b2>
<rwhitby> Less than 5ms would be ideal.
<whitequark[m]>
then you should not involve python at all
<whitequark[m]>
you can have it in python and get a response in 5ms... most of the time, on fast machines, while pegging a core
<d1b2>
<rwhitby> The 24ms to respond (sense INT_N edge, read fusb302 fifo over I2C, write fusb302 fifo over I2C) is the hard bound.
<whitequark[m]>
yeah you need a gateware sequencer to have that work reliably
<whitequark[m]>
either a plain FSM, or something slightly more complex
<whitequark[m]>
(the line between "complex FSM" and "simple CPU" is extremely blurry)
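For illustration, a minimal nMigen sketch of the kind of gateware sequencer described here: a registered falling-edge detector on the synchronized INT_N input driving a simple FSM. The module and signal names are invented for this example and are not part of the Glasgow codebase.

```python
from nmigen import Elaboratable, Module, Signal

class IntSequencer(Elaboratable):
    def __init__(self):
        self.int_n = Signal(reset=1)  # INT_N level, already synchronized to the sync domain
        self.start = Signal()         # pulses for one cycle per falling edge
        self.busy  = Signal()

    def elaborate(self, platform):
        m = Module()

        int_n_r = Signal(reset=1)
        falling = Signal()
        m.d.sync += int_n_r.eq(self.int_n)
        m.d.comb += falling.eq(int_n_r & ~self.int_n)  # was high, now low

        with m.FSM():
            with m.State("IDLE"):
                with m.If(falling):
                    m.d.comb += self.start.eq(1)
                    m.next = "RESPOND"
            with m.State("RESPOND"):
                m.d.comb += self.busy.eq(1)
                # Here the sequencer would drive the I2C initiator: read the
                # FUSB302 RX FIFO, then write the Request message, then return
                # to IDLE once the transaction completes.
                m.next = "IDLE"

        return m
```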
<d1b2>
<rwhitby> If my time bound was more like 20ms, would there be a way to do it in python?
<whitequark[m]>
USB communication is in general unbounded
<whitequark[m]>
if you have any hard bound you'd like to reliably meet, it's time to write gateware
<whitequark[m]>
however, if you are OK nondeterministically missing that bound (with it becoming worse on low-end hardware like rpis), then doing it in python is feasible
<d1b2>
<rwhitby> yeah, understood on that. I seem to be getting about 1ms response on I2C transactions at the moment.
<whitequark[m]>
yes, this is probably 99th percentile or something
<whitequark[m]>
(we should add a benchmark quantifying that)
<whitequark[m]>
you will see significant latency spikes well beyond that time
<d1b2>
<rwhitby> for my first pass at this applet, I'd like to do the response in python. then as a second pass later, I'd move it to gateware. I don't have the free time to learn too many things at once 🙂
<d1b2>
<rwhitby> This 24ms response is only for the first request. Once we get to VDMs, the response time requirements are way longer.
<d1b2>
<rwhitby> And for the 1% long response, the hard reset will get everything back to the starting point.
<d1b2>
<rwhitby> I see the DAC alerts have their own pin and USB request type. Is there something like that for applets to access?
<d1b2>
<rwhitby> (for edges on pins)
<whitequark[m]>
no, edges on pins must be done through the generic applet API
<whitequark[m]>
so either a FIFO or an I2C register
<d1b2>
<rwhitby> right, so polling the I2C register is probably the fastest way. Is it architecturally possible to have something alongside the I2C register which is a push from gateware to applet rather than polling?
<whitequark[m]>
FIFO
<d1b2>
<rwhitby> ok, so FIFO is claimed by the I2C subtarget, and I want to keep the other free for the analyzer.
<whitequark[m]>
then you'll have to wait until the FIFO multiplexer is implemented, sorry
<d1b2>
<rwhitby> I2C with interrupt is a common pattern.
<whitequark[m]>
you can't really return out-of-band messages in the kind of protocol the I2C subtarget uses over its FIFO
<whitequark[m]>
there's not enough framing to deterministically detect events
<d1b2>
<rwhitby> ok, I'll poll an I2C register with an asynchronous task in the applet in the meantime and see how that goes.
<whitequark[m]>
sounds good!
<d1b2>
<rwhitby> BTW, thanks for all the glasgow and nmigen stuff - having lots of fun working with this.
<whitequark>
i'm glad you like it :)
<d1b2>
<rwhitby> would the FIFO multiplexer also handle future things like wanting to run I2C on some pins and UART on other pins on the same port?
<whitequark[m]>
yes
<whitequark[m]>
unrelated: i like how you talk on discord and i talk on matrix, and yet the messages are bridged between the services through irc
<rwhitby>
or I can short circuit the IRC path too
<d1b2>
<rwhitby> so how will the FIFO multiplexer handle things like the variable data response sizes in the I2C stream?
<d1b2>
<rwhitby> I was thinking about a routing packet at the start of each applet request/response, but the length is not known up front.
<whitequark[m]>
it would splice the data going over the multiplexed FIFOs into packets pessimistically, i.e. assuming that a packet ends when there's nothing in the buffer, even if the response might still be produced
<d1b2>
<rwhitby> or are you thinking of a "stop" packet that each applet sends at the end of each sequence?
<whitequark[m]>
the specific framing i am thinking of is COBS
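For reference, COBS (Consistent Overhead Byte Stuffing) reserves the byte 0x00 as a frame delimiter by rewriting the payload so it never contains a zero. A simplified Python sketch of the encoder, limited to frames shorter than 254 bytes; this is generic reference code, not anything from the Glasgow tree.

```python
def cobs_encode(data: bytes) -> bytes:
    assert len(data) < 254  # simplified: longer frames need extra group handling
    out = bytearray()
    block = bytearray()
    for byte in data:
        if byte == 0:
            out.append(len(block) + 1)  # offset to the zero this group replaces
            out.extend(block)
            block.clear()
        else:
            block.append(byte)
    out.append(len(block) + 1)          # final group
    out.extend(block)
    out.append(0)                       # the now-unambiguous frame delimiter
    return bytes(out)

# cobs_encode(b"\x11\x22\x00\x33") == b"\x03\x11\x22\x02\x33\x00"
```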
<d1b2>
<rwhitby> is there any way that there could be a NULL entry in the FIFO, which doesn't carry any data, and is ignored by all applets, but wakes up the applet so that it can poll the I2C registers?
<d1b2>
<rwhitby> (to save continuous polling)
<whitequark[m]>
no. the FIFOs are byte oriented
<whitequark[m]>
(USB is not, but the FIFOs being byte oriented is a fundamental part of the design and going away from it will make it much harder to migrate to revE, which will work over TCP)
<sorear>
revE uses one TCP connection per fifo?
<whitequark[m]>
it doesn't exist yet, but that's the plan
<d1b2>
<rwhitby> ok, I understand now. the out_fifo in the i2c initiator just sends ack status and read data and there's no way to insert another value into that stream which is unambiguous.
<whitequark[m]>
think of what happens when you are reading from FIFO
<whitequark[m]>
suppose the I2C responder stretches the clock and asserts an IRQ at the same time
<whitequark[m]>
how do you signal that?
<d1b2>
<rwhitby> yep, gotcha.
<d1b2>
<rwhitby> how does REQ_POLL_ALERT work? Is it only called when the voltage command is used? There's no asynchronous alert reporting?
<whitequark[m]>
there is currently no async alert reporting
<whitequark[m]>
this can be added and exposed to applets
<whitequark[m]>
this would require some work on FX2 side and a significant extension of the framework
<d1b2>
<rwhitby> this is the type of thing I was thinking of. Some way for applet gateware to "wired-or" raise an alert (shared across all applet gateware), and the running applet can register a function called when the alert occurs, upon which it can then read the I2C registers to determine the alert source and take appropriate actions.
<whitequark[m]>
yes this is reasonable and what the alert stuff was initially added for
<whitequark[m]>
but it's unlikely to be implemented very soon
<whitequark[m]>
it's a complex piece of work that cross-cuts the entire stack
<whitequark[m]>
tbh, it's good that you raised it, because it gives me some useful insight into how the framework should eventually look
<whitequark[m]>
you could view glasgow as a bunch of bespoke SoC peripherals bridged over something like etherbone to the host PC
<whitequark[m]>
basically, glasgow is similar to nmigen-soc in many ways
<d1b2>
<rwhitby> is it something that you would theoretically accept a PR for, assuming it met the high standards that such an intrusive change would require?
<d1b2>
<rwhitby> I realise you have no history with me, so would rightly assume you'd get crap by default 🙂
<whitequark[m]>
given the current state of the review queue (both glasgow and nmigen) I'd probably not even get around to reading it much less accepting
<whitequark[m]>
no, I make no such assumptions
<whitequark[m]>
I expect every contributor I haven't seen before to have put in their absolute best effort, and I respond based on that (even if sometimes I fail to express it well)
<whitequark[m]>
well, not really exclusive to contributors I haven't seen before
<whitequark[m]>
it's just that Glasgow (and nMigen too!) grew way faster than I was prepared for, and now I have quite a... what's the social equivalent of technical debt? whatever that is.
<d1b2>
<rwhitby> yep, I know the feeling. I ran a couple of open source projects about 10 years ago (nslu2-linux.org, webos-internals.org) and almost got snowed under. eventually I found the right people to take some of the load, and eventually take over the projects when I could not give them the time they deserved.
<d1b2>
<rwhitby> heh, the former of those doesn't even have a working website any more.
<d1b2>
<rwhitby> But the http://ipkg.nslu2-linux.org/ package feeds are still there for anyone who still runs an NSLU2 device.
<whitequark[m]>
frankly, the two main problems with the Glasgow codebase atm are that (a) it is not clear which standard people are supposed to adhere to, and (b) the existing code i wrote does not even adhere to the standard i myself hold it to
<whitequark[m]>
which are interrelated, of course
<whitequark[m]>
so a few programmers who are very experienced in matching the existing style, semantics, etc contribute just fine, but most people have varying degrees of trouble, and it's not their fault
<whitequark[m]>
that may and will change, just not overnight
<d1b2>
<rwhitby> yes, matching the existing style is the high standard I was talking about. not just a written standard, but an implied standard.
<d1b2>
<rwhitby> there's nothing currently which is a push from gateware through fx2 to device/hardware.py and applet, right?
<d1b2>
<rwhitby> it's all responses to requests
<whitequark[m]>
you can wait on FIFOs
<whitequark[m]>
but that's it
<whitequark[m]>
`await fifo.read()` is not polling anything, it is a push from device
<whitequark[m]>
(well... ok, the USB controller is polling, but that's details. from the gateware and software POV, no polling)
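For illustration, the push style described above looks roughly like this in an applet task; `handle_event` is a placeholder, and the single-byte read is just to keep the sketch short.

```python
async def watch(iface):
    while True:
        # The coroutine suspends here until the device actually delivers data;
        # there is no busy loop on the host side.
        data = await iface.read(1)
        handle_event(data)  # placeholder handler
```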
<d1b2>
<rwhitby> right
<d1b2>
<rwhitby> I need to read up on FX2 more and USB interrupt endpoints to get to a point where I can participate better in a discussion on this.
<whitequark[m]>
the endpoint type is not really important here, I mean, bulk endpoints have better average-case latency than interrupt endpoints, and there's no guaranteed latency when you use libusb at all anyway
<whitequark[m]>
there will be a separate endpoint for alerts simply because it's convenient to do it this way; it doesn't have a lot to do with worst-case latency
<whitequark[m]>
(convenient and idiomatic)
<d1b2>
<rwhitby> so ideally there would be an alert endpoint, and it would be used to support the existing DAC alerts, and potentially be extendable to applet alerts?
<whitequark[m]>
something like that
<d1b2>
<rwhitby> and the IE0 ISR would queue something on that endpoint
<whitequark[m]>
it would poll the I2C register of the FPGA to figure out what applet raised the alert
<whitequark[m]>
or more abstractly, which "IRQ" it was
<whitequark[m]>
the IRQ-applet association only exists at the build stage of the applet framework
<d1b2>
<rwhitby> handle_pending_alert would queue the alert endpoint, not the ISR
<whitequark[m]>
there will be an asyncio task that polls the alert endpoint at all times and dispatches the IRQs to Python-side handlers
<whitequark[m]>
you can think of it as a small async OS kernel
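To make the "small async OS kernel" idea concrete, a hypothetical host-side sketch: one long-running task reads the (not yet existing) alert endpoint and fans asserted IRQ bits out to registered handlers. Every name here is invented for the example.

```python
import asyncio

class AlertDispatcher:
    def __init__(self, read_alert_packet):
        self._read_alert_packet = read_alert_packet  # coroutine returning one raw alert packet
        self._handlers = {}                          # IRQ index -> async handler

    def register(self, irq, handler):
        self._handlers[irq] = handler

    async def run(self):
        while True:
            packet = await self._read_alert_packet()
            # Treat the packet as a bit vector of IRQ lines, LSB first.
            for irq, handler in self._handlers.items():
                if packet[irq // 8] & (1 << (irq % 8)):
                    asyncio.ensure_future(handler(irq))
```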
<d1b2>
<rwhitby> right
<d1b2>
<rwhitby> would an appropriate mapping be to send a byte on the alert endpoint which matches the "IRQ" register address of the applet that raised the alert?
<d1b2>
<rwhitby> (applets that have an IRQ create that register in the build phase)
<whitequark[m]>
no
<whitequark[m]>
all IRQs should be gathered together on the FPGA into one long register so that they can be read in one go
<whitequark[m]>
and reported simultaneously via the USB EP
<whitequark[m]>
it's not really clear what sort of design is necessary to handle level events and edge events
<whitequark[m]>
if you're planning to start implementing this now, probably don't
<d1b2>
<rwhitby> the alert endpoint carries the current value of the long register each time it changes?
<whitequark[m]>
yeah, just carries over all the lines
<whitequark[m]>
the way this could work is, perhaps, something like this...
<whitequark[m]>
when an IRQ line changes from 0 to 1 in the FPGA, it activates nALERT, and deactivates it when the FX2 starts reading out the long register, which captures the current values of every IRQ atomically with the nALERT deactivation
<whitequark[m]>
this avoids race conditions on readouts
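A hypothetical nMigen sketch of that gateware side, under the assumptions above: per-applet IRQ lines are gathered into one vector, any rising line asserts the active-low alert, and the readout captures a snapshot atomically with the deassertion. None of these names exist in the Glasgow codebase.

```python
from nmigen import Elaboratable, Module, Signal

class AlertAggregator(Elaboratable):
    def __init__(self, count):
        self.irq      = Signal(count)   # level IRQ lines from applets
        self.snapshot = Signal(count)   # value latched for FX2 readout
        self.read     = Signal()        # pulses when the FX2 reads the long register
        self.alert_n  = Signal(reset=1) # active-low alert to the FX2

    def elaborate(self, platform):
        m = Module()

        irq_r = Signal(len(self.irq))
        m.d.sync += irq_r.eq(self.irq)
        rising = (self.irq & ~irq_r).any()

        with m.If(self.read):
            m.d.sync += [
                self.snapshot.eq(self.irq),  # captured atomically with deassertion
                self.alert_n.eq(1),
            ]
        with m.Elif(rising):
            m.d.sync += self.alert_n.eq(0)

        return m
```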
<d1b2>
<rwhitby> oh, I'm far away from familiar enough with glasgow internals to start any implementation 🙂
<whitequark[m]>
just making sure you won't be disappointed by a long wait
<whitequark[m]>
happy to discuss the potential implementation anytime
<d1b2>
<rwhitby> I have no expectations of anyone 🙂
<whitequark[m]>
this is as useful to me as to anyone else
d_olex_ has joined #glasgow
d_olex__ has joined #glasgow
<d1b2>
<rwhitby> ah, so the existing DAC interrupt is just the first line in the register, and other applets add other lines, and it goes through the current nALERT pin to the FX2.
<whitequark[m]>
something like that
<whitequark[m]>
the ADC interrupt you mean?
<d1b2>
<rwhitby> yes, sorry ADC.
<whitequark[m]>
yeah so the first byte or a few will probably be dedicated to fixed functions like ADC. and then we have up to 64 bytes that applets can play with
<whitequark[m]>
i'm thinking that a limit of 256 distinct IRQs is a good starting point
<whitequark[m]>
on revE, IRQs would probably use a separate sideband TCP channel with some sort of subscription mechanism
<d1b2>
<rwhitby> what other push sources (in addition to ADC) do you envision in future revs?
<whitequark[m]>
sorear: speaking of revE, it will probably have a built-in gateware multiplexer so you can forward just one TCP link
<whitequark[m]>
rwhitby: none in particular
d_olex has quit [Ping timeout: 246 seconds]
<d1b2>
<rwhitby> the 64 bytes limit being the endpoint packet size?
<whitequark[m]>
the FX2 buffer size for that EP
<d1b2>
<rwhitby> i see.
d_olex_ has quit [Ping timeout: 276 seconds]
<sorear>
whitequark[m]: the gateware would support both multiplexed and non-multiplexed operation?
<d1b2>
<rwhitby> ok, so build process creates fixed hardware function registers, and applets can extend that with applet-specific alert registers, and common gateware joins all that into a single nALERT line which tells the firmware to read the whole register into the new alert endpoint.
<whitequark[m]>
sorear: this is probably unavoidable; both easy firewall traversal and lack of head-of-line blocking are desirable
<whitequark[m]>
rwhitby: ADCs are polled by FX2, not gateware
<whitequark[m]>
(ADCs are considered a safety mechanism and gateware is fully user specified and can be faulty, unlike FX2 firmware in normal usage)
<d1b2>
<rwhitby> ok, the fixed hardware is polled by fixed firmware.
<whitequark[m]>
yes
<d1b2>
<rwhitby> are the contents of the alert endpoint stuffed by the firmware reading gateware alert registers (like it reads ADC registers), or are gateware alert registers hooked directly to the FX2 FIFO inputs?
<whitequark[m]>
aside: only after watching a bunch of videos by an elevator engineer did i understand what i was actually doing with the safety mechanisms on glasgow. they're actually quite similar to the real ones protecting something much more valuable than a bunch of ICs, even though i didn't really understand what i was doing
<d1b2>
<rwhitby> (apologies if my limited understanding is getting concepts wrong here)
<whitequark[m]>
rwhitby: there is no way to connect anything directly to EP1IN/OUT in FX2
<whitequark[m]>
so the firmware is juggling the bits a bit
<whitequark[m]>
this is ok because it has to poll ADCs anyway
<whitequark[m]>
polling the FPGA when nALERT is signaled is the same mechanism extended further
<kmc>
whitequark[m]: which elevator videos? :)
<whitequark[m]>
polling alerts is the main background task in the FX2 firmware so the response time on that is pretty good, although i don't think we have any formal WCET
<whitequark[m]>
a lot of nice insights into safety mechanisms, like how the speed limiter works, how elevator shaft doors are handled, and how one could write a firmware that keeps people safe. the deep dives into Otis GEN2 and the UL stations are a very fascinating contrast; both keep people safe in practice, but one of those things consists of excellent engineering and the other seems a bit like a student's work
<whitequark[m]>
the difference is that the Otis station requires next to no maintenance, and the UL station requires so much maintenance that every one used long enough will be Ship-of-Theseused into something that has no original parts :D
<rwhitby>
whitequark[m]: thx for the discussion, I learnt some stuff :-)
<whitequark[m]>
happy to help
<d1b2>
<rwhitby> I plan to implement async task polling of FPGA register in applet, while still thinking and conceptualising and hopefully discussing an alert push mechanism
<whitequark[m]>
I think the overall design is clear by now but the implementation is tricky
<whitequark[m]>
especially the end user APIs which are very hard to change once implemented
<d1b2>
<rwhitby> the experience gained from the former hopefully getting me to the point where the latter becomes clearer
<whitequark[m]>
actually polling an alert is just an `await alert`, but getting that `alert` object is where it becomes annoying
<whitequark[m]>
because it's not enough to just make it work, it also has to be introspectable and debuggable and participate in the logging and so on
<whitequark[m]>
the FX2 and gateware parts are essentially trivial in comparison
<whitequark[m]>
again with the OS analogy, the exposed Glasgow APIs are like the kernel syscall ABI: we may break it (we're not Linux) but it will cause pain
<d1b2>
<rwhitby> how many "out of tree" applets are you aware of?
<whitequark[m]>
none because there is no out of tree applet support
<whitequark[m]>
but... even the in tree applets are hard enough to change!
<d1b2>
<rwhitby> well, I mean github forks of glasgow that are implementing new applets
<whitequark[m]>
there are applets that are two API versions behind what I think new applets should be using
<whitequark[m]>
dunno, github forks are not something I aim to support
<whitequark[m]>
(other than the ones with active PRs)
<whitequark[m]>
until out-of-tree applets are implemented there are no guarantees whatsoever on applet API stability
<whitequark[m]>
in practice, it won't change too radically because `glasgow script` is a thing
<whitequark[m]>
but I absolutely reserve the right to make cross-cutting changes across all in-tree applets and then remove the old API
<d1b2>
<rwhitby> do people implementing applets usually contact you? or are the applet writers that you weren't aware of until they were completed?
<whitequark[m]>
I expect most people to not contact maintainers because of the real and perceived burden this adds, and because many people just want a quick hack which they do not intend to upstream or support indefinitely
<whitequark[m]>
that's ok
<whitequark[m]>
but it does mean there is certain pressure to get things right "the first time"
<whitequark[m]>
which of course cannot happen, but we can at least try to get them not wildly wrong
<d1b2>
<rwhitby> yeah, it's a fine line between a useful interaction for awareness, and an interaction that causes additional maintenance burden. I hope to always stay on the side of the former.
<whitequark[m]>
I knew exactly what I was getting into when I designed Glasgow in this particular modular way, but it still requires maintenance to happen; so setting boundaries, both personal and for the project itself, is something I take seriously
<whitequark[m]>
in a way: the project knows what it is because it knows what it isn't
<d1b2>
<rwhitby> It's good to have a project leader who knows how to clearly say no. That gave me the clarity to not try and include any Luna stuff on the usbpd board.
<whitequark[m]>
I'm glad that helped!
<whitequark[m]>
I know that if I do not explicitly and clearly restrict the scope, Glasgow will just become unmaintainable
<whitequark[m]>
especially given our very strong commitment to backward compatibility
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #glasgow
d_olex has joined #glasgow
d_olex__ has quit [Ping timeout: 272 seconds]
d_olex has quit [Ping timeout: 240 seconds]
electronic_eel_ has quit [Ping timeout: 246 seconds]
electronic_eel has joined #glasgow
d_olex has joined #glasgow
d_olex has quit [Ping timeout: 246 seconds]
<_whitenotifier-5>
[glasgow] whitequark closed pull request #280: README: use `python3` to run the macOS setup script - https://git.io/JtNW8
<_whitenotifier-5>
[glasgow] whitequark commented on pull request #280: README: use `python3` to run the macOS setup script - https://git.io/JtNQE
<_whitenotifier-5>
[GlasgowEmbedded/glasgow] whitequark pushed 1 commit to master [+0/-0/±1] https://git.io/JtNQu
<_whitenotifier-5>
[GlasgowEmbedded/glasgow] jspencer 41c48bb - README: use `python3` to run the macOS setup script so we don't hit the system Python which is a version 2.
GNUmoon2 has joined #glasgow
GNUmoon has quit [Ping timeout: 268 seconds]
Github[whitequar has joined #glasgow
Github[m] has joined #glasgow
<Github[whitequar>
[GlasgowEmbedded/glasgow] whitequark pushed to wip-i2c-timeout: WIP: firmware: guard I2C transactions with timeouts.
<Github[whitequar>
If we don't add timeouts on I2C transactions, then the USB request
<Github[whitequar>
that caused it (for reads) or the next USB control request (for
<Github[whitequar>
writes) will silently hang forever until the device is power cycled.
<Github[whitequar>
This is clearly undesirable. However, this does not happen at all
<Github[whitequar>
during normal operation, and although it hardens the device, the FX2
<Github[whitequar>
firmware size comes at a premium, and blindly spraying these guard
egg|anbo|egg_ has quit [Ping timeout: 240 seconds]
egg|anbo|egg has quit [Ping timeout: 265 seconds]
<agg>
whitequark[m]: re matrix, how so? I use rust-embedded from the irc side a lot and it generally seems good, I much prefer not having a single bridge user like d1b2
<whitequark[m]>
agg: I meant specifically the GH notification bot, check out the notices it posted recently
<agg>
Oh wow yep my eyes literally glazed over those messages
<agg>
Other downsides to be aware of include it occasionally just not bridging some users from matrix and also if a matrix room is bridged it will kick all matrix users who are "idle" for 30d, with a somewhat unclear or overeager idea of idle
<agg>
On the bright side it does relay edits and sends links to long posts and media
<whitequark[m]>
yeah it's a good bridge
<whitequark[m]>
I'm going to use it a lot more going forward
<agg>
The kicking people thing is annoying for primarily matrix rooms that happen to bridge to irc, doesn't affect just using matrix as your irc client I think
d_olex has joined #glasgow
d_olex_ has quit [Ping timeout: 240 seconds]
d_olex has quit [Ping timeout: 265 seconds]
egg|anbo|egg__ has quit [Remote host closed the connection]
d_olex has joined #glasgow
d_olex has quit [Ping timeout: 246 seconds]
d_olex has joined #glasgow
d_olex has quit [Ping timeout: 256 seconds]
<modwizcode>
I've never actually tried matrix, I use discord but I keep it pretty separate from everything else
electronic_eel has quit [Ping timeout: 276 seconds]
<whitequark[m]>
iirc it's the condition freenode places on bridge operators
d_olex has quit [Ping timeout: 240 seconds]
<fridtjof[m]>
I just nerd sniped myself a bit on this one: so "idle" is apparently "isn't online (determined through matrix's presence API) at the time connection reaping takes place, and the last active time (determined through presence if online, or through the bridge constantly tracking last activity timestamps if not) exceeds the maximum"
<agg>
yea, I understand it's a requirement from Freenode, I think in principle the discord bridge here violates freenode's policies? The problem I have is it seems to kick matrix users who claim they were not actually idle for 30d
<agg>
Hard to be sure of course, maybe they're just using a client that reports status differently or...
d_olex has joined #glasgow
<agg>
The number of complaints I've received has gone down recently but maybe that's just because everyone got kicked
<russss>
fwiw I'm pretty sure that Freenode doesn't impose any inactivity requirements on us (irccloud)
<russss>
although we do automatically disconnect inactive nonpaying users
d_olex has quit [Ping timeout: 264 seconds]
<fridtjof[m]>
The way discord is bridged has way less implications for freenode, as it's basically just a single bot
d_olex has joined #glasgow
d_olex has quit [Ping timeout: 240 seconds]
d_olex has joined #glasgow
<sknebel>
I don't think there was a strict requirement from Freenode to have the idle logic, but one of "please don't flap that many users all the time", and the idle timeout helped matrix to keep that down while their bridge still was/is unstable
<sknebel>
(i.e. I've requested increased connections-per-IP limits for a self-hosted matrix bridge citing this behavior of the matrix.org bridge as the reason we need our own, and Freenode was fine with that)
<russss>
ah true
FFY00 has quit [Ping timeout: 240 seconds]
FFY00 has joined #glasgow
FFY00_ has joined #glasgow
FFY00 has quit [Ping timeout: 276 seconds]
egg|anbo|egg has quit [Remote host closed the connection]
siriusfox has quit [Quit: ZNC 1.7.5+deb4 - https://znc.in]
GNUmoon has joined #glasgow
GNUmoon2 has quit [Ping timeout: 268 seconds]
egg|anbo|egg has joined #glasgow
d_olex_ has joined #glasgow
d_olex has quit [Ping timeout: 276 seconds]
siriusfox has joined #glasgow
<d1b2>
<rwhitby> Still trying to work out how best to poll a device register in an async task in the applet and get close to 5ms response. Is the best I can do an infinite loop with a device.read_register() and asyncio.sleep(0.001) in it? (like _monitor_errors in interface.uart, but with a more aggressive polling period)
<d1b2>
<Attie> probably at the moment, yes...
d_olex_ has quit [Ping timeout: 260 seconds]
<d1b2>
<Attie> you could dynamically adjust that period though, so that it backs waay off once you've passed your critical interaction
<d1b2>
<rwhitby> so that would be a self._polling_period that I can just change elsewhere in the applet code?
<d1b2>
<Attie> note also, that sleeping for 1ms is likely to result in a pause in execution for that task of much more than 1ms...
<d1b2>
<Attie> (and it'll be jittery)
<d1b2>
<Attie> something like that, yeah
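Putting the above together, a sketch of such a polling task, modeled loosely on `_monitor_errors` in `interface.uart`; `device.read_register()` is mentioned above, while the register address, handler, and `_polling_period` attribute are placeholders invented for this example.

```python
import asyncio

async def _monitor_alerts(self):
    while True:
        alerts = await self.device.read_register(self._addr_alert)
        if alerts:
            await self._handle_alert(alerts)
        # e.g. 0.001 during the critical PD negotiation window; the applet can
        # raise self._polling_period afterwards to back way off. Note that the
        # actual pause will overshoot and jitter, as discussed above.
        await asyncio.sleep(self._polling_period)
```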
egg|anbo|egg_ has joined #glasgow
egg|anbo|egg__ has joined #glasgow
egg|anbo|egg___ has quit [Ping timeout: 265 seconds]
egg|anbo|egg has quit [Ping timeout: 264 seconds]
<d1b2>
<rwhitby> so uart does the _monitor_errors in interact(). I want to have the alert monitoring active while in repl too. Do I just start the async task in run()?
<d1b2>
<Attie> i think that should work... but, there is a repl() stage too, which might be the preferred setup point
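A sketch of one way to start that task so it is active for both interact() and repl(); claiming the interface in run() follows the usual applet pattern, but the monitor attributes are the placeholders from the sketch above, and repl() would work as a setup point too.

```python
async def run(self, device, args):
    iface = await device.demultiplexer.claim_interface(self, self.mux_interface, args)
    self.device = device
    # Started in run() so the monitor is alive regardless of whether
    # interact() or repl() follows.
    self._alert_task = asyncio.ensure_future(self._monitor_alerts())
    return iface
```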