<whitequark>
hopefully i can finish the rest of it tomorrow
balrog has quit [Excess Flood]
balrog has joined #m-labs
<sb0>
rjo, what about making the sliders also sensitive to the scroll wheel?
<sb0>
whitequark, why did you remove test_loopback_count?
<sb0>
and yes, we have nist_clock, with a half-populated TTL card
<cr1901_modern>
sb0: How would you decide which scroll wheel to move?
<cr1901_modern>
s/scroll wheel/slider handle/
<sb0>
cr1901_modern, the scroll wheel should change the zoom, not move the sliders
<cr1901_modern>
sb0: What did you mean by this then? (10:51:00 PM) sb0: rjo, what about making the sliders also sensitive to the scroll wheel?
<sb0>
that using the scroll wheel while the mouse is over the sliders should change the zoom
<cr1901_modern>
Ahh, so basically, the axis intercepts scroll wheel events for the sliders.
<sb0>
finally, zoom is working properly. yay!
<sb0>
now scrolling...
<sb0>
cr1901_modern, I guess you can unblock the sliders (i.e. allow min > max), as you can do it anyway via the spinboxes, and rjo wants reverse scans
<sb0>
(they won't be supported in the experiments yet and will throw an error at submission)
<sb0>
also min -> start, max -> end
<cr1901_modern>
sb0: yes, rjo and I came up with a todo list for that. Strictly speaking the widget is usable.
FabM has joined #m-labs
sb0 has quit [Quit: Leaving]
cr1901_modern has quit [Read error: Connection reset by peer]
sb0 has joined #m-labs
<rjo>
whitequark: yeah. why not. i remember playing with that when llvm was first praising it.
<rjo>
whitequark: do we really need to write back attributes that are never changed? that might become expensive quickly.
<rjo>
sb0: i don't think you can do that. the slider will move away from the cursor (that's a HIG no-no) and wheelEvent on the groove should zoom like on the axis IMHO.
<rjo>
sb0: yes. wheelEvent should zoom everywhere, shift-wheel should change the number of points, drag should move the axis, shift-drag should move all scan points (and both sliders). that would be consistent, useful and unsurprising behavior.
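a minimal sketch of how such wheel routing could look in a Qt widget (assuming PyQt5; ScanWidget, _zoom, _set_num_points and _num_points are illustrative names, not the actual scanwidget API):

    from PyQt5 import QtCore, QtWidgets

    class ScanWidget(QtWidgets.QWidget):
        def wheelEvent(self, ev):
            up = ev.angleDelta().y() > 0
            if ev.modifiers() & QtCore.Qt.ShiftModifier:
                # shift-wheel: change the number of scan points
                self._set_num_points(self._num_points + (1 if up else -1))
            else:
                # plain wheel: zoom the underlying axis, anchored at the cursor
                self._zoom(1.2 if up else 1/1.2, ev.x())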
<rjo>
cr1901_modern: but the drag and the number-of-points visualization should still land soon.
<sb0>
rjo, your break_realtime() in pdq2_simple.py isn't due to the compiler, but to the smooth handover
<sb0>
i.e. without it, the system considers all invocations of one() to be on the same timeline
<sb0>
same in transport.py
<rjo>
sb0: i know. but smooth handover is not the cause.
<sb0>
what's happening then?
<rjo>
the toolchain can't generate the waveform, upload it, compile the kernel and upload that in time. i would not hold smooth handover responsible for that.
<sb0>
without break_realtime and with smooth handover, it has only 1ms for that
<sb0>
is it still taking 10s?
<rjo>
yes
sj_mackenzie has joined #m-labs
<rjo>
(posting this so that it is logged somewhere and i can find my notes): ppp on pipistrello works nicely. two wrinkles: if the uart speed is cranked up to 921600 Baud, the uart rx ringbuffer and the ppp/tcp stack can't keep up. lots of lost segments, retransmissions and/or duplicate acks. increasing the rx ringbuffer to 2048 helps a lot (but the uart is not easily reconfigurable and the code as it is written has to fit the sram of the bios as well)
<rjo>
sb0: i could add a delay() and still have smooth handover. but anyway it is a technicality. and experiments will likely do break_realtime() if they have pdq2 or other slow devices in the scan.
<rjo>
they will have to decide whether to break_realtime() and get jitter or whether to delay() and risk RTIOSequenceError
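a minimal sketch of that choice inside a scan-point kernel (assuming the artiq.experiment import path; device names and the delay value are illustrative):

    from artiq.experiment import *

    class ScanPoint(EnvExperiment):
        def build(self):
            self.setattr_device("core")
            self.setattr_device("ttl0")

        @kernel
        def one(self):
            # break_realtime(): forget the previous timeline, absorb slow
            # host-side preparation, accept timing jitter between points
            self.core.break_realtime()
            # alternative: delay(10*ms) keeps all points on one timeline
            # but risks RTIOUnderflow if preparation takes longer than that
            self.ttl0.pulse(2*us)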
cr1901_modern has joined #m-labs
FabM has quit [Remote host closed the connection]
FabM has joined #m-labs
sj_mackenzie has quit [Ping timeout: 240 seconds]
evilspirit has joined #m-labs
<sb0>
SequenceError? you mean underflow?
siruf has quit [Quit: leaving]
siruf has joined #m-labs
<whitequark>
sb0: re test_loopback_count: it was using ttl_inout
<whitequark>
and I run it like this: bugpoint -mlimit=800 -compile-custom -compile-command ../crashwrapper.sh test1.ll
<whitequark>
well, first you need to get the .ll file; ARTIQ_DUMP_LLVM=test1 artiq_run ... will do it.
<whitequark>
it's rough around the edges; it uses an objectmap filled with stubs. ideally objectmap would be serialized in artiq_run x.py and restored in artiq_run x.ll.
<whitequark>
rjo: re attribute writeback: that's what I originally suggested to sb0. however, marking objects as modified turned out to be quite expensive, since there's no way to avoid that within loops
<whitequark>
oh, and there was the second problem, namely lists
<whitequark>
rjo: in principle this can be done cheaply wrt CPU time by serializing everything twice and comparing at writeback time, trading off coredevice RAM versus writeback time
<sb0>
whitequark, RAM is cheap (the KC705 can even take PC RAM modules), core device CPU time is not
<sb0>
it's a regular DDR3 SODIMM on that board
<sb0>
and it comes with 1GB
<whitequark>
ok. add an issue assigned to me if you (or rjo) think this should be done.
<sb0>
wouldn't the comparison be slow, though?
<sb0>
but probably still faster than sending everything...
<whitequark>
well, you have to compare the entire .data section.
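a rough host-side Python analogue of the serialize-twice idea (illustrative only, not ARTIQ's actual object map or wire format):

    import pickle

    class WritebackCache:
        # snapshot every host object before the kernel runs; afterwards,
        # write back only the objects whose serialized form changed,
        # trading extra RAM for less attribute writeback traffic
        def __init__(self):
            self._snapshots = {}

        def snapshot(self, objects):
            for key, obj in objects.items():
                self._snapshots[key] = pickle.dumps(obj)

        def changed(self, objects):
            return [key for key, obj in objects.items()
                    if pickle.dumps(obj) != self._snapshots.get(key)]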
<sb0>
rjo, when you zoom on a webpage (shift + wheel), whatever link is under the cursor also moves away from it
<sb0>
and that's fine IMO
<whitequark>
hmm, it's commonplace to have zoom that hones in on the point under cursor
<whitequark>
e.g. image editors and CAD programs all have it
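the usual cursor-anchored zoom arithmetic, as a sketch (names are illustrative):

    def zoom_at(view_left, view_right, cursor, factor):
        # rescale a 1D view range by `factor` while keeping the data
        # coordinate `cursor` fixed under the mouse
        new_left = cursor - (cursor - view_left) / factor
        new_right = cursor + (view_right - cursor) / factor
        return new_left, new_right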
<sb0>
oh actually, with the scanwidget, the slider will stay under the cursor
<sb0>
so there is no problem at all
<sb0>
well it can move away from it a little bit, if you're a bit off
<whitequark>
wow, I just completely locked up my system using flterm
<whitequark>
I ran it as flterm --kernel /dev/zero.
<rjo>
sb0: but the sliders (the handles) can't be the target of the zoom. the entire underlying coordinate system can be. or the groove.
<rjo>
sb0: and people are used to wheelEvent on slider widgets -> move the handle. zooming instead is a surprise.
<rjo>
since the expected behavior (move the slider on wheel) is not possible when there are two sliders, it might be best to have no reaction at all to wheelEvents on the double-slider widget.
<rjo>
sb0: yes. underflow.
<rjo>
whitequark: ack. will need to dig it out when i need it.
FabM has quit [Remote host closed the connection]
key2 has joined #m-labs
<GitHub83>
artiq/master 8e77e56 whitequark: test: bring back test_loopback_count (fixes #295).
<GitHub83>
[artiq] whitequark pushed 1 new commit to master: https://git.io/v2RI8
<whitequark>
rjo: I cannot run your pdq2 mediator example.
<whitequark>
it crashes in self.setattr_device("electrodes")
<whitequark>
with Connection refused
<rjo>
whitequark: you need to run the pdq2 controllers referenced in the electrodes device_db.pyon entry.
<whitequark>
how do I do that?
<rjo>
you can actually skip that. just clear "pdq2_devices". it should work fine.
<rjo>
otherwise artiq_ctlmgr or manual starting of controllers.
<whitequark>
that worked
<whitequark>
(clearing)
<whitequark>
rjo: ok, yes, I figured out the problem with transport.py..
<whitequark>
# FIXME: We perform exhaustive checks of every known host object every
<whitequark>
# time an attribute access is visited, which is potentially quadratic.
<whitequark>
this blew up earlier than I expected.
<GitHub20>
[artiq] jordens created scanwidget (+2 new commits): https://git.io/v2Rlv
<GitHub20>
artiq/scanwidget 3ed8288 Robert Jordens: scanwidget: add from current git
<GitHub20>
artiq/scanwidget 485fc3b Robert Jordens: gui: use scanwidget
<whitequark>
rjo: #276 fixed.
<GitHub63>
[artiq] whitequark pushed 3 new commits to master: https://git.io/v2RR7
<GitHub123>
[artiq] whitequark pushed 1 new commit to master: https://git.io/v2R0A
<GitHub123>
artiq/master 6bd16e4 whitequark: Commit missing parts of 919a49b6.
<rjo>
hmm. that is an order of magnitude slower than what i would have wanted/hoped for. pipelining doesn't even help in these cases as each scan point (one kernel) is usually (and here especially) quite a bit less than one second.
<whitequark>
llvmlite ¯\_(ツ)_/¯
<whitequark>
note: we can very easily and cheaply remove the cost of re-parsing.
<whitequark>
so we can get it down to 150ms of ARTIQ code plus 900ms of llvmlite+LLVM
<rjo>
maybe there is also a lot of code generated that llvmlite/llvm need to optimize away. maybe we could look into reducing the amount of code generated.
<whitequark>
yes, there is a lot of code generated. that's how you're supposed to use llvm.
<whitequark>
substantially reducing the amount of generated code will take more time than rewriting llvmlite to use llvm properly
<whitequark>
as for the time spent inside llvm itself... so far it's insignificant, but if/when it becomes significant, the first thing to do is to write a custom pass pipeline tailored for ARTIQ
<whitequark>
ARTIQ generates code quite unlike C compilers, and correctly ordering passes (e.g. running instcombine and SROA first) can reduce the amount of time wasted in the following ones by like an order of magnitude
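a sketch of such a hand-ordered pipeline through llvmlite's binding layer; the add_*_pass names are assumptions about what the installed llvmlite exposes, not verified:

    import llvmlite.binding as llvm

    llvm.initialize()
    llvm.initialize_native_target()
    llvm.initialize_native_asmprinter()

    def run_custom_pipeline(module):
        pm = llvm.ModulePassManager()
        # run cheap cleanup first so later passes see much less IR
        pm.add_sroa_pass()                   # assumed binding name
        pm.add_instruction_combining_pass()  # assumed binding name
        pm.add_cfg_simplification_pass()     # assumed binding name
        pmb = llvm.PassManagerBuilder()
        pmb.opt_level = 2
        pmb.populate(pm)                     # append the stock -O2 pipeline
        pm.run(module)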
<GitHub41>
[artiq] whitequark pushed 1 new commit to master: https://git.io/v2Rz2
<GitHub41>
artiq/master 82a8e81 whitequark: transforms.llvm_ir_generator: use private linkage instead of internal....