sb0 changed the topic of #m-labs to: ARTIQ, Migen, MiSoC, Mixxeo & other M-Labs projects :: fka #milkymist :: Logs http://irclog.whitequark.org/m-labs
fengling_ has joined #m-labs
fengling_ has quit [Ping timeout: 250 seconds]
<ccube> hi! :) can anyone explain to me, what the LiteUSB module can be used for?
sh4rm4 has quit [Remote host closed the connection]
sh4rm4 has joined #m-labs
mumptai_ has quit [Ping timeout: 264 seconds]
kristianpaul has quit [Ping timeout: 248 seconds]
kristianpaul has joined #m-labs
kristianpaul has quit [Ping timeout: 256 seconds]
mumptai_ has joined #m-labs
fengling_ has joined #m-labs
sh[4]rm4 has joined #m-labs
sh4rm4 has quit [Ping timeout: 265 seconds]
fengling_ has quit [Ping timeout: 265 seconds]
<rjo> whitequark: () is not a syntax error.
<rjo> i gotta say that i have rarely ever found python syntax errors to be overly terse or cryptic. the syntax is very simple after all and spotting and fixing the problem takes much less time than even reading the error message.
fengling_ has joined #m-labs
sh4rm4 has joined #m-labs
sh[4]rm4 has quit [Ping timeout: 265 seconds]
antgreen` has quit [Ping timeout: 248 seconds]
kristianpaul has joined #m-labs
kristianpaul has quit [Client Quit]
kristianpaul has joined #m-labs
kristianpaul has joined #m-labs
<GitHub184> [artiq] jordens pushed 9 new commits to master: http://git.io/vendI
<GitHub184> artiq/master 16ff190 Robert Jordens: pdq2: cleanup unittest
<GitHub184> artiq/master d165358 Robert Jordens: pdq2: spelling fix
<GitHub184> artiq/master e50661d Robert Jordens: pipistrello: fix dcm parameters, move leds, fix names
<GitHub90> [artiq] jordens pushed 1 new commit to master: http://git.io/vendM
<GitHub90> artiq/master ef375b5 Robert Jordens: pipistrello: add double-cpu
<sb0> rjo, you set the photon_histogram permissions to 755 but it does not have a shebang line
travis-ci has joined #m-labs
<travis-ci> m-labs/artiq#84 (master - c98e24a : Robert Jordens): The build was broken.
travis-ci has left #m-labs [#m-labs]
fengling_ has quit [Ping timeout: 264 seconds]
<sb0> rjo, also I wouldn't support executing experiments directly, as doing it consistently adds boilerplate to every file
fengling_ has joined #m-labs
travis-ci has joined #m-labs
<travis-ci> m-labs/artiq#85 (master - ef375b5 : Robert Jordens): The build was broken.
travis-ci has left #m-labs [#m-labs]
<GitHub30> [artiq] jordens pushed 1 new commit to master: http://git.io/venbF
<GitHub30> artiq/master fbedb7c Robert Jordens: photon_histogram: remove +x permissions, add units to parameter defs
<rjo> sb0: i would not make all experiments executable with this. but this is a convenient thing where applicable.
fengling_ has quit [Quit: WeeChat 1.0]
<GitHub97> [artiq] jordens pushed 1 new commit to master: http://git.io/venNW
<GitHub97> artiq/master 0ec7e9a Robert Jordens: artiq_run: fix get_argparser()
travis-ci has joined #m-labs
<travis-ci> m-labs/artiq#86 (master - fbedb7c : Robert Jordens): The build is still failing.
travis-ci has left #m-labs [#m-labs]
<rjo> ysionneau: are you paying for bandwidth? ;)
travis-ci has joined #m-labs
<travis-ci> m-labs/artiq#87 (master - 0ec7e9a : Robert Jordens): The build was fixed.
travis-ci has left #m-labs [#m-labs]
<rjo> sb0: that pattern in worker_impl is broken afaict. if you get an exception during that inner close_devices(), you will try to close the devices again and the cleanup action will fail while it doesn't need to.
<sb0> well I don't expect the device closes to fail very often
<sb0> it's usually opening them which is problematic. and if a close exception does happen, having it twice isn't a severe bug.
<rjo> but it won't close the other devices once it has failed on one.
<rjo> conceptually there should be something that does cleanup when an experiment fails. DBHub sounds like the right place.
<rjo> if not, i could have just removed the try: finally:
<rjo> re pdq2: silence should not be inferred at the driver level.
<rjo> but we should move it up to the same level as "bias" and "dds" since it applies to the entire channel.
<sb0> what's wrong with doing it at the driver level?
<rjo> there might be glitches, differences in crosstalk between the channels
<rjo> distortion if you turn it on again.
<sb0> the logic is simple: if all coefficients are zero, then set silence to True
<sb0> distortion? why?
<rjo> dac temperature will change a lot.
<rjo> and having that cropping up as a side effect of continuously lowering the scale of a signal is not nice. once you hit 0, your dac cools down a lot, the offset might drift. could even affect the resistors in the proximity.
sh[4]rm4 has joined #m-labs
sh4rm4 has quit [Ping timeout: 265 seconds]
<rjo> sb0: for bitstream naming, we are going to have "target_subtarget_platform" now?
<rjo> that would then be concise: artiq_up_ppro artiq_amp_pipistrello artiq_amp_kc705.
<sb0> the problem is the current artiq targets already contain the platform name
sj_mackenzie has joined #m-labs
<sb0> if we do target_subtarget yes you'd get those names
<sb0> or almost, e.g. artiq_ppro_up
<rjo> but misoc appends the platform again.
<sb0> the current naming scheme is subtarget_platform
<sb0> the idea was that 1) subtargets would have specific names that likely won't conflict 2) having the platform name in the bitstream helps ensure you are not loading the wrong one
<sb0> #1 doesn't hold for artiq
<rjo> yes. but 2) is worthwhile keeping.
<rjo> would there be a problem with changing misoc to do target_subtarget_platform (modulename_classname_platformname)
<sb0> no, just that the artiq bitstreams will be called artiq_kc705_amp_kc705
<sb0> or we can rename the artiq targets to artiq_mini, artiq_monster, etc.
<rjo> we can have target=artiq and subtargets up and amp
<sb0> yes
<sb0> but that target will need to deal with a bunch of different platforms, likely that won't scale
<sb0> but it might be good for the next few years
<rjo> then the namespacing can be done with imports for it.
<sb0> note that factory functions are generally a bad idea, as they break inheritance
<rjo> ack. we can just go for the present thing and keep subtarget_platform. might be simpler. we would still get artiq_up_ppro which is readable.
<sb0> rjo, regarding device closes. we can suppress exceptions in the body of this loop https://github.com/m-labs/artiq/blob/master/artiq/master/worker_db.py#L111
<sb0> and log a warning if an exception happens
<sb0> then unless the program is really messed up, DBHub.close_devices will not raise exceptions
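The per-device exception handling sb0 describes could look like this minimal sketch (function name and structure are illustrative, not the actual DBHub code):

```python
import logging

logger = logging.getLogger(__name__)

def close_devices(devices):
    """Close every device; a failure to close one device is logged as a
    warning and must not prevent closing the others (sketch of the idea
    discussed above, not the actual DBHub implementation)."""
    for name, dev in devices.items():
        try:
            close = getattr(dev, "close", None)
            if close is not None:
                close()
        except Exception:
            logger.warning("failed to close device %r", name, exc_info=True)
```

With this, one broken close() still lets every other device be closed, and the failure is visible in the log instead of being silently swallowed.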
<rjo> it is still a bit asymmetric. the opening is all triggered somewhere else but the closing is done centrally in DBHub.
<sb0> what's wrong with that?
<rjo> we might want to transfer this cleanup responsibility into Experiment.run()
<rjo> the open() is triggered outside DBHub but the clos()ing is done within.
<sb0> this adds boilerplate
<sb0> as you'd typically need to open and close the core device manually
<sb0> also this breaks @kernel def run
<rjo> could be moved into DBKeys.
<rjo> that is where the opening is triggered.
<sb0> you also have to think that subexperiments might also use the same device
<rjo> if DBKeys has the responsibility of opening stuff it should also be able to clean up after itself.
<sb0> and you don't want to keep opening and closing it
<rjo> yes. that's why you do the memoizing.
<sb0> yes
<rjo> but then you are doing a slightly weird garbage collector in close_devices()...
<sb0> DBKeys doesn't request the devices, AutoDB does
<sb0> but again, every subexperiment is an AutoDB instance, so closing in AutoDB is problematic
<rjo> yep. lets think forward. with Scheduler.suspend() you also have to close devices, right?
<sb0> during yields?
<rjo> yes
<sb0> no, you don't have to. opening a device multiple times isn't problematic, other than wasting resources
<rjo> (we can't call it yield though)
<rjo> ok. then why is Worker closing devices after every experiment?
<sb0> right now it's problematic in comm_serial, but I still need to make it open the serial port only while a kernel is running - which I plan to do during the refactoring that I need to share code with the future comm_tcp
<rjo> ok. if this is only for comm_serial, then that close_devices() call inside the loop will disappear.
<sb0> 1) so it'll bomb right away if the user tries to access a device in analyze() 2) it's a slight optimization
<rjo> do we need to prevent 1) here? isn't the same problem in build()/__init__()?
<rjo> optimization for memory?
<sb0> yes, things like closing tcp connections earlier
<sb0> yes there's the same problem in build/__init__
<sb0> and it's an issue right now actually, because __init__ for rtio runs kernels that set OE
<rjo> it is an issue for the scheduler three-step pipeline, right?
<sb0> yes
<rjo> but i don't see how we can prevent the user from doing that.
<rjo> we just need to tell them that they can use a device only in one stage across all experiments.
<sb0> it sounds difficult to prevent it in build/__init__, yes
<rjo> and if they suspend(), they need to figure it out as well.
<rjo> but it would be just fine if they only ever use a device in analyze() and we don't need to prevent that at all.
<sb0> alternatively, we can allow hardware access in build/__init__, and limit the first pipeline stage to process creation and imports
<sb0> if no other experiment uses it, yes...
<rjo> yes. i would leave that to the user.
<sb0> ok, so we remove that extra close_devices?
<rjo> yep.
<rjo> once that comm_serial is fixed, right?
<sb0> it's not an issue right now as there is no pipelining yet
<sb0> my plan is dual-CPU with ethernet, then RTIO-bus, then pipelining
<rjo> in the longer run, we could enforce this single-pipeline-stage use with DBHub/AutoDB returning wrapped devices based on the pipeline stage and the device would keep the last used stage and bomb if it changes.
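rjo's wrapped-device idea could be sketched like this (class and parameter names are hypothetical, just illustrating the enforcement):

```python
class StageGuard:
    """Proxy a device, record the pipeline stage of first use, and raise
    if a later access comes from a different stage (illustrative sketch)."""
    def __init__(self, device, get_stage):
        self._device = device
        self._get_stage = get_stage  # callable returning the current stage name
        self._stage = None           # stage that first used this device

    def __getattr__(self, name):
        stage = self._get_stage()
        if self._stage is None:
            self._stage = stage
        elif stage != self._stage:
            raise RuntimeError(
                "device already used in stage %r, now accessed from %r"
                % (self._stage, stage))
        return getattr(self._device, name)
```

DBHub/AutoDB would hand out guards instead of raw devices; the first attribute access pins the stage, and any later access from another stage "bombs" as described.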
<rjo> ok. that should play nicely with the delays in the GUI specification.
<rjo> ;)
<rjo> at least i worked out a reasonable way to generate/manage/keep/store/distribute the UI state with Ting Rei today.
<rjo> we have precedence rules for how the state is retrieved/generated and we will want an sdb "state db" or "uidb". he is writing it down.
<sb0> rjo, the main problem right now re. device access in build/__init__ are the OE-kernels from the rtio driver
<rjo> ok. about the target refactoring: are you doing that? then i will focus on the wavesynth/pdq2 stuff.
<sb0> ok, I can do that
<sb0> can you implement silence in compute_samples? :)
<rjo> ack. will do.
<sb0> rjo, I think we can just limit the first pipeline stage to process creation and imports. my impression is this is what takes the most time anyway.
<rjo> btw: how robust is the comm_serial resynchronization supposed to be right now? if you accidentally talk to the uart or if you reboot the soc while the thing is running?
<rjo> sb0: na. i want to use it for expensive computation of transport waveforms.
<rjo> i need build().
<sb0> it breaks. I think most devices will use ethernet anyway...
<sb0> hmm, so we need hwinit() I guess
<rjo> sure. but would a robust resynchronization not be doable? both sides answering crap with magic until they find each other?
<rjo> hwinit() for the oe kernels?
<sb0> maybe
<sb0> yes
<rjo> you mean build(), hwinit(), run(), analyze()?
<sb0> yes
<rjo> lemme look at the code.
<rjo> btw. is the counter 63 bit wide for wrap around handling?
<sb0> no, that's because 'now' is signed, and the sign bit doesn't make sense
<sb0> it'd probably work using the sign bit though, as you are only supposed to do additions and subtractions on 'now' ...
<rjo> ok. smells like hwinit() (or arm() ?). but there is no infrastructure to walk the DBKeys/Experiment hierarchy and call all hwinit()s, right?
<sb0> no, there isn't
<rjo> ok. with 64 bits there is also no risk of a wrap around. 600 years should be enough.
<rjo> ... but that is probably the same statement that the inventors of the unix epoch and others made
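The wraparound horizon is easy to check; assuming a 1 ns counter resolution (an assumption — the actual RTIO clock period may differ):

```python
# rough wraparound horizon of the timestamp counter at 1 ns resolution
SECONDS_PER_YEAR = 365.25 * 24 * 3600
years_64bit = 2**64 * 1e-9 / SECONDS_PER_YEAR  # roughly 584 years
years_63bit = 2**63 * 1e-9 / SECONDS_PER_YEAR  # roughly 292 years, with the sign bit unused
```

So the "600 years" figure corresponds to the full 64-bit range at nanosecond resolution; leaving the sign bit out of 'now' halves it.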
<rjo> re set_oe: we can either 1) do set_oe on every _set_value() and with a bit of state let the compiler cancel them out. or 2) assume that there is a setup experiment where this (and dds setup, input setup) is done explicitly.
<rjo> that could well be the default experiment.
<sb0> the one in flash?
<rjo> hmm. the fact that the counter is so wide is nice. we could have it in actual TAI for the infinitely-long running experiments.
<rjo> yes.
<sb0> temps atomique international?
<rjo> yep. that TAI counter would play nicely with many things. whiterabbit (i hope they use tai) post-mortems with drtio...
<rjo> oh by the way. i noticed a few localtime()s in there. that is a bit dangerous with daylight savings time, leap seconds, people's laptops in different timezones etc.
<rjo> we should strive to keep all representations of time in tai (or at least utc).
<sb0> are they used for anything else than printing the date to the user?
<rjo> start_time in worker for example is saved in the hdf5
<rjo> but in the scheduler it looks ok and the list-syncher/gui stuff is only for showing. thats fine.
<rjo> ah. it's the file name.
<rjo> that is fine as well. but time() should go into the hdf5 as well and that should be used if people want to locate an experiment in time in their scripts.
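A minimal sketch of what rjo is suggesting: store the raw epoch timestamp (timezone-independent) and derive any human-readable form explicitly in UTC (variable names are illustrative):

```python
import time
import datetime

# store the raw epoch timestamp in the hdf5, not a localtime()-formatted
# string: it is unaffected by DST, leap-second tables and laptop timezones
start_time = time.time()

# for display, derive a human-readable form explicitly in UTC
stamp = datetime.datetime.fromtimestamp(start_time, datetime.timezone.utc)
```

Scripts that want to locate an experiment in time then compare floats instead of parsing localized date strings.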
<rjo> and do you want to remove the context manager feature in DBHub if you remove the close_devices()?
sh4rm4 has joined #m-labs
<sb0> most things in artiq don't use context managers (because they don't play nice with asyncio)... so I'd remove it, yes.
sh[4]rm4 has quit [Ping timeout: 265 seconds]
<sb0> also, it makes it clearer what happens during the cleanup
sh4rm4 has quit [Remote host closed the connection]
sh4rm4 has joined #m-labs
<sb0> do we also turn close exceptions into warnings in the log on a per-device basis?
sh[4]rm4 has joined #m-labs
<rjo> ok. sounds good.
<rjo> i wouldn't. i would only ignore exceptions during the close_devices() cleanup. they can be info(). everything else can stay an exception.
sh4rm4 has quit [Ping timeout: 265 seconds]
<sb0> I'm talking about turning exceptions into warnings in close_devices
<sb0> by 'on a per-device basis' I meant it keeps closing devices even when an exception occurs
<sb0> (closing the other devices)
<rjo> if we only close device on worker termination?
<sb0> yes
<rjo> i would expect to have < 50 tcp connections open for that.
<rjo> that should not be a huge memory problem.
<rjo> then the exceptions can be silent in close_devices if it is only used at worker termination.
<rjo> that is info()
<sb0> they should still be warning
<sb0> otherwise, people may write grossly broken close device functions and that will be silently ignored
<rjo> ok. but yes. continue closing devices if one fails.
sh[4]rm4 has quit [Ping timeout: 265 seconds]
sh4rm4 has joined #m-labs
<rjo> why can't a line be both jump and wait_trigger?
<rjo> sb0: ^
sh4rm4 has quit [Remote host closed the connection]
sh4rm4 has joined #m-labs
<sb0> rjo, I don't see where this would be useful and that's almost certainly an error
<sb0> it waits for a trigger at the end of each line with wait_trigger = True *and* before exiting the jump table
<sb0> so what that would mean is: wait for trigger high, read jump table, wait for trigger high (again)
<rjo> wait_trigger triggers the completion of a line and the reading of the next (or the jumping through the table).
<rjo> if you "trigger" the first line after jumping through the table, you have already jumped.
<sb0> I prefer that you always wait for a trigger before exiting the jump table, because it points out that the select lines must be valid then
<sb0> also you don't want the pdq to start playing right after programming
<rjo> but we can wait for a trigger before entering the table.
<rjo> programming is interlocked with cmd("START", False); program(); cmd("START", True)
<sb0> before/after entering the table should be just a matter of a few clock cycles :)
<rjo> yes.
<sb0> and it still doesn't make sense to have wait_trigger + jump in a line
<rjo> the features are there just different semantics
<rjo> why?
<rjo> this is exactly what you want. pdq2 waits, gets trigger, jumps through table, does stuff.
<rjo> what you don't want is a jump before the end of a frame. that is unreachable lines.
<sb0> because you are already waiting for a trigger before reading the jump table
<rjo> that is the trigger.
<sb0> and that trigger requires valid select lines, unlike the end-of-line trigger
<sb0> so jump implies a different type of trigger
<rjo> implies a different action after this line. and "after this line" means: once the duration has passed and the trigger is high.
<GitHub177> [migen] sbourdeauducq pushed 1 new commit to master: http://git.io/vecYg
<GitHub177> migen/master f26ad97 Robert Jordens: decorators: fix ControlInserter
<sb0> there are two types of trigger, sharing a single physical line.
<sb0> that sharing doesn't have to propagate to the upper level layers
<sb0> maybe "jump" can be renamed "end_of_frame"
<rjo> yes. "trigger" and "wait_trigger" (formerly known as "wait")
<sb0> no, one that selects a frame, and another one that plays the next segment
<rjo> ok. what we can do is always bundle eof wait (jump+wait_trigger) and otherwise use the beginning-trigger.
<rjo> it doesn't select a frame, it just advances the state machine; the state machine then selects a new frame because jump was also given. and depending on whether the first line of the new frame has "trigger", it might wait again before executing that line.
<sb0> there is only wait-at-end-of-things in my implementation
<sb0> when you are at the jump table, trigger does select a frame
<sb0> (and this requires the select signals to be valid)
<rjo> the frame-select lines select the frame.
<rjo> wait_trigger can trigger the selection.
<sb0> i'd make the frame select lines synchronous to trigger
<sb0> supporting frame select without trigger just makes things more complicated and does not add features
<rjo> what do you mean by synchronous here? zero setup time wrt trigger?
<sb0> I mean selecting a frame requires: 1) being in the "at the jump table" state 2) sending valid select 3) bringing trigger high
<rjo> yes. that's why we can bundle jump with wait_trigger always and use "trigger" otherwise.
<sb0> in my opinion, jump always implies a different type of trigger
<rjo> i am saying the same: 1) having a frame that will cause a jump and wait for trigger before doing so 2) set selects 3) trigger
<sb0> that 1) requires select lines 2) prevents immediate playback right after programming
<sb0> you already need a trigger at all times to leave the "at the jump table" state (otherwise it'll start playing right after programming)
<rjo> yes (if i understand you correctly). is that a problem? we generally want to always leave the pdq2 in some wait_for_trigger state. already for synchronization.
<sb0> no, that's not a problem. but adding another trigger at the end of a line that goes to the jump table is redundant.
<rjo> in the gateware not really because there are free bits in the header. in the driver depending on the application. in wavesynth yes.
<rjo> that is why i would like to hide wait_trigger and have it injected automatically for each jump.
sh4rm4 has quit [Remote host closed the connection]
sh4rm4 has joined #m-labs
sh4rm4 has quit [Remote host closed the connection]
<sb0> so maybe we should not have two booleans
<sb0> but a string: "next"/"wait_trigger"/"jump"
sh4rm4 has joined #m-labs
<rjo> you still need the trigger to step between subsequent segments of the same frame.
<rjo> those are only mutually exclusive if the segment is longer than one line.
<sb0> yes, set that string to "wait_trigger" on the last line of the segment, and to "next" within a segment
<sb0> "next" could also be called "continue"
<sb0> the string tells the pdq what to do after the line playback. it could be called "after"
<rjo> if you have a frame with two segments, each containing one line, it would be: [[["trigger"]], [["trigger", "eof"]]] which would get converted to [[[]], [["trigger", "wait", "jump"]]] (in pdq2 gateware terms).
<rjo> and "eof" can actually be implicit.
sh4rm4 has quit [Ping timeout: 265 seconds]
<rjo> let's implement it with a single boolean ("trigger"). then we can see whether we need anything else.
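One way the single per-line boolean could be lowered to the gateware's end-of-line flags, treating the frame as a flat list of per-line trigger booleans for simplicity (flag names and structure are hypothetical sketches, not the actual pdq2 encoding):

```python
def to_gateware_flags(triggers):
    """triggers[i] is True if line i should wait for a trigger before
    playing. In the gateware the waiting happens at the end of the
    *previous* line, and the frame always ends by waiting at the jump
    table, so wait_trigger is always bundled with jump."""
    flags = [set() for _ in triggers]
    for i, trig in enumerate(triggers):
        # a trigger on the first line is implied by leaving the jump table
        if trig and i > 0:
            flags[i - 1].add("wait")
    flags[-1] |= {"wait", "jump"}  # always bundle wait_trigger with jump
    return flags
```

This keeps the user-visible API to a single "trigger" boolean while the driver injects wait/jump where the end-of-line semantics require them.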
<sb0> ok
<sb0> i'll still argue that the last line in the frame doesn't need trigger ;)
<rjo> what if the frame consists of one segment with a single line? when do you trigger?
sh4rm4 has joined #m-labs
sh4rm4 has quit [Remote host closed the connection]
sh4rm4 has joined #m-labs
<sb0> you send one trigger pulse to leave the jump table and play back that segment/line, and then it automatically goes back to the jump table and waits for trigger high again (you must release trigger during playback)
sh4rm4 has quit [Ping timeout: 265 seconds]
sh4rm4 has joined #m-labs
<rjo> i don't see how claiming "last line doesn't need trigger" goes along with ".. waits for trigger high again" ;)
<rjo> but i suspect we are talking about the same thing.
sh4rm4 has quit [Ping timeout: 265 seconds]
<rjo> i'll just change the semantics of the "program" to mean trigger in the usual sense, that is "before something happens, trigger needs to be high".
<sb0> then you have the same problem with waiting for a trigger on the first line of a segment
<sb0> since trigger already needs to be high to leave the jump table and select a segment, and then high again to select that segment
<sb0> er, I meant: "high again to start playing the segment"
sh4rm4 has joined #m-labs
<ysionneau> 05:27 < rjo> ysionneau: are you paying for bandwidth? ;) < on my server? nop :)
<rjo> sb0: yes. you have to demand that the first line in the first segment of a frame be triggered (in the new parlance).
<rjo> ysionneau: ok. looked like 1GB/day average on www.phys.ethz.ch.
<sb0> rjo, but you already have selected that frame (which requires trigger high) when you look at the wait bit of its first line
<sb0> can't ISE stay on the travis VM?
<whitequark> rjo: yeah, () is not a syntax error, it was a test grammar and not actual python one
<GitHub32> [artiq] sbourdeauducq pushed 3 new commits to master: http://git.io/vec1h
<GitHub32> artiq/master 9b46bc6 Sebastien Bourdeauducq: dbhub: do not use as context manager, turn close exceptions into warnings, do not close devices early in worker
<GitHub32> artiq/master 71b7fe3 Sebastien Bourdeauducq: worker_impl: add missing import
<GitHub32> artiq/master 1bca614 Sebastien Bourdeauducq: runtime: use UP/AMP terminology
<rjo> sb0: hmm. i'll push my stuff. looks much simpler to me now. let me know what you think and whether this addresses your concern.
<rjo> that build will fail ;)
<rjo> travis does not have easy asset caching for this.
<sb0> what build will fail? did i break something?
<GitHub6> [artiq] jordens pushed 4 new commits to master: http://git.io/vecMP
<GitHub6> artiq/master 051b01f Robert Jordens: wavesynth: refactor testing code
<GitHub6> artiq/master 1f54534 Robert Jordens: wavesynth: implement silence, add defaults, fix bias
<GitHub6> artiq/master e870b27 Robert Jordens: wavesynth: new semantics, fix compensation...
<rjo> the unittests fail here. something about the worker. lemme check
<rjo> get_logger vs getLogger.
<sb0> ah, damn this thing
<rjo> yep. some log4j guy probably managed to sneak his weird camelCase stuff into logging...
<sb0> nevertheless the tests pass
<GitHub159> [artiq] sbourdeauducq pushed 1 new commit to master: http://git.io/vecD6
<GitHub159> artiq/master 3257275 Sebastien Bourdeauducq: worker_db: get_logger -> getLogger
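The fix amounts to using the stdlib's camelCase name; for reference:

```python
import logging

# the logging module uses camelCase: getLogger, not get_logger
logger = logging.getLogger("artiq.master.worker_db")
logger.warning("device close failed")  # logging.get_logger does not exist
```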
<sb0> looks like one of those ugly race conditions that python has with signals. it doesn't do that on my other machine.
<rjo> neither on mine. but travis started to see something a while ago on test cleanup (not fatal)
<sb0> it would be nice if the python folks realized that async exceptions are a bad idea
travis-ci has joined #m-labs
<travis-ci> m-labs/artiq#88 (master - 1bca614 : Sebastien Bourdeauducq): The build was broken.
travis-ci has left #m-labs [#m-labs]
travis-ci has joined #m-labs
<travis-ci> m-labs/artiq#89 (master - 1d5f467 : Robert Jordens): The build was broken.
travis-ci has left #m-labs [#m-labs]
<sb0> what does discrete_compensate do?
<sb0> I would completely remove the unit test stuff from artiq/wavesynth/compute_samples.py
<rjo> converts continuous time b-spline derivative coefficients to discrete time.
<sb0> is that generic or is it hardcoded to the pdq fixed-point representation?
<rjo> ok. that plotting thing is nice though.
<rjo> it is intrinsic to the accumulator based cascaded discrete time interpolator.
travis-ci has joined #m-labs
<travis-ci> m-labs/artiq#90 (master - 3257275 : Sebastien Bourdeauducq): The build passed.
travis-ci has left #m-labs [#m-labs]
<sb0> you shouldn't both mutate c and return it
<rjo> if we don't like that, we need to move that function up the stack (into interpolate.py) and define that wavesynth programs take precompensated coefficients for discrete time interpolation.
<sb0> will the pdq driver also use it?
<sb0> I guess so? so it should probably go to interpolate.py so it is easily shared between pdq and wavesynth
<rjo> yes. it does. in driver.py
<sb0> also it doesn't work for order > 4, right?
<rjo> the problem is then that you don't get the correct result if you just do a+b*t+1/2*c*t**2... based on the coefficients=[a,b,c]
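The compensation rjo describes can be sketched and checked against the polynomial directly. This is a simplified model of the cascaded-accumulator interpolator, assuming c holds derivative coefficients per sample step; it is not the exact pdq2/wavesynth code:

```python
def discrete_compensate(c):
    """Mutate c in place: turn continuous-time derivative coefficients
    [v, v', v'', v'''] into initial values for a cascade of discrete
    accumulators, so the accumulator output matches the polynomial
    v + v'*n + v''*n**2/2 + v'''*n**3/6 at integer steps n."""
    if len(c) > 4:
        raise NotImplementedError("only splines up to third order")
    if len(c) > 2:
        c[1] += c[2] / 2
    if len(c) > 3:
        c[1] += c[3] / 6
        c[2] += c[3]

def accumulate(c, n):
    """Simulate the accumulator cascade for n samples."""
    v = list(c) + [0.0] * (4 - len(c))
    out = []
    for _ in range(n):
        out.append(v[0])
        v[0] += v[1]
        v[1] += v[2]
        v[2] += v[3]
    return out
```

Without the compensation, evaluating the polynomial and running the accumulators give different samples; with it they agree, which is why evaluating a+b*t+... on raw coefficients is wrong.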
<sb0> put it in interpolate (maybe find a better name for interpolate) and import from compute_samples and driver?
sh[4]rm4 has joined #m-labs
<sb0> rjo, interpolate -> coefmath?
<rjo> ack. interpolate will receive some attention.
<sb0> coef_math
<rjo> what does that mean?
<rjo> coefficient math?
<sb0> yes
<rjo> hmm. but it also generates those coefficients...
sh4rm4 has quit [Ping timeout: 265 seconds]
<GitHub174> [artiq] jordens pushed 2 new commits to master: http://git.io/vec5U
<GitHub174> artiq/master 75dfa95 Robert Jordens: wavesynth: move test code to unittests, fix mutability style
<GitHub174> artiq/master 9fd4594 Robert Jordens: interpolate: refactor discrete_compensate
<rjo> just "coefficients"?
<sb0> rjo, shouldn't discrete_compensate raise NotImplementedError if len(c) > 4?
<sb0> ok
<rjo> yeah. but that is not the right point to check that. should be much earlier when generating the coefficients.
<sb0> I'm just thinking about someone using that function somewhere else and silently getting an incorrect result
<sb0> discrete_compensate(self.c[1:]) won't work
<rjo> ack. fixing.
<GitHub197> [artiq] sbourdeauducq pushed 1 new commit to master: http://git.io/vecdP
<GitHub197> artiq/master 0bab73e Sebastien Bourdeauducq: wavesynth/compute_samples: fix list mutation bug
<sb0> it doesn't matter here, but I wonder how you'd do it without a memory copy
travis-ci has joined #m-labs
<travis-ci> m-labs/artiq#91 (master - 9fd4594 : Robert Jordens): The build passed.
travis-ci has left #m-labs [#m-labs]
<rjo> yep. you could do array() or memoryview...
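On the copy question: a memoryview over an array gives a writable view without copying, unlike a plain-list slice such as self.c[1:] (a small illustration):

```python
from array import array

# a plain-list slice like self.c[1:] copies, so in-place edits by
# discrete_compensate would be lost; a memoryview slice over an array
# is a zero-copy, writable window into the same buffer
c = array('d', [0.0, 1.0, 2.0, 3.0])
view = memoryview(c)[1:]  # no copy: writes go through to c
view[0] = 42.0            # c[1] is now 42.0
```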
<GitHub127> [artiq] jordens pushed 1 new commit to master: http://git.io/vecFZ
<GitHub127> artiq/master 7ea9250 Robert Jordens: wavesynth: interpolate->coefficients
<rjo> ok. done for today. good night!
<sb0> gn8!
<sb0> /morning
<GitHub64> [migen] sbourdeauducq pushed 2 new commits to master: http://git.io/vecFN
<GitHub64> migen/master 8798ee8 Robert Jordens: decorators: fix stacklevel, export in std
<GitHub64> migen/master 25e4d2a Robert Jordens: decorators: remove deprecated semantics
travis-ci has joined #m-labs
<travis-ci> m-labs/artiq#92 (master - 0bab73e : Sebastien Bourdeauducq): The build passed.
travis-ci has left #m-labs [#m-labs]
travis-ci has joined #m-labs
<travis-ci> m-labs/artiq#93 (master - 7ea9250 : Robert Jordens): The build passed.
travis-ci has left #m-labs [#m-labs]
<GitHub96> [artiq] sbourdeauducq pushed 2 new commits to master: http://git.io/veCrm
<GitHub96> artiq/master 0c62f0f Sebastien Bourdeauducq: runtime: remove generated service_table.h
<GitHub96> artiq/master 72f9f7e Sebastien Bourdeauducq: runtime: implement mailbox, use it for kernel startup, exceptions and termination
travis-ci has joined #m-labs
<travis-ci> m-labs/artiq#94 (master - 0c62f0f : Sebastien Bourdeauducq): The build has errored.
travis-ci has left #m-labs [#m-labs]
<GitHub197> [artiq] sbourdeauducq pushed 1 new commit to master: http://git.io/veC6g
<GitHub197> artiq/master f26c53c Sebastien Bourdeauducq: runtime: use KERNELCPU_PAYLOAD_ADDRESS on UP
travis-ci has joined #m-labs
<travis-ci> m-labs/artiq#95 (master - f26c53c : Sebastien Bourdeauducq): The build passed.
travis-ci has left #m-labs [#m-labs]
sh[4]rm4 has quit [Ping timeout: 265 seconds]
sh4rm4 has joined #m-labs
sh4rm4 has quit [Remote host closed the connection]
sh[4]rm4 has joined #m-labs
<sb0> this thing http://www.bristol.ac.uk/physics/research/quantum/qcloud/ looks rather bogus to me
<sb0> it would work just the same with pulses of classical waves, you don't need photons at all
<sb0> (except that the beamsplitters would need to send the whole pulse on the same side)
<sb0> all their "entanglement" is done with postselection tricks
FabM has joined #m-labs
rofl__ has joined #m-labs
sh[4]rm4 has quit [Remote host closed the connection]
FabM has quit [Ping timeout: 264 seconds]
FabM has joined #m-labs
FabM has quit [Remote host closed the connection]
<whitequark> postselection?
hozer has quit [Remote host closed the connection]
sj_mackenzie has quit [Remote host closed the connection]
siruf has quit [Ping timeout: 272 seconds]
siruf has joined #m-labs
sh[4]rm4 has joined #m-labs
rofl__ has quit [Ping timeout: 265 seconds]
<sb0> whitequark, conveniently ignore 90% of the results
<whitequark> like... cherry-pick results that appear to fit the expected process?
<sb0> and to make it work with classical light: actually the beamsplitters should be normal, but you just threshold at the detectors which is even easier
<sb0> it's a neat photonics chip, but a quantum processor? nope
<sb0> whitequark, almost