lekernel changed the topic of #m-labs to: Mixxeo, Migen, MiSoC & other M-Labs projects :: fka #milkymist :: Logs http://irclog.whitequark.org/m-labs
sh[4]rm4 has joined #m-labs
sh4rm4 has quit [Ping timeout: 260 seconds]
fengling has joined #m-labs
<sb0_>
rjo, just noticed that using them crashes the device, while using other setjmp/longjmp implementations works fine. I did not investigate much; replacing them with another implementation was straightforward and has little overhead
<sb0_>
rjo, because the llvm and gcc documentation says so ;) it seems that the sjlj intrinsics (and the gcc __builtin_setjmp/longjmp) do some storage of registers on the stack instead of putting them into the jmp_buf
<sb0_>
there was no problem using gcc's __builtin_* and that small jmpbuf. but using __builtin_longjmp on an llvm.eh.sjlj.setjmp buffer freezes the device, even though the llvm docs say that they should be compatible.
Arch92 has joined #m-labs
Arch92 has quit [Remote host closed the connection]
<rjo>
sb0_: ok.
<rjo>
sb0_: i don't understand your mail about the synchronization. why would we want to pulse an rtio channel at the same time at the core and the target? i agree that that is a physical impossibility in any system.
Asa_Sawayn has joined #m-labs
<sb0_>
I'm imagining a setup where the target(s) only have a limited number of channels, i.e. only what is present in one DDS box
<sb0_>
then you have a core device that potentially has TTL lines connected directly to it, plus a number of target devices, and they all would need to be synced
<rjo>
sb0_: yes. you need one bit from each target end of a serdes link fed back to an rtio input.
<rjo>
sb0_: or fed back through a common rx serdes link to the core.
<sb0_>
this cannot compensate for the signal flight time in the synchronization line, but you can consider it negligible (a few tens of ns at most)
<rjo>
sb0_: you mean the latency e.g. from the dds end of a link to the ttl end of the other link?
<sb0_>
the latency on the cable that connects the serdes to the rtio input for feedback
<rjo>
sb0_: on that link only the skew between the different synchronization lines matters. the absolute latency is irrelevant.
<sb0_>
if everything is connected to target devices, and the synchronization lines are length matched, I agree
<sb0_>
but there are lots of IOs on the monsterboard, so you might as well use some for direct TTL
<rjo>
sb0_: in any case, to compensate for latency, you have to measure it. with or without serdes. and length matching would need to be done as well.
<rjo>
sb0_: my point is that the serdes does not complicate things.
<sb0_>
what I was proposing in the email measures and compensates for the latency on the sync line
<sb0_>
you would not need to length-match that one anymore
<rjo>
sb0_: the serdes certainly does not add jitter...
<rjo>
sb0_: let me read that mail again.
<rjo>
sb0_: with "skew" you mean effectively an rx-tx latency imbalance, right?
<sb0_>
the "skew from the core RTIO to the target RTIO" (s in the equations)?
<sb0_>
that's target RTIO counter value minus core RTIO counter value at any given instant
<sb0_>
they are not the same, because the RTIO cores cannot be started at the same time due to the unknown data transfer latency
<rjo>
ah. but that then assumes symmetric rx and tx latencies. p_rx == p_tx.
<rjo>
that protocol is basically the same as ntp or ptp.
<sb0_>
that assumes symmetric latency on the synchronization line, not on the data lines
<sb0_>
the sync line is direct RTIO-to-RTIO, so this is not an unreasonable assumption
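A minimal sketch (Python, not ARTIQ code) of the PTP/NTP-style exchange being discussed, assuming a bidirectional sync line with symmetric latency in both directions; the timestamp names t1..t4 and the function are illustrative only.

def estimate_skew(t1, t2, t3, t4):
    # t1: core RTIO counter when the sync message leaves the core
    # t2: target RTIO counter when it arrives at the target
    # t3: target RTIO counter when the reply leaves the target
    # t4: core RTIO counter when the reply arrives back at the core
    # Returns (skew, latency): skew is the target counter minus the core
    # counter at the same instant, latency is the one-way sync line delay.
    round_trip = (t4 - t1) - (t3 - t2)
    latency = round_trip / 2          # symmetric-latency assumption
    skew = t2 - t1 - latency
    return skew, latency

# example: target counter leads the core counter by 7 ticks,
# one-way sync line latency is 3 ticks
print(estimate_skew(100, 110, 140, 136))  # -> (7.0, 3.0)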
<rjo>
a bidirectional synchronization line.
<sb0_>
yes
<rjo>
then again, you can just feed all n-1 slave rtio synchronization lines to the n-th slave and ignore the common mode latency between the core/master rtio and the slaves.
<rjo>
but yes. i agree. if you have smart rtio slaves, you can measure the synchronization line lengths.
<sb0_>
yes, but that does not simplify much and removes the possibility of having direct TTL RTIO lines from the core device
<rjo>
well. it removes the need for them...
<sb0_>
the RTIO targets don't need to be that smart, you can just reuse the current core and connect it through the serial link
<sb0_>
the rest is software running on the core device
<rjo>
yes.
<rjo>
imho hooking up the monster board to the old hardware is stupid. much less time would be wasted if the existing hardware is tested with the papilio pro and the existing adapter and then later the monster board with the future link (and the future hardware).
<rjo>
it certainly does not demonstrate anything that cannot be demonstrated with the papilio pro.
<sb0_>
yeah... the breakout board won't be that straightforward as it needs to be FMC...
<rjo>
yep. looking at the cern guys, it took them a few iterations to just be able to reproducibly solder the fmc connector...
<rjo>
ha! yes. nice. what will happen is a stack of four pcbs: monster, fmc-to-pin header, pin-to-idc reshuffling, idc-to-scsi adapter.
<rjo>
ah. that idc-to-scsi is not a pcb but a "strain relief cable".
<rjo>
so three pcbs plus a strain/torque relief adapter.
<rjo>
sb0_: just a quick question. you will probably know this. the spartan6 datasheet wisely defers to the pll coregen on "PLL Output Jitter". do you remember any ballpark values?
<rjo>
sb0_: i am too lazy to fire that thing up.
Asa_Sawayn has quit [Remote host closed the connection]
<sb0_>
it depends on the pll settings, and it's generally rather high - something like 150-200ps
<rjo>
sb0_: ack.
<rjo>
sb0_: is there a (commonly agreed upon) definition of "jitter"?
<sb0_>
you mean what statistical distribution it refers to? I'd say no...
<rjo>
distribution + which parameter of that distribution + peak vs max vs rms + highpass corner + cycle-to-cycle or relative to reference....
<rjo>
worse, i didn't find _any_ datasheet that references a full definition of the jitter.
<rjo>
... the jitter that it specifies.
<rjo>
seems to be one of the more elusive quantities in electronics.
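As an illustration of how much the choice of definition matters, here is a small Python sketch (made-up edge timestamps, not from any datasheet) that quotes two different "jitter" numbers from the same measurement: RMS period jitter and peak cycle-to-cycle jitter.

import statistics

def period_jitter_rms(edges):
    # RMS deviation of each period from the mean period
    periods = [b - a for a, b in zip(edges, edges[1:])]
    return statistics.pstdev(periods)

def cycle_to_cycle_jitter_peak(edges):
    # peak difference between adjacent periods
    periods = [b - a for a, b in zip(edges, edges[1:])]
    return max(abs(b - a) for a, b in zip(periods, periods[1:]))

# hypothetical edge timestamps (ps) of a nominally 8 ns clock
edges = [0, 8000, 16050, 23980, 32010, 40000]
print(period_jitter_rms(edges))           # ~41 ps RMS
print(cycle_to_cycle_jitter_peak(edges))  # 120 ps peak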
xiangfu has quit [Remote host closed the connection]
fengling has quit [Quit: WeeChat 1.0]
MY123 has joined #m-labs
xiangfu has joined #m-labs
xiangfu has quit [Remote host closed the connection]
<GitHub190>
[ARTIQ] sbourdeauducq pushed 2 new commits to master: http://git.io/3sOshg
<GitHub190>
ARTIQ/master 37b0811 Sebastien Bourdeauducq: Turn some examples into unit tests
<GitHub190>
ARTIQ/master cf1f126 Sebastien Bourdeauducq: py2llvm/fractions: use internal linkage for gcd function
<sb0>
rjo, one advantage of the debug environment variables over command line arguments is that they'd also work easily in the unit tests
<rjo>
sb0: yes. argparse is a bit clumsy. but instead of environment variables you can just use global module-level variables. you even get namespaces for free.
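A hypothetical sketch of the two approaches (the names below are invented, not actual ARTIQ identifiers):

import os

# approach 1: debug switch from an environment variable; works the same
# under the command-line tools and under the unit tests
DUMP_IR = bool(os.getenv("ARTIQ_DUMP_IR"))

# approach 2: a plain module-level flag; a unit test imports the module and
# sets it, and the module name provides the namespace for free
dump_ir = False

def compile_kernel(source):
    if DUMP_IR or dump_ir:
        print("IR dump requested for:", source)
    # actual compilation omitted from this sketch

A test would then set the flag directly, e.g. debugflags.dump_ir = True (debugflags being the hypothetical module holding the code above), before invoking the compiler.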
MY123 has quit [Quit: Connection closed for inactivity]
mrueg has quit [Remote host closed the connection]
mrueg has joined #m-labs
MY123 has joined #m-labs
<rjo>
sb0: taking a step back i think we do need something like a "toolchain". an artiq compiler, debugger, assembler as programs.
mumptai has joined #m-labs
kilae has quit [Quit: ChatZilla 0.9.90.1 [Firefox 32.0.2/20140917194002]]