<larsc>
I tried something like that and it took forever
<lekernel>
and indexing the list doesn't? weird...
<larsc>
what I did was sum([list(reversed(list(x))) for x in zip(it, it, it, it, it, it, it, it)], [])
<lekernel>
my code is regularly slow here
<lekernel>
"fast enough" as Pythoners say
<larsc>
I guess the swapping makes sense, since the bits are of course stored in reversed order. The first bit is the lowest bit
<larsc>
bit 9 in the data word is bit 0 in the datastream
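The snippet above regroups the flat word stream into chunks of eight and reverses each chunk; the sum(lists, []) idiom re-copies the growing result list on every chunk, which makes it quadratic and would explain why it "took forever". A minimal sketch of an equivalent linear-time version, assuming data is a flat list of captured 16-bit words (the function name and chunk size are illustrative, not taken from the real dump script):

    # Minimal sketch, not the actual dump script: regroup a flat list of
    # 16-bit capture words into chunks of eight and reverse each chunk,
    # undoing the word order inside the 128-bit memory word.
    from itertools import chain

    def deswizzle(data, group=8):
        it = iter(data)
        chunks = zip(*[it] * group)   # same trick as zip(it, it, ..., it)
        # chain.from_iterable builds the result in one pass instead of the
        # quadratic sum(list_of_lists, [])
        return list(chain.from_iterable(reversed(chunk) for chunk in chunks))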
<lekernel>
larsc, pushed new raw_dvi file without the alignment problem and with the words in the right order
<lekernel>
well if git doesn't crash
<lekernel>
wtf...
<lekernel>
done
<larsc>
what exactly did you change?
<larsc>
cause I now get lots of words where the upper 6 bits are non-zero
<lekernel>
I aligned the DMA buffer on a memory word boundary (I had forgotten to do it - stupid mistake); the misalignment was causing the messed-up words at the beginning (the ones you worked around with [4:-4])
<lekernel>
and put the 16-bit words in order in the 128-bit word (ISE says timing is not met, but it appears to work anyway)
<lekernel>
hmm yes, the upper 6 bits are funny
<lekernel>
maybe coming from the timing issue :) let's keep the reversed bits then...
<lekernel>
s/reversed bits/reversed words
<lekernel>
ah, the joys of FPGA debugging
<larsc>
well it seems to work if I just ignore them
<lekernel>
yes, same here
<lekernel>
also the character sync is stable at 9
<lekernel>
without having to use a large value for the counter threshold
<larsc>
actually it's the lower 6 bits we need to ignore
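In other words, each captured 16-bit word seems to carry a 10-bit TMDS character in its upper bits, with the low 6 bits being junk. A minimal sketch of extracting the character under that assumption (the helper name is made up, not from the real code):

    # Minimal sketch, assuming the 10-bit TMDS character sits in bits 15..6
    # of each captured 16-bit word and bits 5..0 are to be ignored.
    def tmds_char(word):
        return (word >> 6) & 0x3FF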
<lekernel>
let's use a design that meets timing and try to avoid having too many loose ends ...
<lekernel>
correction: the character sync is still unstable ...
<lekernel>
it looks as if the 8b10b decoding is wrong... let's try with a color gradient
<larsc>
the one on hw?
<lekernel>
both I think
<lekernel>
they're almost the same code. but here the hw doesn't do any decoding, so we're not affected by those hw bugs
<lekernel>
data from the red channel is dumped directly into DRAM after phase detection (which just adjusts the IO timing and doesn't do character boundary detection or anything)
<larsc>
but the image looks more or less fine doesn't it?
<lekernel>
no, it doesn't... look eg at the antialiased text
<lekernel>
it should be a progressive color change, but here there are spikes every time
<lekernel>
a full gradient should show this problem very well
<lekernel>
ah, actually sync is on blue :)
<lekernel>
not red
<lekernel>
and we're sampling the *blue* channel right now
<lekernel>
and the gradient is totally messed up, just as I expected
<lekernel>
p.13 "The 9th bit indicates no encoding is required to minimize transitions, as there are no transitions between each bit [Figure 12]. "
<lekernel>
so there are cases when TMDS does not change the original bits? at first sight, it seems to me the decoding algo from the DVI spec always changes the original data
<larsc>
if bit 9 is set you invert bits 7 to 0
<lekernel>
that's the DC balance bit
<lekernel>
but the second step which actually minimizes transitions is controlled by bit 8
<lekernel>
according to the DVI spec this switches between XOR and XNOR
<larsc>
yes
<lekernel>
according to this Silicon Image paper, this switches between no-change and some encoding
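For reference, the decode path as described in the DVI 1.0 spec: bit 9 is the DC-balance flag (if set, bits 7..0 were inverted by the encoder), and bit 8 selects whether the encoder used XOR or XNOR between adjacent bits. A minimal sketch in Python, assuming c is a 10-bit data-period character (control characters are not handled here):

    # Minimal sketch of the TMDS data-character decode per the DVI 1.0 spec.
    def tmds_decode(c):
        d = c & 0xFF
        if c & (1 << 9):            # DC-balance flag: undo the inversion of bits 7..0
            d ^= 0xFF
        out = d & 1                 # bit 0 is passed through unchanged
        for i in range(1, 8):
            bit = ((d >> i) ^ (d >> (i - 1))) & 1
            if not (c & (1 << 8)):  # bit 8 clear -> encoder used XNOR, so invert
                bit ^= 1
            out |= bit << i
        return out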