<GitHub72>
misoc/master 1549956 Sebastien Bourdeauducq: cpuif: add memories to csr.h
<lekernel>
of course, the nvidia driver and/or the x11 shitware ignores modes described using new edid methods when they are not also described with the old ones
<lekernel>
e.g. if you describe 1024x768@60Hz using a "detailed timing", the linux graphics shitware stack will refuse to show it unless the bit is also set in "established timings". modes that cannot be set using established timings, e.g. 720p, work fine though.
<lekernel>
I hope it doesn't get worse than that ...
<larsc>
is that the closed source driver?
<lekernel>
yes
<larsc>
iirc the DRM framework has no such restriction
<lekernel>
I wonder what sort of software architecture can lead to such a behaviour
<larsc>
it just collects every possible mode from all sources
<larsc>
maybe it's in the EDID spec or something
<larsc>
but yeah, I wanted to write a tool that you pass a set of modes and which spits out a valid edid with the modes set in all the possible sources
<lekernel>
all 6, including the two GTF variants? :-)
<larsc>
well, that's the plan
<lekernel>
btw - is there a good algorithm for finding the best approximation x/y of a fraction X/Y with a < x < b and c < y < d?
<larsc>
brute force ;)
<lekernel>
yeah, that could work - I have a pretty small range
<larsc>
if b-a is small enough
<lekernel>
but it's ugly :)
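[Editor's note: the brute-force search larsc suggests for the bounded best-fraction problem can be sketched in a few lines. This is a hypothetical illustration, not code from the channel; the function name and the exclusive bounds a < x < b, c < y < d are taken from lekernel's question above.]

```python
from fractions import Fraction

def best_approx(X, Y, a, b, c, d):
    """Brute-force search for the fraction x/y closest to X/Y,
    subject to a < x < b and c < y < d (bounds exclusive, as in
    the question above)."""
    target = Fraction(X, Y)
    best, best_err = None, None
    for y in range(c + 1, d):
        for x in range(a + 1, b):
            err = abs(Fraction(x, y) - target)
            if best_err is None or err < best_err:
                best, best_err = (x, y), err
    return best
```

The cost is O((b-a)(d-c)) evaluations, which is fine for the small ranges of a clock synthesizer's multiply/divide fields; for large ranges one would use continued fractions (Stern-Brocot search) instead.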
<lekernel>
there could be a reason for the nvidia driver behaviour
<lekernel>
it supports GPU scaling of established timing resolutions to higher resolutions
<lekernel>
e.g. if you set up the edid with 720p, it'll still show 640x480 and 800x600 in nvidia-settings (but not in xrandr) and scale them up
<lekernel>
I guess it does the determination of the "scaled" modes using the edid established timing bits
<lekernel>
and the driver gets confused when an established timing mode is set as a detailed timing ...
<larsc>
hm, everybody seems to be using Gray counters for async fifos, but that will fail if the faster domain does reads/writes in bursts, right?
<larsc>
but the reason a Gray counter works is that only one bit changes at a time, and if the source domain counts faster than the target domain it is possible that more than one bit changes
<larsc>
e.g. if I have Gray 0 = (000) and increment twice, which is 011, then the target domain might see 010, which is 3
<lekernel>
there's a trick that adds one bit to the counters and makes everything work, but it's been a long time since I coded it and I don't remember the details
<larsc>
I found another paper from the same author which explains why it works fine
<larsc>
In my mental model I assume that the counter just jumps from 000 to 011, which could cause problems if bit 2 is sampled before bit 3
<larsc>
but of course the counter is 010 in between
<larsc>
001
<larsc>
I mean
<larsc>
which means that the 1 in bit 3 has already propagated
<larsc>
the extra bit is just for full and empty checking, as far as I understand
<larsc>
I think the "Gray counter style #2" FIFO from that paper is what you implemented in migen
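[Editor's note: the one-bit-change property under discussion is easy to demonstrate. This is an illustrative sketch, not code from the channel; it shows the standard binary-to-Gray conversion and checks that successive codes differ in exactly one bit, which is why a burst in the faster domain is safe: the counter still passes through every intermediate code, as larsc's 000 → 001 → 011 example shows.]

```python
def bin_to_gray(n):
    # Standard binary-to-Gray conversion: each Gray bit is the
    # XOR of adjacent binary bits.
    return n ^ (n >> 1)

# Successive Gray codes differ in exactly one bit, so a pointer that
# increments once per cycle can be sampled from another clock domain:
# at worst the sampler sees the old or the new value, never a garbled
# mix of the two.
for i in range(15):
    diff = bin_to_gray(i) ^ bin_to_gray(i + 1)
    assert bin(diff).count("1") == 1

# larsc's example: counting 0, 1, 2 gives 000, 001, 011.
assert [bin_to_gray(i) for i in range(3)] == [0b000, 0b001, 0b011]
```

The extra MSB mentioned above does not take part in this property; it only distinguishes full from empty when the read and write pointers otherwise coincide.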
<lekernel>
barmstrong, yes, chisel is quite similar fundamentally
<barmstrong>
i think it's interesting that it does its own simulation. i feel like that'd look a lot like icarus
<lekernel>
what do you mean, own simulation?
<GitHub52>
[misoc] sbourdeauducq pushed 2 new commits to master: http://git.io/N1638Q
<GitHub52>
misoc/master c8da400 Sebastien Bourdeauducq: videomixer: compute best m/d value for pixel clock synthesizer
<GitHub52>
misoc/master 132b6ce Sebastien Bourdeauducq: videomixer: set established timing bits in EDID
<larsc>
lekernel: are you using the mmcm block?
<lekernel>
dcm_clkgen
<lekernel>
one of the rare things that the s6 got right. the serial interface is a little annoying, but I'm just nitpicking.