<alyssa>
^ Not quite there yet, but I think getting there!
<chrisf>
is looking pretty good
nerdboy has quit [Ping timeout: 248 seconds]
stikonas has quit [Remote host closed the connection]
lmcloughlin has quit [Quit: Connection closed for inactivity]
<alyssa>
Over dinner I had the realization that the #1 problem by far is just predicting if there could possibly be any spilling -- we're not terribly interested in gradations of spilling
<alyssa>
So that means we don't need to guess the cost of spilling, it's more of a boolean thing
<alyssa>
(Or even better - a probability that a program will spill)
<alyssa>
Unfortunately, while I barely know enough calculus for the boolean heuristic, I definitely don't know enough statistics for a probabilistic approach. Alas.
robmur01_ has quit [Ping timeout: 260 seconds]
<alyssa>
That's a very good thing, because modeling spilling correctly is expensive, and approximating involves a lot of guesswork
Stenzek has quit [Ping timeout: 265 seconds]
Stenzek has joined #panfrost
<alyssa>
So the next question is how do we *accurately* predict whether we will spill <===> the maximum register pressure
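The boolean heuristic described above can be sketched as a single threshold comparison: spilling is possible iff the estimated maximum register pressure exceeds the register file. A minimal sketch, assuming a hypothetical work-register count and function name (neither is from the actual Panfrost compiler):

```c
#include <stdbool.h>

#define NUM_WORK_REGS 16 /* assumed work register count, illustrative */

/* Boolean spill predictor: we only care *whether* spilling can happen,
 * not how much, so a single threshold comparison on the estimated
 * maximum register pressure suffices. */
static bool
will_spill(unsigned max_pressure)
{
    return max_pressure > NUM_WORK_REGS;
}
```

The accuracy of the predictor then rests entirely on how `max_pressure` is estimated, which is what the liveness discussion below is about.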
<alyssa>
In theory liveness analysis ought to do that
<alyssa>
In practice liveness analysis doesn't quite capture everything, because of vector registers (SIMD :V), pipeline registers (sometimes live things need not spill), non-work registers, and spilled non-work registers
<alyssa>
A simple live channel counting algorithm will underestimate, overestimate, overestimate, N/A respectively
<alyssa>
But an overestimate should be okay, since that will bias towards reducing spilling instead of reducing UBO traffic
nerdboy has joined #panfrost
<alyssa>
--Indeed. If instead of counting channels you count entire vec4s (even for just a scalar), and then pay attention *only* to spilling with no regard for the UBO stuff (since that will naturally sort itself out at the moment), again ignoring threading effects for now...
<alyssa>
(99.9% reduction in spilling, which is what we're after)
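The "count entire vec4s" overestimate above can be sketched as follows: every live value is charged a full vec4 (4 channels), even if it is a scalar, so the pressure estimate only errs toward predicting more spilling. All names here are illustrative, not from the actual compiler:

```c
#include <stdbool.h>

#define MAX_VALUES 64 /* illustrative cap on SSA values */

/* Deliberate overestimate of register pressure: charge one full vec4
 * per live value, regardless of how many channels it actually uses.
 * This biases the heuristic toward reducing spilling rather than
 * reducing UBO traffic, which is the desired trade-off. */
static unsigned
vec4_pressure(const bool live[MAX_VALUES])
{
    unsigned pressure = 0;

    for (unsigned i = 0; i < MAX_VALUES; ++i) {
        if (live[i])
            pressure += 4; /* whole vec4, even for a scalar */
    }

    return pressure;
}
```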
* alyssa
is now experimenting with a threading heuristic as well
<alyssa>
Doing one well, however ... may not be strictly simple ...
<alyssa>
:f
nerdboy has quit [Ping timeout: 246 seconds]
<icecream95>
Interesting... I tried darkplaces and it has the same flickering problems that quakespasm has. Xonotic (which uses darkplaces) doesn't, though.
<alyssa>
icecream95: What is the flickering problem exactly?
<icecream95>
There is an issue about it on Mesa gitlab
<alyssa>
IIRC I couldn't see anything obviously wrong on the apitrace but maybe I'm mixing it up with a different issue
<icecream95>
The apitrace I made for the other quakespasm issue (textures missing) didn't show the flickering problem.
<bbrezillon>
I just resumed working on the vk implem, and knowing descriptor sizes is important (things are allocated from pools, and we need to reserve memory ahead of time)
<bbrezillon>
this patch is not mandatory of course, but it makes job desc size calculation much easier
stikonas has joined #panfrost
abordado has joined #panfrost
abordado has quit [Remote host closed the connection]
abordado has joined #panfrost
abordado has quit [Quit: Leaving]
abordado has joined #panfrost
abordado has quit [Remote host closed the connection]
abordado has joined #panfrost
davidlt has quit [Ping timeout: 268 seconds]
abordado has quit [Ping timeout: 245 seconds]
abordado has joined #panfrost
abordado has quit [Ping timeout: 248 seconds]
abordado has joined #panfrost
abordado has quit [Ping timeout: 248 seconds]
flacks has quit [Ping timeout: 250 seconds]
TheCycoONE1 has quit [Ping timeout: 246 seconds]
EmilKarlson has quit [Ping timeout: 245 seconds]
thefloweringash has quit [Ping timeout: 246 seconds]
tgall_foo has quit [Ping timeout: 265 seconds]
youcai has joined #panfrost
youcai has left #panfrost [#panfrost]
davidlt has joined #panfrost
CrystalGamma has joined #panfrost
tgall_foo has joined #panfrost
<alyssa>
bbrezillon: NAK. We don't use the next_job_32 fields on any platform to avoid duplicating code paths, and the blob doesn't use next_job_32 on 64-bit platforms (so as long as we have an aarch64 board+blob for a given hw, we can get dumps)
megi has quit [Ping timeout: 268 seconds]
<alyssa>
So just drop next_job_32 entirely and have a single 64-bit next_job field and avoid all the indirection :)
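The simplification being suggested, a single 64-bit `next_job` field used on every platform instead of parallel 32-bit/64-bit variants, might look like this. The surrounding field and struct names are illustrative, not the actual Panfrost job descriptor layout:

```c
#include <stdint.h>

/* Illustrative job header: one 64-bit next_job pointer shared by all
 * platforms, so decode and job-chaining code need only a single path
 * instead of branching on 32-bit vs. 64-bit link fields. */
struct job_header {
    uint32_t flags;    /* placeholder for other header fields */
    uint64_t next_job; /* GPU address of the next job, 0 terminates */
};
```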
<tomeu>
maybe we should run 1 in 10 tests, to keep the run time low
<bbrezillon>
alyssa: ok
nerdboy has quit [Ping timeout: 250 seconds]
<bbrezillon>
alyssa: I guess I can keep the offset calculation in decode.c without keeping the next_job_32
<alyssa>
tomeu: How do other drivers cope?
<tomeu>
alyssa: by sharding
<tomeu>
we can do that as well once we get all the kevins online
<alyssa>
Ah :|
<tomeu>
but for now I think we could run 1 in 10 tests easily
<alyssa>
Yeah, let's do that if that's doable at all
<tomeu>
do you think that would be helpful atm?
<tomeu>
cool
<alyssa>
At least to get a sense of where we're at?
<tomeu>
there's some crashes we should fix
<alyssa>
Also I think some of the slowness is from failing
<tomeu>
we don't seem to be that far
<tomeu>
I expect above 90%
<tomeu>
by running 1 in 10, I expect to have decent coverage at a low cost
<tomeu>
as tests are grouped by functionality
abordado has joined #panfrost
<tomeu>
deqp-gles3 is 44k tests, but I think deqp-gles31 is fairly small
<tomeu>
maybe we can also run 1 in 2 of gles31 or so, not sure how useful that would be
<alyssa>
tomeu: GLES31 would not be useful right now, no
<alyssa>
GLES3 we're a lot closer to (I don't think there are any big things missing for GLES3, just a huge number of small things -- so basically where we were at for GLES2 in early 2019)
flacks has joined #panfrost
EmilKarlson has joined #panfrost
thefloweringash has joined #panfrost
stikonas has quit [Remote host closed the connection]
EmilKarlson has quit [Write error: Connection reset by peer]
TheCycoONE1 has quit [Remote host closed the connection]
flacks has quit [Write error: Connection reset by peer]
thefloweringash has quit [Remote host closed the connection]
stikonas has joined #panfrost
TheCycoONE1 has joined #panfrost
megi has joined #panfrost
abordado has quit [Remote host closed the connection]