<narmstrong>
tomeu: can you specify which CI/CD variables you set, and which ones I should set? CI_REGISTRY_IMAGE? CI_PROJECT_DIR? is that a shared dir on the runner that is accessible by lava?
<narmstrong>
CI_PROJECT_URL? CI_JOB_ID?
<narmstrong>
or is this the gitlab-ci artifacts storage?
<narmstrong>
forget these questions, it seems to be handled by gitlab ci...
<narmstrong>
seems I have a buildah `Error during unshare(CLONE_NEWUSER): Operation not permitted` issue on my runner
<HdkR>
Oh neat. I didn't know that linux user namespaces allowed that
<narmstrong>
tomeu: ok fixed, thanks for your runner config :-p
adjtm has joined #panfrost
bbrezillon has joined #panfrost
<bbrezillon>
alyssa: I was wondering how the GPU knows that the last tiler job is linked to the fragment one
<alyssa>
bbrezillon: What do you mean?
<bbrezillon>
I don't see an explicit link between those 2 elements, but maybe I missed it
<alyssa>
the VERTEX/TILER and FRAGMENT jobs are in different job chains
<alyssa>
The dependency between them isn't handled in the GPU, that's between userspace and kernelspace to handle
<alyssa>
In fact, in the early days of Panfrost I had a bug where the symptom was random tiles being the clear colour instead of the geometry
<alyssa>
(In fact, that was a race condition between the VERTEX/TILER and FRAGMENT chains, solved by submitting with a dependency between them in kbase. I don't know how this is handled in the DRM driver.)
<bbrezillon>
we have dependencies
<bbrezillon>
by the way of in/out syncs
<alyssa>
That's how, then :)
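For reference, the dependencies bbrezillon mentions here are carried per job chain by the panfrost DRM submit ioctl. A sketch of the upstream include/uapi/drm/panfrost_drm.h layout follows; field names may differ slightly in the work-in-progress driver being discussed:

    struct drm_panfrost_submit {
            __u64 jc;              /* GPU address of the first job descriptor in the chain */
            __u64 in_syncs;        /* userspace pointer to an array of syncobj handles to wait on */
            __u32 in_sync_count;   /* number of entries in in_syncs */
            __u32 out_sync;        /* syncobj signalled once the whole chain has completed */
            __u64 bo_handles;      /* BOs referenced by the chain */
            __u32 bo_handle_count;
            __u32 requirements;    /* PANFROST_JD_REQ_* flags, e.g. PANFROST_JD_REQ_FS for fragment chains */
    };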
<bbrezillon>
but what happens if you have 2 independent job chains submitted in parallel
<bbrezillon>
something like vertex_tiler1, vertex_tiler2, fragment1, fragment2
<alyssa>
bbrezillon: You tell me! :)
<bbrezillon>
both targeting different FBOs
* alyssa
doesn't believe that case occurs on master, since there's no pipelining
<bbrezillon>
no, but that's my point
<bbrezillon>
now that I support pipelining, I need to handle that case properly
<bbrezillon>
:)
<bbrezillon>
and what about multi-ctx?
<alyssa>
bbrezillon: My point is just that the GPU doesn't handle it
<alyssa>
It's up to either the kernel or userspace
<alyssa>
In kbase, I would express this with two job chains:
<alyssa>
(vt1, frag1); (vt2, frag2)
<alyssa>
with each frag having a kbase-dependency on vt
<alyssa>
(that's a software requirement -- unrelated to the hardware dependencies, i.e. scoreboarding -- implemented in kbase)
<alyssa>
In the DRM driver, I guess you'll want something analogous with syncs?
warpme_ has quit [Quit: warpme_]
<bbrezillon>
I think I have that already
<bbrezillon>
in_sync of the fragment is out_sync of the vt
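A minimal sketch of that pattern against the upstream uapi, for the vt1,frag1 + vt2,frag2 case. The helper and variable names (submit_frame(), vtN_jc/fragN_jc, frame_done_sync, fd) are hypothetical, error handling is omitted, and the real Mesa code is structured differently:

    #include <stdint.h>
    #include <xf86drm.h>
    #include "panfrost_drm.h"   /* uapi header; exact include path depends on the tree */

    /* hypothetical helper: submit one frame's vertex/tiler chain, then its
     * fragment chain, linked through a syncobj */
    static void submit_frame(int fd, uint64_t vt_jc, uint64_t frag_jc,
                             uint32_t frame_done_sync)
    {
            uint32_t vt_done;
            drmSyncobjCreate(fd, 0, &vt_done);

            /* vertex/tiler chain: no in-syncs, signals vt_done on completion */
            struct drm_panfrost_submit vt = {
                    .jc = vt_jc,
                    .out_sync = vt_done,
            };
            drmIoctl(fd, DRM_IOCTL_PANFROST_SUBMIT, &vt);

            /* fragment chain: its in_sync is the vertex/tiler chain's out_sync,
             * so the scheduler won't start it before vt has retired */
            struct drm_panfrost_submit frag = {
                    .jc = frag_jc,
                    .in_syncs = (uintptr_t)&vt_done,
                    .in_sync_count = 1,
                    .out_sync = frame_done_sync,
                    .requirements = PANFROST_JD_REQ_FS,
            };
            drmIoctl(fd, DRM_IOCTL_PANFROST_SUBMIT, &frag);

            /* safe to drop the handle: the kernel grabbed its own fence
             * reference at submit time */
            drmSyncobjDestroy(fd, vt_done);
    }

    /*
     * usage for two pipelined frames targeting different FBOs:
     *   submit_frame(fd, vt1_jc, frag1_jc, frame1_sync);
     *   submit_frame(fd, vt2_jc, frag2_jc, frame2_sync);
     * each fragment chain depends only on its own vertex/tiler chain;
     * the two frames carry no dependency on each other.
     */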
<alyssa>
Then you should be good to go? What's the bug (symptom)?
<alyssa>
Also, I haven't thought about multictx
<bbrezillon>
well, running the deqp test suite shows weird output on the screen compared to the non-pipelined version
<alyssa>
Could you send the testlog.xml?
<bbrezillon>
(where each submit had a dependency on the previous one)
<alyssa>
(Upload it somewhere with the corresponding style sheets.)
<bbrezillon>
it hangs in the middle of the testsuite
<alyssa>
Ummm
<alyssa>
Could there be a kernel bug here?
<bbrezillon>
I'll try to run deqp-gles2 instead of deqp-vold
<bbrezillon>
in the vt1,frag1 + vt2,frag2 example, can vt1 and vt2 run in parallel
<bbrezillon>
(assuming they don't use the same resources)