<Marex>
anarsoul: hey, you mentioned 5.4.y is not good enough for testing Lima, right ?
<Marex>
seems like Xilinx totally botched ABI compatibility with their TFA/PMUFW, so I cannot get anything newer than 5.4 working, lovely
<Marex>
ah look, if I revert the broken xilinx patches from next, then I can boot next, good
<anarsoul>
Marex: you can always backport latest lima patches into 5.4
<Marex>
oooh, and the GALLIUM_HUD also works with next, unlike 5.4.y
<Marex>
anarsoul: I thought you mentioned there's some DRM scheduling problem in 5.4
<anarsoul>
oh, right
<anarsoul>
and drm_sched patches
<Marex>
anarsoul: do you know which drm sched patches are relevant ?
<anarsoul>
no, ask MoeIcenowy
<Marex>
anarsoul: I think it's "drm/scheduler: rework entity creation"
<anarsoul>
Marex: I haven't looked into it. Feel free to experiment :)
<Marex>
anarsoul: already done, I'll run dEQPs with next and then this patched 5.4.32
<anarsoul>
you'll likely get slightly different results since next has dynamic tile heap support while 5.4 uses a fixed 1MB tile heap
<Marex>
anarsoul: I backported that too
<anarsoul>
good
<MoeIcenowy>
Marex: I backported "drm/sched: Fix passing zero to 'PTR_ERR' warning v2", "drm/scheduler: Avoid accessing freed bad job.", "drm/lima: use drm_sched_fault for error task handling"
<anarsoul>
MoeIcenowy: thanks
<Marex>
MoeIcenowy: I have all those backported too, thanks
<bshah>
but then the 384MHz value is not there, so I am confused :)
<bshah>
anarsoul: do you have any reference for where the value of 384MHz was taken from?
<anarsoul>
IIRC 384 MHz is the default
<anarsoul>
there's no frequency table for A64; the BSP runs it constantly at 432 MHz
<bshah>
hm
<bshah>
okay, well... I will at least test out the suspend/resume part, since I do occasionally hit a gp error after resume in kwin
<anarsoul>
update mesa
<bshah>
should be ~3 days old
<anarsoul>
OK
<anarsoul>
enunes: did you try enabling const and uniforms cloning for gpir? it should benefit from it as well
<bshah>
so this patch seems to work fine
<anarsoul>
no more gp errors?
<bshah>
nope, kwin no longer crashes... but I will give it a bit more testing, about a day, before sending "Tested-by"
<bshah>
(mostly because this was not a consistent error)
<bshah>
it used to happen roughly 1 in 10 times
<anarsoul>
yuq825: ^^
<anarsoul>
bshah: suspend/resume 100 times in a row? :)
<bshah>
xD I can probably script it with rtcwake or something xD
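A minimal sketch of such a loop in Python, assuming rtcwake can wake the board from suspend-to-RAM and that lima errors land in the kernel log; the cycle count, timings, and grep pattern are assumptions:

#!/usr/bin/env python3
# Suspend to RAM repeatedly via rtcwake and check the kernel log for lima
# errors after each resume. Run as root; timings are guesses for the board.
import subprocess
import time

CYCLES = 100
SUSPEND_SECONDS = 15  # how long to stay suspended each cycle

for i in range(1, CYCLES + 1):
    subprocess.run(["dmesg", "--clear"], check=True)  # start each cycle with a clean log
    subprocess.run(["rtcwake", "-m", "mem", "-s", str(SUSPEND_SECONDS)], check=True)
    time.sleep(5)  # let the compositor redraw after resume
    log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    errors = [line for line in log.splitlines() if "lima" in line and "error" in line]
    print(f"cycle {i}: {len(errors)} lima error line(s)")
    if errors:
        print("\n".join(errors))
        break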
<Marex>
anarsoul: how many dEQP errors should I expect ?
<Marex>
anarsoul: on dEQP-GLES2.functional.* that is
<anarsoul>
Marex: see mesa/.gitlab-ci/deqp-lima-fails.txt
<anarsoul>
and deqp-lima-skips.txt
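A rough sketch of checking a run against those lists; only the two mesa file paths come from this discussion, while the local results file and the one-test-name-per-line format are assumptions:

#!/usr/bin/env python3
# Diff a local list of failing dEQP cases against mesa's known-fails and
# skips lists for lima. Assumes each file holds one test name per line;
# "my-fails.txt" is a hypothetical dump of the local run's failures.
import sys

def load(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip() and not line.startswith("#")}

known_fails = load("mesa/.gitlab-ci/deqp-lima-fails.txt")
skips = load("mesa/.gitlab-ci/deqp-lima-skips.txt")
local_fails = load(sys.argv[1] if len(sys.argv) > 1 else "my-fails.txt")

unexpected = local_fails - known_fails - skips
print(f"{len(local_fails)} failures, {len(unexpected)} not in the known-fails list")
for name in sorted(unexpected):
    print("UNEXPECTED:", name)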
<Marex>
so around 60 ?
<anarsoul>
likely more, since there are flakes in the skips list
<anarsoul>
should be under 100 though
<Marex>
I had 500 on next/master + mesa/master, so I suspect I am doing something wrong
<Marex>
I'll keep digging
<anarsoul>
Marex: skip skips :)
<anarsoul>
once you hit a GPU error the context is considered tainted and jobs from that context are not executed
<anarsoul>
so if some test hits a gpu error, all the following tests will fail if deqp reuses the context
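One way to avoid that cascade, sketched here, is to run every case in its own deqp process so a faulted context only affects one test; the binary path, caselist file, and the crude "Pass" check are assumptions, while --deqp-case is a standard dEQP option:

#!/usr/bin/env python3
# Run each dEQP case in a separate process so a GPU fault taints only that
# process's context instead of every test that follows it.
import subprocess

DEQP = "./deqp-gles2"            # path to the dEQP GLES2 binary (assumed)
CASELIST = "gles2-caselist.txt"  # one dEQP-GLES2.functional.* name per line (assumed)

with open(CASELIST) as f:
    cases = [line.strip() for line in f if line.strip()]

failures = []
for case in cases:
    # A fresh process means a fresh context for every test.
    result = subprocess.run([DEQP, "--deqp-case=" + case],
                            capture_output=True, text=True)
    if "Pass" not in result.stdout:  # crude check; real runs should parse the qpa log
        failures.append(case)

print(f"{len(failures)}/{len(cases)} cases did not report Pass")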
<bshah>
some more manual testing and it seems completely stable, no gpu crashes on resume; I will find the patch on the ML and send a Tested-by
<anarsoul>
as far as I remember it's not on the ML
<yuq825>
sorry, I haven't sent it to the ML as I don't have a way to test it
<anarsoul>
apparently you've got it tested :)
<yuq825>
yeah, I'll send it out later
<bshah>
Feel free to add "Tested-by: Bhushan Shah <bshah at kde.org>"
<yuq825>
thanks
<kittehuwu>
anarsoul: in glxinfo it doesn't even list half of the FBO extensions
<enunes>
anarsoul: I didn't try cloning in gp, I'll give it a try
<enunes>
I had another small set of gp changes from a long time ago, mostly moving more stuff to nir passes, but that changed the behavior of some tests, which started hitting the unknown gpir bugs... so I preferred not to change it at all
<enunes>
if it's more stable now, I can also try that again
<bshah>
agh
<bshah>
just when yuq sent the patchset for suspend/resume, my lima crashed :S
<bshah>
[ 2414.378652] lima 1c40000.gpu: gp error irq state=4 status=2b
<bshah>
[ 2414.387932] lima 1c40000.gpu: fail to save task state: error task list is full
<bshah>
[ 2414.387951] lima 1c40000.gpu: gp task error int_state=0 status=aa
<bshah>
but this looks like a potentially different error than what I used to get?
<kittehuwu>
yeah, I just can't get any FBO working on my phone