ChanServ changed the topic of #lima to: Development channel for open source lima driver for ARM Mali4** GPUs - Kernel has landed in mainline, userspace driver is part of mesa - Logs at https://people.freedesktop.org/~cbrill/dri-log/index.php?channel=lima and https://freenode.irclog.whitequark.org/lima - Contact ARM for binary driver support!
warpme_ has joined #lima
hexdump0815 has joined #lima
<hexdump0815> i'm trying from time to time to run my arm builds of vcvrack (www.vcvrack.com - my arm builds: https://github.com/hexdump0815/vcvrack-dockerbuild-v1/releases/tag/v1.1.6_5) on lima systems (amlogic s905w or odroid u3) - by now it gets quite far, but still hangs at some point
<hexdump0815> it works well with the mali blob and the mainline-ported driver, and it works well with panfrost on a rk3399 system
<hexdump0815> i have now written down the details as a gitlab issue at: https://gitlab.freedesktop.org/mesa/mesa/-/issues/3467
<hexdump0815> please let me know if you need any more details ... i tried to create an apitrace but so far it did not work too well (i'm getting some warnings from apitrace and an apitrace replay of the trace file does not seem to work)
<hexdump0815> i saw the same error on aarch64 mali450 (amlogic s905w tv box) and on armv7l mali400 (odroid u3) and tried it up to linux 5.9.0 kernel and 20.1.6 mesa so far
<hexdump0815> sorry, i meant linux 5.8.0 kernel
<hexdump0815> here are the apitrace warnings i get - maybe they are harmless? - https://pastebin.com/raw/94dgnccF
<hexdump0815> ok - i think i finally got a usable trace which i was at least able to replay to trigger the error again ... i updated the above gitlab issue and attached the apitrace
hexdump0815 has quit [Remote host closed the connection]
<anarsoul|2> cool, thanks
<anarsoul|2> lima needs CMA for buffers with SCANOUT flag
<anarsoul|2> technically it's not lima, it's kmsro
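As background for the scanout point: in a kmsro setup the render GPU (lima) has no display hardware of its own, so a buffer flagged for scanout has to live in memory the display controller can actually scan out from, which on these boards means CMA-backed contiguous memory obtained through the KMS device. A minimal application-side sketch, assuming libgbm and a /dev/dri/card0 KMS node (not lima or kmsro internals):

    /* GBM allocation that requests a scanout-capable buffer.  With a
     * kmsro-style stack, GBM_BO_USE_SCANOUT forces a display-compatible
     * (typically CMA-backed) allocation on the KMS device, while a
     * render-only buffer could stay in GPU-private memory. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <gbm.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);  /* assumed KMS node */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct gbm_device *gbm = gbm_create_device(fd);
        struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080,
                                          GBM_FORMAT_XRGB8888,
                                          GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
        if (!bo)
            fprintf(stderr, "scanout-capable allocation failed (no CMA?)\n");
        else
            gbm_bo_destroy(bo);

        gbm_device_destroy(gbm);
        close(fd);
        return 0;
    }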
<enunes> anarsoul|2: I found out today that running deqp with the surfaceless egl backend is what gives me the ~79% pass rate
<enunes> running with the windowless x11 backend makes a few more pass
<enunes> and running with the x11 egl backend actually gives Passed: 15799/16317 (96.8%)
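For reference, the surfaceless egl backend here is Mesa's EGL_MESA_platform_surfaceless platform, which the CI scripts select with EGL_PLATFORM=surfaceless; selecting it explicitly from client code looks roughly like this (a sketch, not deqp's own code):

    /* Selecting Mesa's surfaceless EGL platform explicitly instead of
     * relying on the EGL_PLATFORM environment variable.  There is no
     * native window system, so rendering targets pbuffers or FBOs. */
    #include <stdio.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    int main(void)
    {
        PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
            (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                eglGetProcAddress("eglGetPlatformDisplayEXT");
        if (!get_platform_display) {
            fprintf(stderr, "eglGetPlatformDisplayEXT not available\n");
            return 1;
        }

        /* The surfaceless platform takes no native display. */
        EGLDisplay dpy = get_platform_display(EGL_PLATFORM_SURFACELESS_MESA,
                                              EGL_DEFAULT_DISPLAY, NULL);
        EGLint major, minor;
        if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, &major, &minor)) {
            fprintf(stderr, "failed to initialize surfaceless EGL\n");
            return 1;
        }
        printf("surfaceless EGL %d.%d\n", major, minor);
        eglTerminate(dpy);
        return 0;
    }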
<anarsoul|2> you're probably using the wrong visual with surfaceless
<anarsoul|2> make sure that you have depth specified, otherwise all the tests that need depth will fail
<enunes> it's actually the same command line that I copied from the CI scripts, I just don't export EGL_PLATFORM=surfaceless anymore
<anarsoul|2> can you share your cmdline?
<enunes> deqp-gles2 --deqp-surface-width=256 --deqp-surface-height=256 --deqp-visibility=hidden --deqp-log-images=disable --deqp-crashhandler=enable --deqp-surface-type=pbuffer --deqp-caselist-file=case-list.txt
<anarsoul|2> yeah, you need to specify visual
<anarsoul|2> try adding --deqp-gl-config-name=rgba8888d24s8ms0
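The config name encodes an 8/8/8/8 color buffer, 24-bit depth, 8-bit stencil and no multisampling; in raw EGL terms that corresponds roughly to the eglChooseConfig request below (a sketch of what the name maps to, not deqp's implementation). If EGL_DEPTH_SIZE is left at its default of 0, a config without a depth buffer sorts first and gets picked, and every depth-dependent test fails.

    /* Requesting an EGLConfig equivalent to rgba8888d24s8ms0 for a
     * pbuffer surface: RGBA 8/8/8/8, depth 24, stencil 8, 0 samples. */
    #include <stdio.h>
    #include <EGL/egl.h>

    int main(void)
    {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        EGLint major, minor;
        if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, &major, &minor))
            return 1;

        static const EGLint attribs[] = {
            EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL_RED_SIZE,        8,
            EGL_GREEN_SIZE,      8,
            EGL_BLUE_SIZE,       8,
            EGL_ALPHA_SIZE,      8,
            EGL_DEPTH_SIZE,      24,   /* the piece that was missing */
            EGL_STENCIL_SIZE,    8,
            EGL_SAMPLES,         0,
            EGL_NONE
        };

        EGLConfig config;
        EGLint num_configs = 0;
        if (!eglChooseConfig(dpy, attribs, &config, 1, &num_configs) ||
            num_configs == 0) {
            fprintf(stderr, "no matching EGLConfig\n");
            return 1;
        }
        printf("got a config with a depth buffer\n");
        eglTerminate(dpy);
        return 0;
    }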
<enunes> ok, that would make sense, it would still be good to keep running the surfaceless one too
<anarsoul|2> I'm 99% sure that it was running without a depth buffer :)
<anarsoul|2> I had the same issue ~a year ago
<enunes> I'm trying it now, would be pretty great if this is the issue
<enunes> yep, looks like this is it, thanks for the tip
<anarsoul|2> np
<enunes> 66 fails (without the patches from today), this is far more encouraging
warpme_ has quit [Quit: Connection closed for inactivity]
<anarsoul|2> yeah, some of them are pretty challenging