ChanServ changed the topic of #lima to: Development channel for open source lima driver for ARM Mali4** GPUs - Kernel has landed in mainline, userspace driver is part of mesa - Logs at https://people.freedesktop.org/~cbrill/dri-log/index.php?channel=lima and https://freenode.irclog.whitequark.org/lima - Contact ARM for binary driver support!
_whitelogger has joined #lima
jrmuizel has joined #lima
jrmuizel has quit [Ping timeout: 248 seconds]
piggz has joined #lima
piggz has quit [Ping timeout: 244 seconds]
drod has quit [Remote host closed the connection]
jrmuizel has joined #lima
jrmuizel has quit [Remote host closed the connection]
dddddd has quit [Remote host closed the connection]
chewitt has quit [Quit: Adios!]
<bshah> Hi there, so I am trying lima git master with the patches from https://gitlab.freedesktop.org/icenowy/mesa/commits/lima-icenowy-exp and while git master as-is works, with these 3 patches applied plasmashell hangs, and while it is doing that, if LIMA_DEBUG=all is enabled, it constantly spams https://invent.kde.org/snippets/347 and at one point just OOMs
<bshah> cwabbott: MoeIcenowy mentioned that it is worth sending you this log ^^
<bshah> https://bshah.in/lima.dump is lima.dump with export LIMA_DEBUG=all
<bshah> The reason I was trying that branch is that I was getting "gpir: if nir_cf_node not support" errors, and it was suggested that flattening ifs via the hacks in the branch could help. https://bshah.in/lima-log-virtual-keyboard.txt is the log with LIMA_DEBUG=gpir for that failure.
mzki has quit [Ping timeout: 272 seconds]
mzki has joined #lima
<bshah> I also tried https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1472 but I get the same OOM
yuq825 has joined #lima
cwabbott has quit [Ping timeout: 244 seconds]
paulk-leonov-spa has quit [Quit: Leaving]
paulk-leonov has joined #lima
drod has joined #lima
drod has quit [Ping timeout: 248 seconds]
drod has joined #lima
drod has quit [Ping timeout: 268 seconds]
dddddd has joined #lima
drod has joined #lima
drod has quit [Ping timeout: 272 seconds]
jrmuizel has joined #lima
dri-logg1r has quit [Ping timeout: 246 seconds]
drod has joined #lima
dri-logger has joined #lima
yuq825 has quit [Remote host closed the connection]
hoijui has joined #lima
jrmuizel has quit [Remote host closed the connection]
jrmuizel has joined #lima
jrmuizel has quit [Remote host closed the connection]
jrmuizel has joined #lima
drod has quit [Remote host closed the connection]
jrmuizel has quit [Remote host closed the connection]
jrmuizel has joined #lima
jrmuizel has quit [Remote host closed the connection]
hoijui has quit [Quit: Leaving]
drod has joined #lima
<anarsoul> lima offline compiler doesn't work for fragment shaders :(
<anarsoul> fails with "lima_compiler: ../src/compiler/nir/nir.c:55: nir_shader_create: Assertion `si->stage == stage' failed."
jbrown has quit [Ping timeout: 250 seconds]
jbrown has joined #lima
<anarsoul> enunes: so far I like your improvement to make scheduler pipeline reg aware
<enunes> anarsoul: cool, I am just doing a last round of tests before pushing the tex projection patch again... then after that I can take the scheduler patch out of WIP
<anarsoul> cool
<anarsoul> I reworked const lowering and looks like there's no obvious regressions atm
<anarsoul> (I already fixed several :))
<anarsoul> I'd say const lowering and scheduling is now a lot simpler
<enunes> anarsoul: the only thing I still wonder about is also duplicating the load uniforms like we did before
<enunes> it could still be done in lowering
<enunes> maybe it is better to keep duplicating them
<anarsoul> you mean varyings?
<anarsoul> why do we need to duplicate uniforms?
<enunes> not duplicate uniforms, just the load uniforms
<anarsoul> yes, as long as it's loaded into a reg it should be fine?
<enunes> because usually they are used in many places, not duplicating them forces a register to be kept for them, which causes a lot of spilling
<anarsoul> I guess we can check whether it has a single successor and use pipeline reg in this case, otherwise just load it into a regular reg?
<enunes> it's more about not increasing register pressure and avoiding spilling
<anarsoul> enunes: but loading a uniform is a memory access and it's probably slower than using a reg
<enunes> yes, but spilling is probably worse
<anarsoul> let's keep it simple and get it functionally complete first
<anarsoul> and then add optimizations
<enunes> yeah I mean, the duplicating would also be to insert a full load->mov instruction for each, not really an optimization
<enunes> it's that with my change, basically each uniform takes away a register
<anarsoul> I'm not sure if it'll require walking successors/predecessors lists
<anarsoul> I'm trying to get away from it atm
<enunes> ok, we can also consider that
<enunes> if it makes your work easier and we can rethink that later
<enunes> I'll still try to get some data, I expect to see thousands of additional spills in a piglit run
<anarsoul> enunes: idea is to build successors/predecessors lists after lowering is done and no additional nodes will be created
<anarsoul> so technically all you've got in lowering is a list of ppir_nodes in the block, and each ppir_src has a pointer to the corresponding ppir_node (registers are an exception here)
<anarsoul> it's still possible to walk the tree upwards
<enunes> how do we handle the registers (non-ssa)?
<anarsoul> enunes: we'll add additional dependencies for read after write
Elpaulo has joined #lima
<anarsoul> well, write after read actually
<anarsoul> enunes: so idea is to lower everything and then build successors/predecessors lists
<enunes> I see, overall sounds better than the deps scattered around
Elpaulo has quit [Ping timeout: 246 seconds]
Elpaulo has joined #lima
Elpaulo has quit [Ping timeout: 248 seconds]
Elpaulo has joined #lima
Elpaulo has quit [Ping timeout: 272 seconds]
Elpaulo has joined #lima
<anarsoul> enunes: can you share your script to run piglit?
<enunes> looking at it now, maybe I should revisit that exclusion list for the specific tests
<anarsoul> thanks
<anarsoul> how long does it take for you to run it?
<anarsoul> it's been running for more than 40mins for me, and it's now at 2574/5507
<enunes> half an hour on pine64
<enunes> on nfs, with not debug-optimized binaries
<enunes> with debug-optimized*
<enunes> last couple of runs have something like time_elapsed 0:36:13.604168
<anarsoul> I see