yuq825 has joined #lima
dddddd has quit [Remote host closed the connection]
<anarsoul> yuq825: hi, looks like rellla found a misrendering issue with growing heap enabled
<yuq825> ok, I see, then we may consider disabling growheap by default for now
<anarsoul> yeah, probably, turn the 'nogrowheap' flag into 'growheap'?
<anarsoul> I guess we're missing a cache flush or TLB flush somewhere
yuq825 has quit [Quit: Leaving.]
_whitelogger has joined #lima
megi has quit [Ping timeout: 272 seconds]
Barada has joined #lima
yuq825 has joined #lima
yuq825 has quit [Remote host closed the connection]
buzzmarshall has quit [Remote host closed the connection]
<rellla> disabling growheap exposes other issues though, like kernel errors again
Barada has quit [Quit: Barada]
Barada has joined #lima
Barada has quit [Quit: Barada]
yuq825 has joined #lima
<rellla> to confirm it: disabling growheap makes dEQP-GLES2.functional.color_clear.* pass - without kernel errors.
<rellla> with growheap, 3 tests fail.
<rellla> and they fail in a flaky way, depending on which test subset is executed
<rellla> long_masked_rgb, for example, succeeds when executed within the complete color_clear.* set, but fails when run as a single test
<rellla> indeed smells like some missing flush or something related
<anarsoul> rellla: if you run tests in batches, the heap likely grows in a previous test
<anarsoul> so a subsequent test passes (it already has a large enough heap)
<anarsoul> rellla: I checked the kernel driver and it does an L2 cache flush and TLB invalidation
<anarsoul> yuq825: how do we make sure that the heap BO doesn't overlap with other BOs?
<yuq825> in GPU VM space?
z3ntu_ has joined #lima
<yuq825> we give it a max 16M size; the VM will reserve 16M of space for it from the beginning
<yuq825> then we append real memory on failure
<anarsoul> from what I can tell, lima does exactly the same as the mali kernel driver
<yuq825> how about adding another flush after the VM is updated and before resume?
<yuq825> right now we just flush after the fail and before the VM update
<anarsoul> but the blob doesn't do that?
<anarsoul> yuq825: I guess we can try allocating heap_size + 1 page, while specifying exactly the heap size to the GPU
<anarsoul> i.e. the GPU VM will have an extra page mapped
<anarsoul> maybe there's a GPU bug and 1 write may be lost if the page is not there
<anarsoul> I'll try implementing that tomorrow
mariogrip has joined #lima
Net147 has quit [Quit: Quit]
monstr has joined #lima
yuq825 has quit [Remote host closed the connection]
megi has joined #lima
Barada has joined #lima
dddddd has joined #lima
Elpaulo has quit [Quit: Elpaulo]
Barada has quit [Quit: Barada]
yann has quit [Ping timeout: 265 seconds]
dddddd has quit [Ping timeout: 260 seconds]
monstr has quit [Remote host closed the connection]
megi has quit [Ping timeout: 260 seconds]
buzzmarshall has joined #lima
megi has joined #lima
dddddd has joined #lima