ChanServ changed the topic of #lima to: Development channel for open source lima driver for ARM Mali4** GPUs - Kernel has landed in mainline, userspace driver is part of mesa - Logs at https://people.freedesktop.org/~cbrill/dri-log/index.php?channel=lima and https://freenode.irclog.whitequark.org/lima - Contact ARM for binary driver support!
megi has quit [Quit: WeeChat 2.7]
megi has joined #lima
buzzmarshall has quit [Quit: Leaving]
megi has quit [Ping timeout: 260 seconds]
Barada has joined #lima
Barada has quit [Ping timeout: 265 seconds]
Barada has joined #lima
Barada has quit [Ping timeout: 240 seconds]
Barada has joined #lima
chewitt has joined #lima
<anarsoul> I want to grep dumps for AUX0 to figure out when it sets bit 12 which is supposed to be "pixel kill" according to https://web.archive.org/web/20171026123213/http://limadriver.org/Render_State/
Barada has quit [Quit: Barada]
<rellla> anarsoul: it should, probably not with the most recent parser
<anarsoul> that's fine
<anarsoul> how large is it when unpacked?
<rellla> iirc ~38GB
<anarsoul> ouch
<anarsoul> you should have packed it into zip :)
<anarsoul> I've got only ~20gb free
<rellla> i can do a grep for you
<anarsoul> I'm not really sure what to look for besides AUX0 so I guess I'll use my external drive
<anarsoul> basically I want to figure out when the blob enables this "pixel kill"
<anarsoul> it should be some kind of optimization
<rellla> i'm also not sure if every rsw is rendered in there already. it should be the tar of the single dirs online
<anarsoul|c> Anyway it was a great idea to make these dumps, thanks again
Barada has joined #lima
dddddd has quit [Ping timeout: 260 seconds]
yuq825 has joined #lima
_whitelogger has joined #lima
deesix_ has joined #lima
deesix has quit [Ping timeout: 265 seconds]
deesix_ is now known as deesix
Barada has quit [Quit: Barada]
dddddd has joined #lima
<rellla> anarsoul: http://imkreisrum.de/deqp/deqp-complete-dumps_mali400-r6p2_on-allwinner-a20/AUX0.grep.tar.bz2 is a simple 'grep "AUX0" -r .' which is 95MB in size unpacked and which you may grep again and use a regex on it :)
Barada has joined #lima
megi has joined #lima
<enunes> anarsoul: so after staring at the liveness problem a bit more, I'm realizing that the algorithm should actually still work for multiple writes in different blocks
<enunes> the point is that when a register is written, say all components, the right thing to do is indeed kill it
<enunes> the values that were previously there are not valid anymore, so the register is indeed not live
<enunes> if only some components are overwritten, the register stays in the live set due to the mask tracking
Barada has quit [Quit: Barada]
<enunes> so, if there are regressions in the ubuntu touch stuff because of that, it's either some missing corner case or just another exposed bug
<enunes> I also ran deqp with loop unrolling disabled and there were no new issues with loops, multiple writes etc, so I think the implementation is ok
yuq825 has quit [Remote host closed the connection]
<anarsoul|c> enunes: I've sent an MR on Saturday that fixes the issue for Ubuntu touch
<enunes> anarsoul: well, I think we're good then, I'm confident now that there is no major design issue
<anarsoul|c> Not with liveness analysis :)
<anarsoul|c> See my MR that fixes texture lowering
<anarsoul|c> I don't really like the solution but I don't see another way to fix it without refactoring ppir
<anarsoul|c> Basically we have to track all successors, not only those from the current block
<anarsoul|c> Otherwise we don't have enough information to insert the move correctly for ldtex
<enunes> I saw that one but didn't review in depth
<enunes> is it something that was exposed with liveness or was it always there?
<enunes> I have to refresh my mind about how texture lowering works
<anarsoul|c> It was here since I dropped cloning ldtex to each block
<anarsoul|c> Since we can't guarantee that ldtex will be called from each invocation if we clone them
<enunes> I'll only be able to look at it later today; does this have any relation with the higher precision path for textures?
<MoeIcenowy> anarsoul|c: I checked AUX0 pixel kill bit set in the dump
<MoeIcenowy> many tests have it set
<anarsoul|c> @enunes: nope, that's for ldtex, since it always puts result into pipeline reg
<anarsoul|c> We need a mov if we want to store it into a real reg
<rellla> MoeIcenowy: yeah, most of the tests have it set - except the depth_stencil.stencil_ops.*
<anarsoul> I don't really see the pattern :(
<anarsoul> if I set it unconditionally glmark2 breaks
<MoeIcenowy> anarsoul: I think we should set it if no discard and no blend
<anarsoul> MoeIcenowy: maybe
<rellla> a quick look at AUX0 without regexing the dump: https://pastebin.com/raw/xq90DYNZ
<rellla> ah we have 0x80 already
<rellla> but 0x3000 is unknown!?
<anarsoul> rellla: 0x80 is set when we have textures
<anarsoul> 0x2000 is set when we have textures
<rellla> anarsoul, can you find out from logs, which tests caused the GP errors?
<anarsoul> nope :(
<anarsoul> rellla: doesn't really matter since they're supposed to be fixed by Qiang's tile heap MR
<rellla> i have the same in my dmesg log but i can't say if they are from an older run, or from the one i triggered this morning...
<anarsoul> well, once lava ci updates their kernel :)
<rellla> and in that one, heap patches were included...
<anarsoul> is it also state=4?
<rellla> ah, no.
<rellla> and the more i think about it, they are from the skipped tests, which i tested again a few days ago.
<rellla> so ... sorry for the noise :p
<anarsoul> :)
_whitelogger has joined #lima
xdarklight has quit [Ping timeout: 268 seconds]
Elpaulo1 has joined #lima
ecloud has quit [Read error: Connection reset by peer]
xdarklight has joined #lima
ecloud has joined #lima
deesix has quit [Ping timeout: 268 seconds]
Elpaulo has quit [Ping timeout: 268 seconds]
Elpaulo1 is now known as Elpaulo
Kwiboo has quit [Ping timeout: 268 seconds]
deesix has joined #lima
buzzmarshall has joined #lima
zombah has quit [Ping timeout: 268 seconds]
zombah has joined #lima
Elpaulo has quit [Read error: Connection reset by peer]