alyssa changed the topic of #panfrost to: Panfrost - FLOSS Mali Midgard & Bifrost - https://gitlab.freedesktop.org/panfrost - Logs https://freenode.irclog.whitequark.org/panfrost - <daniels> avoiding X is a huge feature
stikonas has quit [Remote host closed the connection]
_whitelogger has joined #panfrost
gcl has quit [Ping timeout: 258 seconds]
vstehle has quit [Ping timeout: 244 seconds]
gcl has joined #panfrost
jeez_ has joined #panfrost
jeez_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
MoeIcenowy has quit [Quit: ZNC 1.6.5+deb1+deb9u1 - http://znc.in]
MoeIcenowy has joined #panfrost
<tomeu> alyssa: what is the power consumption delta between weston with pixman and weston with panfrost, while idle?
<tomeu> alyssa: getting the kernel in time was definitely the priority, but now that it's in the rearview mirror, it seems smaller :p
<alyssa> tomeu: So does everything ;)
<alyssa> tomeu: My power measurements were really vague. But it's still something
<tomeu> was kind of thinking yesterday that weston for 19.1 and gnome-shell for 19.2 would have been cool, but we don't do stuff just because it's cool, right? :)
<tomeu> alyssa: how do you measure?
<tomeu> on chromebooks, we should be able to ask the EC
<alyssa> tomeu: Was looking at random /sys entries trying to mimic upower?
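A minimal sketch of the sysfs approach alyssa mentions, reading the battery's instantaneous draw the way upower does (the BAT0/power_now node and its microwatt unit are the common case, but they vary by machine):

    #include <stdio.h>

    int main(void)
    {
        /* power_now reports instantaneous draw in microwatts. */
        FILE *f = fopen("/sys/class/power_supply/BAT0/power_now", "r");
        long uw = 0;

        if (!f)
            return 1;
        if (fscanf(f, "%ld", &uw) != 1) {
            fclose(f);
            return 1;
        }
        fclose(f);
        printf("%.2f W\n", uw / 1e6);
        return 0;
    }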
<tomeu> was planning to do that eventually, maybe from igt
<alyssa> I dunno
<alyssa> Best would be to try on an SBC and physically measure the current draw
<tomeu> ezequielg: do you know if we can get the power draw stats that show up in the EC serial console from userspace?
<alyssa> But shrug
<tomeu> yeah, but I think that should be under CI, otherwise we'll regress all the time
<tomeu> but TBH, upstream is quite behind the various vendor downstreams in that regard
<tomeu> it's something I've been working to change, but several pieces aren't yet in place
<alyssa> tomeu: That's what they said about graphics! =P
<tomeu> and we are almost done now :D
<alyssa> Ehhh
<tomeu> if we could get those from the EC in the chromebooks we already have in our lava, we could get something useful relatively quickly
<tomeu> but atm kernelci doesn't do automated bisection of test suite regressions, so we need to do that first before extending coverage
* alyssa shrug
vstehle has joined #panfrost
MoeIcenowy has quit [Quit: ZNC 1.6.5+deb1+deb9u1 - http://znc.in]
MoeIcenowy has joined #panfrost
pH5 has quit [Quit: bye]
_whitelogger has joined #panfrost
pH5 has joined #panfrost
chewitt has joined #panfrost
raster has joined #panfrost
MoeIcenowy has quit [Quit: ZNC 1.6.5+deb1+deb9u1 - http://znc.in]
MoeIcenowy has joined #panfrost
rhyskidd has quit [Ping timeout: 258 seconds]
rhyskidd has joined #panfrost
raster has quit [Remote host closed the connection]
raster has joined #panfrost
afaerber has joined #panfrost
<tomeu> alyssa: btw, don't you already have a patch for y-flipping the sampling of the fb texture?
<tomeu> robher: do you have any plans regarding GROW_ON_GPF?
<robher> tomeu: I was going to do per fd address space first, but can do either.
<tomeu> hmm
<tomeu> my interest in growable is due to excessive memory usage adding instability to CI on the veyron jaq
<tomeu> but per fd address space is also very important
<tomeu> without growable, I'm not sure we can run the EGL deqp test suite, as it also allocates a lot of memory on its own
<robher> with growable, how do things shrink?
<tomeu> don't know if they do
<cwabbott> on the downstream kernel, they don't
<tomeu> hmm, what's the standard shrinker being used for in kbase then?
<cwabbott> no idea
<cwabbott> but iirc there's no way to tell the kernel to shrink a GROW_ON_GPF allocation
<cwabbott> and it wouldn't be a good idea unless you were certain the app wouldn't use that amount of tile heap again
<tomeu> maybe it's for JIT only?
<cwabbott> no, JIT is something completely different, to do with Vulkan
<cwabbott> GROW_ON_GPF has to do with the tiler heap, which is what tiler jobs append to and the fragment job reads from
<cwabbott> basically it's a list of triangles for each bin, together with a pointer to the fragment shader needed to render it
<cwabbott> *for each tile
<tomeu> and the kernel never knows when the GPU is done with a page in the tiler heap?
<cwabbott> well, it's done when the fragment job reading from it is done
<tomeu> the shrinker seems to be used only in low memory conditions
<tomeu> guess in general it wouldn't be a good idea to reduce the size of the tiler heap, but it would be better than reaching OOM
<cwabbott> but apps tend not to use a ton of triangles for only a little bit, so it's a good idea to reuse the already-allocated buffer for the next frame
<cwabbott> with the assumption that if the app needed a lot of space before, it'll probably need it again
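A rough sketch of the grow-on-fault behaviour being discussed, including the reuse heuristic cwabbott describes; the types and helpers here are illustrative assumptions, not the kbase or panfrost kernel API:

    /* On a GPU page fault inside a growable heap BO, commit more backing
     * pages instead of failing, and keep them committed for the next frame
     * on the assumption the app will need that much tile heap again.
     * heap_bo, commit_pages() and GROWTH_PAGES are hypothetical names. */
    static int heap_fault(struct heap_bo *bo, u64 fault_addr)
    {
        u64 offset = fault_addr - bo->gpu_va;
        u64 target;

        if (offset >= bo->max_size)
            return -EFAULT; /* a genuine fault, not heap growth */

        target = min(bo->max_size,
                     ALIGN(offset + 1, GROWTH_PAGES * PAGE_SIZE));
        return commit_pages(bo, bo->committed_size, target);
    }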
<tomeu> sure
<cwabbott> although I guess you're right, the kernel wouldn't know what pages the GPU currently is using
raster has quit [Remote host closed the connection]
<tomeu> cwabbott: well, it could know that no jobs are running, right?
<tomeu> or could the GPU store stuff in there to be used across jobs?
shadeslayer has joined #panfrost
<shadeslayer> Hey!
<cwabbott> tomeu: theoretically yes, although in practice I don't think we'd do that
<tomeu> guess it's quite advanced stuff
<cwabbott> the tile buffer is used across jobs, but not across job chains
<tomeu> cool
<tomeu> in any case, I think in principle userspace should be told well in advance that memory is getting tight, so it can shrink its caches
<shadeslayer> ah wait, I see, it's about the rzalloc call
<tomeu> shadeslayer: we have a lot of those leak TODOs, I would think they are all still valid
chewitt has quit [Quit: Zzz..]
<shadeslayer> tomeu: yes, I see why now :)
<tomeu> would be cool to get them fixed :)
<shadeslayer> tomeu: yeah, seems trivial to fix? just need to adjust the arguments to pass the context instead of null
<shadeslayer> at least that's what the v3d driver seems to do
<tomeu> hmm, I guess if we do that, then we'll be assured that the job will be destroyed when the context is destroyed
<tomeu> but we would like it to be released earlier, once the job struct isn't needed any more
chewitt has joined #panfrost
<shadeslayer> tomeu: wouldn't that be accomplished by calling panfrost_free_job? However, the way I understood it, calling rzalloc with the context parents the job to the context, ensuring it gets deleted on context deletion
<alyssa> tomeu: Ah, with FBOs, we render right-side-up, so no flipping needed when sampling
<tomeu> shadeslayer: yeah, I think what needs to happen is that panfrost_free_job is called once the panfrost_job isn't needed any more
<tomeu> shadeslayer: that it gets released on context destruction is nice, but not enough
<shadeslayer> tomeu: right, it needs both cases handled
<tomeu> alyssa: ah, you mean that you tested only with FBOs and not with scanout fbs?
<tomeu> shadeslayer: I think so
<shadeslayer> I need to figure out when the panfrost_job isn't needed anymore now :)
<tomeu> yeah, that's the hard part :)
<alyssa> tomeu: Correct. Under normal circumstances, you... don't sample from scanout; that doesn't make sense in OpenGL :)
<tomeu> shadeslayer: btw, valgrind works well here
<shadeslayer> tomeu: so, panfrost_free_job gets called correctly everywhere?
<tomeu> shadeslayer: wouldn't expect so, I haven't checked
<shadeslayer> tomeu: oh, you meant test it with valgrind?
<shadeslayer> because I misunderstood that as: valgrind reported no errors about this
<alyssa> shadeslayer: Theoretically, if panfrost_free_job is called reliably, the rzalloc doesn't leak and the comment is obsolete..
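For reference, both halves of what's being discussed, using Mesa's real ralloc API; the panfrost types and the header they live in are assumptions here:

    #include "util/ralloc.h"
    #include "pan_context.h" /* assumed home of the panfrost structs */

    struct panfrost_job *
    create_job(struct panfrost_context *ctx)
    {
        /* Parenting to ctx instead of NULL means ralloc frees the job
         * automatically when the context is destroyed. */
        return rzalloc(ctx, struct panfrost_job);
    }

    static void
    destroy_job(struct panfrost_job *job)
    {
        /* The explicit early free is still wanted once the job is done,
         * so memory doesn't accumulate for the context's lifetime. */
        ralloc_free(job);
    }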
<shadeslayer> alyssa: right, but still useful to parent it to the context just in case?
<shadeslayer> but I'll take a look at where panfrost_free_job is meant to be called
<shadeslayer> since I don't have access to a device right now
<alyssa> Probably, yeah. I think I tried that and something didn't work right and I threw in the towel :P
<shadeslayer> hehe
<shadeslayer> alyssa: for now I'm just going to have a look at what v3d does, see if panfrost at least does the same stuff
<alyssa> It ought to..?
chewitt has quit [Quit: Zzz..]
<shadeslayer> alyssa: yeah, seems like it, so http://sprunge.us/wlEauf?diff is enough?
<alyssa> I would hope, but I recall trying that and having issues. I could look in a bit..?
* alyssa just woke up
<shadeslayer> ok :)
<tomeu> shadeslayer: well, if you run valgrind with leak detection enabled, you will see we leak a lot :)
<shadeslayer> :)
<tomeu> so one could check that we stop leaking after fixing stuff
<shadeslayer> tomeu: I can't really run that at the moment since I don't have a device
<tomeu> but if you free it too soon, then you will probably see a use-after-free reported by valgrind
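Concretely, running a client under valgrind --leak-check=full should show the "definitely lost" blocks from these rzalloc calls disappear once the fix is in, whereas freeing too early shows up instead as invalid read/write errors at the use-after-free site.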
<tomeu> shadeslayer: we need to fix that :)
<shadeslayer> right
<tomeu> guess the context could be destroyed while a job is still running on the GPU?
<shadeslayer> yeah, I need to look at that case
<tomeu> yeah, so I guess looking at drivers such as freedreno and v3d could be a good idea, but maybe there's something specific to panfrost related to temp buffers, etc
<tomeu> alyssa: regarding y-flip, guess in panfrost_emit_for_draw we need to check if we are going to sample from a scanout buffer, and if so, set a negative stride and point to the end of the buffer?
<tomeu> shadeslayer: btw, I'm very glad you are looking at the leaks, they are a bit of a problem for the CI
<shadeslayer> tomeu: don't thank me yet :P
<tomeu> alyssa: swizzled_bitmaps has quite a bit of magic in it :)
<alyssa> tomeu: Quite a bit... but without more information I can't demagick it really ....
<alyssa> tomeu: Er, no, backwards
<alyssa> When we emit a texture descriptor, if it's sampling from a scanout buffer, it should have a negative stride
<alyssa> So in turn what you really want is to be honest and have the panfrost_bo have a negative stride (in the slice[0]) and its start updated accordingly
<alyssa> I.e. pull the logic in lines 100-107 of pan_mfbd.c out from that file (and _sfbd.c) and into resource_create when set for SCANOUT
<alyssa> (or DISPLAY_TARGET I guess)
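A sketch of what that refactor might look like, applied once at resource creation; the slice/stride field names follow the discussion, but the exact panfrost structures (and a signed stride field) are assumptions:

    /* Make a scanout resource honestly "bottom-up": point slice[0] at the
     * last row and store a negative row stride, so texture descriptors
     * emitted later can use it directly with no special-casing. */
    static void
    flip_scanout_slice(struct panfrost_slice *slice,
                       unsigned height, unsigned byte_stride)
    {
        slice->offset += (height - 1) * byte_stride;
        slice->stride = -(int)byte_stride; /* assumes stride is signed */
    }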
jolan has quit [Quit: leaving]
pH5 has quit [Quit: bye]
pH5 has joined #panfrost
pH5 has quit [Remote host closed the connection]
pH5 has joined #panfrost
pH5 has quit [Remote host closed the connection]
pH5 has joined #panfrost
raster has joined #panfrost
afaerber has quit [Quit: Leaving]
jolan has joined #panfrost
herbmilleriw has quit [Remote host closed the connection]
danvet has joined #panfrost
stikonas has joined #panfrost
<alyssa> tomeu: Scrolling in epiphany corresponds to high CPU in epiphany, WebkitWeb.., and Weston... That's gotta hurt :-(
jeez_ has joined #panfrost
adjtm has quit [Ping timeout: 246 seconds]
adjtm has joined #panfrost
jeez_ has quit [Remote host closed the connection]
indy_ has joined #panfrost
BenG83 has joined #panfrost
stikonas_ has joined #panfrost
stikonas has quit [Ping timeout: 252 seconds]
afaerber has joined #panfrost
herbmillerjr has quit [Ping timeout: 246 seconds]
mateo` has quit [Ping timeout: 258 seconds]
herbmillerjr has joined #panfrost
danvet has quit [Ping timeout: 258 seconds]
herbmillerjr has quit [Client Quit]
herbmillerjr has joined #panfrost
mateo` has joined #panfrost
rhyskidd has quit [Read error: Connection reset by peer]
rhyskidd has joined #panfrost
herbmillerjr has quit [Remote host closed the connection]
herbmillerjr has joined #panfrost
rhyskidd has quit [Ping timeout: 258 seconds]
stikonas_ has quit [Remote host closed the connection]
raster has quit [Remote host closed the connection]
rhyskidd has joined #panfrost