alyssa changed the topic of #panfrost to: Panfrost - FLOSS Mali Midgard & Bifrost - https://gitlab.freedesktop.org/panfrost - Logs https://freenode.irclog.whitequark.org/panfrost - <daniels> avoiding X is a huge feature
<alyssa_> Why does the entire kernel need to recompile gah
<alyssa_> Do I have ccache on this machine
<alyssa_> Guess I should try one of the faster linkers while I'm at it (does lld work with the kernel these days?)
shenghaoyang has joined #panfrost
stikonas has quit [Remote host closed the connection]
BenG83 has quit [Quit: Leaving]
<shenghaoyang> got the kms driver running on a C101PA (t860) with lots of DATA_INVALID_FAULTs :(
<alyssa_> shenghaoyang: Any chance you can give advice for the magic touch for getting mainline kernels to boot? :V
<shenghaoyang> alyssa_: I ripped the PKGBUILD from ArchLinuxARM for mainline linux and pointed it to the panfrost tree *major hack*
<alyssa_> shenghaoyang: Hey, me too! Still not booting
<alyssa_> Wow, this patch ought to work.. if only I had a way to boot it...
shenghaoyang has quit [Remote host closed the connection]
<alyssa_> In the meantime, I'll be working on AFBC userspace stuff so I'll be able to use the patch when it's ready
<HdkR> :D
<alyssa_> Should lead to a perf boost across the board
<HdkR> Which board?
<alyssa_> :P
<HdkR> :D
<HdkR> across the boards
<alyssa_> HdkR: Defining a static function in a header is fine right
<HdkR> sure
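A quick illustration of why that works: every .c file that includes the header gets its own private copy of the function, so nothing is multiply defined at link time. The header and function names below are made up for the example; marking it static inline (rather than just static) also keeps -Wunused-function quiet in files that never call it.

/* pan_util.h -- hypothetical header used only for this example */
#ifndef PAN_UTIL_H
#define PAN_UTIL_H

static inline unsigned
pan_align_pot(unsigned x, unsigned pot_alignment)
{
        /* pot_alignment must be a power of two */
        return (x + pot_alignment - 1) & ~(pot_alignment - 1);
}

#endif /* PAN_UTIL_H */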
<alyssa_> robher: Playing with the DRM driver for the first time... I'm very impressed. Nice work! D:
<alyssa_> Er :D
<alyssa_> Trying to figure out why -brefract wouldn't work
<alyssa_> When -bshadow is ok
<alyssa_> Okay, huh, I have a patch that fixes -brefract with the DRM driver, but we still get flooded with MMU faults for some reason?
<alyssa_> (set surf to NULL if is_scanout)
tomeu has joined #panfrost
<tomeu> o/
shenghaoyang_ has joined #panfrost
<tomeu> robher: have come up with a simple clear job for igt: https://gitlab.freedesktop.org/tomeu/igt-gpu-tools/tree/panfrost
<tomeu> but it faults right away when trying to read the first byte of the job descriptor
<tomeu> I'm out of time this week to look at it, but thought I would push in case it could be useful to you
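For reference, the submit path that clear job goes through looks roughly like the sketch below. The struct layout follows the in-progress panfrost_drm.h UABI and may still change; descriptor_gpu_va and bo_handle stand for the BO holding the hand-built job descriptor that currently faults, and the PANFROST_JD_REQ_FS requirement is an assumption about the job type.

#include <stdint.h>
#include <xf86drm.h>
#include "panfrost_drm.h"   /* in-progress UABI header */

static int submit_clear_job(int fd, uint64_t descriptor_gpu_va,
                            uint32_t bo_handle, uint32_t out_sync)
{
        struct drm_panfrost_submit submit = {
                .jc = descriptor_gpu_va,             /* GPU address of the first job descriptor */
                .bo_handles = (uintptr_t)&bo_handle, /* BOs the job needs resident */
                .bo_handle_count = 1,
                .out_sync = out_sync,
                .requirements = PANFROST_JD_REQ_FS,  /* assuming the clear is a fragment job */
        };

        return drmIoctl(fd, DRM_IOCTL_PANFROST_SUBMIT, &submit);
}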
MoeIcenowy has quit [Quit: ZNC 1.6.5+deb1+deb9u1 - http://znc.in]
MoeIcenowy has joined #panfrost
shenghaoyang_ has quit [Remote host closed the connection]
shenghaoyang_ has joined #panfrost
shenghaoyang_ has quit [Remote host closed the connection]
shenghaoyang_ has joined #panfrost
shenghaoyang_ has quit [Remote host closed the connection]
memeka has left #panfrost [#panfrost]
rhyskidd has joined #panfrost
griffinp has quit [Quit: ZNC - http://znc.in]
rhyskidd has quit [Quit: rhyskidd]
afaerber has quit [Quit: Leaving]
afaerber has joined #panfrost
jernej has joined #panfrost
<robher> alyssa_, tomeu: what have I missed for todo list? https://www.irccloud.com/pastebin/bEY34VU5/
<robher> 9. Testing on other Midgard variants. Only T860 is tested.
<anarsoul|2> frequency scaling? (I'm not sure if it's done in software on midgard/bifrost)
<robher> anarsoul|2: yes. devfreq and thermal support.
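Very roughly, the devfreq half of that item amounts to registering the GPU with the kernel's devfreq framework. A minimal sketch, assuming hypothetical panfrost_* names; the devfreq/OPP calls themselves are the standard API, and get_dev_status plus the thermal cooling device hookup are left out:

#include <linux/devfreq.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/pm_opp.h>

static int panfrost_devfreq_target(struct device *dev, unsigned long *freq,
                                   u32 flags)
{
        /* Round the requested frequency to a real OPP... */
        struct dev_pm_opp *opp = devfreq_recommended_opp(dev, freq, flags);

        if (IS_ERR(opp))
                return PTR_ERR(opp);
        dev_pm_opp_put(opp);

        /* ...then program the GPU clock to *freq (clk_set_rate(), omitted here). */
        return 0;
}

static struct devfreq_dev_profile panfrost_devfreq_profile = {
        .polling_ms = 50,
        .target = panfrost_devfreq_target,
        /* .get_dev_status would report busy/total time from the GPU counters */
};

static int panfrost_devfreq_init(struct device *dev)
{
        struct devfreq *devfreq = devm_devfreq_add_device(dev,
                        &panfrost_devfreq_profile, "simple_ondemand", NULL);

        return PTR_ERR_OR_ZERO(devfreq);
}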
<cwabbott> robher: you missed some equivalent for GROW_ON_GPF
<robher> cwabbott: #6
<cwabbott> that's different from just resizing the heap from userspace: userspace has no idea how much memory will be required, so the tiler might need more memory on-the-fly
<cwabbott> so for the purposes of the tiler heap, resizing from userspace won't suffice
BenG83 has joined #panfrost
pH5 has joined #panfrost
<robher> cwabbott: so tiler heap and userspace heap are 2 different things, but both are needed?
shenghaoyang_ has joined #panfrost
stikonas has joined #panfrost
shenghaoyang_ has quit [Remote host closed the connection]
afaerber has quit [Quit: Leaving]
<alyssa_> robher: Yup
<cwabbott> robher: if by userspace heap you just mean the part of the driver that allocates/reuses buffers, then yeah
<cwabbott> the tiler is responsible for taking in a list of triangles from the vertex pipeline and then returning for each bin/tile a list of triangles that may intersect it (and thus have to be rasterized for that tile)
<cwabbott> in order to do that, it builds up a data structure in a chunk of memory handed to it by the driver, called the tiler heap
afaerber has joined #panfrost
<cwabbott> the actual size of the data structure is going to depend on the number of triangles (known beforehand, unless geometry/tess shaders are in the mix) and which tiles each triangle overlaps (unknown beforehand)
<cwabbott> so, the way things currently work, the userspace driver allocates the tiler heap as GROW_ON_GPF, reserving some initial space based on its guesstimate of how much it'll take, and then when the tiler tries to write to something that's not mapped the kernel will allocate a new page on demand
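From the userspace driver's side, that flow might look like the sketch below. The PANFROST_BO_GROWABLE flag is an assumption (a stand-in for kbase's GROW_ON_GPF); the create-BO ioctl and struct come from the in-progress panfrost_drm.h and may still change.

#include <stdint.h>
#include <xf86drm.h>
#include "panfrost_drm.h"   /* in-progress UABI header */

#define TILER_HEAP_MAX_SIZE (128u << 20)   /* worst-case virtual reservation */

static int alloc_tiler_heap(int fd, uint32_t *handle, uint64_t *gpu_va)
{
        struct drm_panfrost_create_bo create = {
                .size  = TILER_HEAP_MAX_SIZE,
                .flags = PANFROST_BO_GROWABLE,   /* hypothetical "grow on GPU fault" flag */
        };

        if (drmIoctl(fd, DRM_IOCTL_PANFROST_CREATE_BO, &create))
                return -1;

        /* Only the first few pages are backed right now; the rest of the
         * 128MB is GPU virtual address space that gets pinned page by page
         * as the tiler faults on it. */
        *handle = create.handle;
        *gpu_va = create.offset;
        return 0;
}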
<robher> cwabbott: If you say driver, I hear kernel.
<cwabbott> robher: ah, I guess in the first sentence I meant the userspace driver
<robher> I think from the start we've been talking about the same thing.
<cwabbott> maybe?
<cwabbott> I hadn't heard about madvise() before, but I guess you could use it for MADV_WILLNEED to let userspace give its guesstimate for the size of the tiler heap
<cwabbott> I don't think the shrinker is really relevant here, since only the device itself knows when some memory isn't going to be used
<robher> cwabbott: Backing up. So currently, you can allocate a BO. You give it a size. It is an internal (to the kernel) implementation detail that we pin all of the memory at the start. Item 6 is to stop doing that and pin pages on faults. Then you can allocate 50G and the memory usage is whatever you touch.
<cwabbott> robher: right... although for most things you won't want/need that
<cwabbott> I don't even know if they support restarting after page faults for anything but the tiler
<robher> Now, maybe we'll want to hint to the kernel whether to pin all the pages or not.
<robher> Most drivers don't pin pages up front.
<cwabbott> do they? I was under the impression that restarting after a page fault is a relatively new HW feature that most don't support yet
<robher> They probably pin them on submit.
<cwabbott> yeah, right
<cwabbott> I guess that's kind of a separate question
<robher> If you don't have swap, then it doesn't help to not pin pages.
<cwabbott> right...
<cwabbott> and the cpu overhead for keeping track of everything kinda sucks
<cwabbott> I guess all I wanted to say is that the "only allocate on page fault" behavior is definitely needed for the tiler heap
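To make that concrete, the kernel side of "only allocate on page fault" could look roughly like this. shmem_read_mapping_page() is the real shmem helper; panfrost_lookup_bo_by_gpu_va(), gpu_va_of() and panfrost_mmu_map_page() are hypothetical placeholders for the driver's own GPU-VA tracking and MMU mapping code.

#include <linux/shmem_fs.h>
#include <drm/drm_gem.h>

struct panfrost_device;

/* Hypothetical driver helpers, named for illustration only. */
struct drm_gem_object *panfrost_lookup_bo_by_gpu_va(struct panfrost_device *pfdev, u64 gpu_va);
u64 gpu_va_of(struct drm_gem_object *obj);
int panfrost_mmu_map_page(struct panfrost_device *pfdev, u64 gpu_va, struct page *page);

static int panfrost_heap_fault(struct panfrost_device *pfdev, u64 fault_addr)
{
        struct drm_gem_object *obj = panfrost_lookup_bo_by_gpu_va(pfdev, fault_addr);
        struct page *page;
        pgoff_t index;

        if (!obj)
                return -ENOENT;   /* fault outside any growable BO: a real MMU fault */

        /* Pin (allocate) the backing page on demand... */
        index = (fault_addr - gpu_va_of(obj)) >> PAGE_SHIFT;
        page = shmem_read_mapping_page(obj->filp->f_mapping, index);
        if (IS_ERR(page))
                return PTR_ERR(page);

        /* ...map it at the faulting GPU address, then let the job retry. */
        return panfrost_mmu_map_page(pfdev, fault_addr & PAGE_MASK, page);
}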
<narmstrong> robher: sorry, but your edid looks severely broken... and it seems the dw-hdmi i2c code doesn’t handle nacks or timeouts :-/
stikonas has quit [Remote host closed the connection]
<robher> narmstrong: it wasn't broken in 5.0...
<robher> but is a no name panel...
<narmstrong> robher: yep, because we didn’t handle scdc; broken edid is a complex issue
<robher> there aren't exactly any small (11") brand name panels.
<narmstrong> Adding a quirk is the only acceptable solution
<narmstrong> I’ll propose a solution on the list to trigger a discussion on the issue
<narmstrong> robher: can you run an i2cdetect on the i2c bus used by the hdmi link? To confirm the scdc slave address is really not present
<narmstrong> Damn the scdc slave address is present, this is really weird
<narmstrong> 0x54
<narmstrong> Thanks for checking, I’ll try to propose a fix or quirk tomorrow
<robher> narmstrong: thanks for digging into it.
<alyssa_> robher: List + the stuff mentioned in here is pretty accurate, I think
<alyssa_> To recap the GROW_ON_GPF stuff:
<alyssa_> Most buffers we allocate as normal BOs. It's an implementation detail when those get pinned. Same as any other driver.
<alyssa_> A few "special" buffers are explicitly allocated as unpinned (except for the first N pages). Those will expand _while the job is running_, in response to a page fault. This can cause a stall, so it's used infrequently, but for GPU-internal structures it's needed.
Elpaulo has joined #panfrost
<robher> alyssa_: when do we unpin pages?
<alyssa_> robher: For normal buffers, when userspace calls free or whatever
<alyssa_> For special buffers, we don't.
<robher> alyssa_: so when we OOM, we do what? There's little point in pinning on demand if we never unpin.
* robher afk
<alyssa_> robher: No, there is a point, since most of the time, only a small fraction of the buffer will actually be accessed by the GPU (=pinned)
<alyssa_> We do a worst case allocation of 128MB, for example...
<alyssa_> If we're just running es2gears, maybe 1MB will ever get pinned (and then freed when the process dies)
<alyssa_> The other 127MB is reserved virtual memory but not backed by any physical pages
<alyssa_> If we're running STK, maybe 64MB will end up in use due to a really geometry heavy scene (and freed when we quit the game). Well, okay, the other 64MB is still free physically
<alyssa_> Memory usage is lowered from (worst_case * number of apps) to (sum{apps} (worst_case_for_given_app))
stikonas has joined #panfrost
stikonas has quit [Remote host closed the connection]
<alyssa_> Fun and games with AFBC!
<alyssa_> The goal is to have AFBC surfaces all the way down
<alyssa_> Quite a bit of work for that, but I'm already neck deep in it so
<alyssa_> First step is importing/exporting AFBC BOs
<alyssa_> I don't think this should be terribly hard..?
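A sketch of the import/export side: AFBC surfaces travel as ordinary dmabufs plus a DRM format modifier. DRM_FORMAT_MOD_ARM_AFBC() and the AFBC_FORMAT_MOD_* bits are the real drm_fourcc.h definitions; whether panfrost ends up advertising exactly this 16x16 sparse layout is an assumption.

#include <stdint.h>
#include <gbm.h>
#include <drm_fourcc.h>

struct gbm_bo *create_afbc_bo(struct gbm_device *gbm, uint32_t w, uint32_t h)
{
        uint64_t modifier = DRM_FORMAT_MOD_ARM_AFBC(AFBC_FORMAT_MOD_BLOCK_SIZE_16x16 |
                                                    AFBC_FORMAT_MOD_SPARSE);

        /* The consumer learns the AFBC layout from the modifier alone, so the
         * resulting BO can be exported/imported with the usual dmabuf fds. */
        return gbm_bo_create_with_modifiers(gbm, w, h, GBM_FORMAT_ARGB8888,
                                            &modifier, 1);
}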