<Putti>
Hi, would it make sense to implement CMA / contiguous memory allocation for lima? Currently it does non-contiguous allocations and uses an IOMMU. I would like to use the lima render node to allocate scanout-compatible buffers so I can display the renderings directly with the display controller, which seems to only support CMA memory.
<Putti>
In mesa3d I see some other drivers using kmsro in combination with render node allocation, but I'm not quite sure how that works, so if lima would benefit from a CMA allocator I would gladly implement it.
Putti has quit [Read error: Connection reset by peer]
Putti has joined #lima
<daniels>
damn, they left :(
<daniels>
Putti: oh no, you didn't leave, tab complete just didn't work ...
<daniels>
anyway, renderonly is the exact solution for what you're looking for
<daniels>
it doesn't make sense to force lima to use CMA when lima doesn't need to use CMA
<daniels>
for example, you don't need to allocate contiguous buffers for every FBO
<Putti>
daniels, sorry, my connection dropped
<daniels>
no problem! it's IRC's fault, not yours :)
<daniels>
anyway, the way renderonly/kmsro works is that your display controller driver implements dumb buffer allocation via CMA
<daniels>
when a buffer actually needs to be displayed on the display controller (e.g. via GBM), it allocates a buffer via the KMS driver, exports it as a dmabuf, then imports it into lima
<Putti>
daniels, right, so on the Android platform I guess we need to implement DRM magic auth for the card node, since it is already used by the hardware composer process
<daniels>
ah ...
<Putti>
I think one of the reasons for render nodes was so that DRM magic would not be needed
<Putti>
and now it would be needed again because of kmsro
<daniels>
that was for etnaviv on imx6 which has the exact same constraint as you
<dllud>
daniels, sorry to jump into the conversation, I've also been helping Putti on this.
<dllud>
The current master branch of gbm_gralloc from robherring (what we're using) seems to have most (all?) of those commits.
<dllud>
Did the etnaviv render node have the same restriction as lima, i.e. being unable to allocate contiguous memory?
<dllud>
Do you know, in particular, what their solution was?
<Putti>
daniels, as far as I can tell we have here 3 competing ways of doing memory allocation: 1) card + render node, meaning using the kmsro driver plus the GPU driver, 2) using ION gralloc for everything, which is really flexible as far as I can tell, 3) making the render node able to do both contiguous and non-contiguous allocations
<Putti>
daniels, and the issue I think we have is that we as a community have not agreed on which design route to take forward
<Putti>
daniels, maybe some folks already have a view on this but this information has not propagated down to the wider community
<Putti>
should we continue discussion on #dri-devel?
<daniels>
dllud: don't be sorry! thanks for the input :)
<daniels>
dllud: etnaviv rendernode on that platform is indeed using an IOMMU/SG, whereas the display requires contiguous
<daniels>
Putti: #dri-devel sounds good - it's definitely a long-standing problem ever since gralloc got made into a service with Treble
<daniels>
the SPURV tree isn't intended as a real upstream solution; it's a proof-of-concept that we (Collabora) periodically refresh in our own time away from client projects, with the goal of understanding the gap between Android and mainline, and having an accurate to-do list of problems to solve ... actually solving those problems is a slightly harder task and we're taken up with other work (like Panfrost) atm :)
<daniels>
also, to be honest, non-IOMMU IP blocks are not a very high priority for us atm
<dllud>
daniels, thanks, so it's the exact same situation. I guess it's worth asking Tomeu Vizoso what solution he went for.
<dllud>
Is he in the room? Do you know his nick?
<daniels>
he's on #dri-devel as tomeu, but I can tell you that whatever solution he went with is in the gbm_gralloc / drm_hwc / frameworks trees in those repos :P
<daniels>
(we work together)
<dllud>
Thanks, I'll dig in there then!
<daniels>
good luck! and yeah, you can ping him on #dri-devel and he'll answer, though he has just finished his day so you may have to wait until tomorrow
<Putti>
daniels, I started the discussion on #dri-devel now. Thanks a lot for confirming our ideas; now we know at least that we are on the right track.
<daniels>
no problem! good luck with it, it's cool to see more people using the upstream stack :)