alyssa changed the topic of #panfrost to: Panfrost - FLOSS Mali Midgard & Bifrost - Logs https://freenode.irclog.whitequark.org/panfrost - <daniels> avoiding X is a huge feature
rcf has quit [Ping timeout: 268 seconds]
rcf has joined #panfrost
vstehle has quit [Ping timeout: 245 seconds]
popolon has quit [Quit: WeeChat 2.5]
fysa has joined #panfrost
fysa has quit [Ping timeout: 258 seconds]
fysa has joined #panfrost
NeuroScr has quit [Quit: NeuroScr]
_whitelogger has joined #panfrost
vstehle has joined #panfrost
_whitelogger has joined #panfrost
warpme_ has joined #panfrost
warpme_ has quit [Quit: warpme_]
afaerber has quit [Quit: Leaving]
raster has joined #panfrost
warpme_ has joined #panfrost
warpme_ has quit [Quit: warpme_]
popolon has joined #panfrost
hexdump0815 has joined #panfrost
<hexdump0815> guillaume_g, rtp, Depau: as quite a few people here have started playing around with their now-abandoned snow chromebooks by google, i have put together my notes, patches, etc. for running linux nicely on that device at https://github.com/hexdump0815/linux-mainline-and-mali-on-samsung-snow-chromebook
<hexdump0815> some of this information is not that easy to discover, like permanently modifying the default boot options, chainloading mainline u-boot, etc.
<hexdump0815> i even got the mali blob working on it on mainline - maybe this is useful for comparing it against panfrost or for debugging and reverse engineering?
<hexdump0815> i also tried panfrost on it using the patches and dts posted here by guillaume_g some time ago, but i did not even get kmscube working (panfrost is at least detected fine at bootup)
<hexdump0815> guillaume_g: can you maybe put up the exact patches and dts files you used for it somewhere?
hexdump0815 has quit [Remote host closed the connection]
stikonas has joined #panfrost
warpme_ has joined #panfrost
warpme_ has quit [Quit: warpme_]
warpme_ has joined #panfrost
raster has quit [Remote host closed the connection]
rcf1 has quit [Read error: Connection reset by peer]
warpme_ has quit [Quit: warpme_]
<alyssa> bbrezillon: "Queued"?
warpme_ has joined #panfrost
<bbrezillon> alyssa: applied to the master branch
<bbrezillon> that's only patches 1 to 8, will send a v4 for the rest
warpme_ has quit [Quit: warpme_]
<alyssa> bbrezillon: Alrighty!
<bbrezillon> alyssa: I'm working on the dep graph thing
<bbrezillon> and I was wondering if sharing the tiler_heap/scratchpad was okay in that case (I guess it's not)
rcf has quit [*.net *.split]
rcf has joined #panfrost
<alyssa> bbrezillon: It's not
<alyssa> The tiler data structures (tiler heap, polygon lists) represent the geometry on-screen
<alyssa> The tiler is a fixed-function unit that ingests the screen-space vertices (from the vertex shader) and outputs those data structures, which are read in the FRAGMENT job.
<alyssa> Multiple render targets of the same FBO (panfrost_batch) all share geometry, so they share the tiler structures.
<alyssa> Distinct FBOs (distinct panfrost_batches) generally draw different geometry, or at least the same geometry at different resolutions (for mipmapping), so they cannot share the structures.
<alyssa> Hence, the tiler structures need to be allocated per-batch.
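A minimal C sketch of the per-batch allocation just described. Everything in it is an assumption for illustration: the struct layout, the panfrost_bo_create() helper, and the placeholder heap size are hypothetical stand-ins, not Panfrost's actual interfaces.

    #include <stddef.h>

    /* Hypothetical forward declarations -- stand-ins for the real driver types. */
    struct panfrost_device;
    struct panfrost_bo;
    struct panfrost_bo *panfrost_bo_create(struct panfrost_device *dev, size_t size);

    #define TILER_HEAP_SIZE (4096 * 1024) /* placeholder size, not a real tuning value */

    struct panfrost_batch {
        struct panfrost_bo *tiler_heap;   /* written by the fixed-function tiler */
        struct panfrost_bo *polygon_list; /* read back by the FRAGMENT job */
        struct panfrost_bo *scratchpad;   /* shader stack; discussed below */
    };

    /* One set of tiler structures per batch: distinct FBOs draw distinct
     * geometry (or the same geometry at different resolutions), so these
     * buffers must never be shared across batches. */
    void panfrost_batch_init_tiler(struct panfrost_device *dev,
                                   struct panfrost_batch *batch)
    {
        batch->tiler_heap   = panfrost_bo_create(dev, TILER_HEAP_SIZE);
        batch->polygon_list = panfrost_bo_create(dev, TILER_HEAP_SIZE);
    }

Multiple render targets of a single FBO would share the buffers allocated here; only separate batches allocate their own.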
<alyssa> As for `scratchpad`, that's what ends up being used as the stack for shaders.
<alyssa> For shaders that don't spill registers (or use temporary arrays, although at the moment we lower those to registers, which is slow..), the buffer is ignored.
<alyssa> For shaders that do use the stack, the long story short is: yes, allocate it per batch, as far as I know.
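Continuing the hypothetical sketch above (same assumed types and helper), the scratchpad gets the same per-batch treatment; the stack_size parameter is assumed to come from the compiler's spill accounting.

    /* Per-batch scratchpad used as the shader stack.  Shaders that never
     * spill registers simply ignore the buffer; once any shader in the
     * batch does spill, the buffer has to belong to that batch alone. */
    void panfrost_batch_init_scratchpad(struct panfrost_device *dev,
                                        struct panfrost_batch *batch,
                                        size_t stack_size)
    {
        batch->scratchpad = panfrost_bo_create(dev, stack_size);
    }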
stikonas has quit [Remote host closed the connection]
urjaman has quit [Read error: Connection reset by peer]
stikonas has joined #panfrost
NeuroScr has joined #panfrost
stikonas has quit [Quit: Konversation terminated!]
stikonas has joined #panfrost
popolon has quit [Ping timeout: 250 seconds]