warpme_ has quit [Quit: Connection closed for inactivity]
stikonas has quit [Ping timeout: 272 seconds]
atler has quit [Killed (barjavel.freenode.net (Nickname regained by services))]
atler has joined #panfrost
<alyssa>
icecream95: TIL too. Is that not normal?
<alyssa>
Possibly we're leaving nontrivial perf on the table...
<alyssa>
The extra blit will be cheap next to not thrashing the cache..
<alyssa>
(+ unaligned accesses breaking things + ..)
vstehle has quit [Ping timeout: 264 seconds]
<icecream95>
With AFBC, there's no problem with unaligned accesses, and RGBX needs one extra byte per 4x4 block IIRC
davidlt has joined #panfrost
<WoC>
Is there OpenCL for any of the Mali chips ?
<macc24>
perhaps
<HdkR>
With the blob yes
<HdkR>
They even supported the full desktop spec rather than the mobile specific one
<macc24>
wow, a blob that actually supports opengl and not gles
<HdkR>
eh, It's called "Embedded Profile"
<HdkR>
Still part of the same spec, just a bunch of things removed
<WoC>
HdkR, i noticed it's not supporting T860
<HdkR>
It's up to the device vendor usually to ship the CL driver. Which in the case of Android usually means it isn't shipped
<HdkR>
Google really hates OpenCL for whatever reason
<WoC>
Hmmm, my pixel 4 xl has OpenCL preinstalled
<WoC>
only apps that use it are benchmarks though...
<HdkR>
Huh, I wonder if they reversed that decision to try and force everyone to use Renderscript
<WoC>
i reckon OpenCL is supported by more platforms...
<HdkR>
Also Qualcomm only supports the Embedded profile last I knew
<HdkR>
Because supporting FP64 sucks
<icecream95>
WoC: There are WIP patches for OpenCL on Panfrost, but it's probably not very usable yet
<macc24>
HdkR: why
<HdkR>
macc24: Why does supporting FP64 suck?
<macc24>
yep
<HdkR>
Because you either need to do softfloat, or you need to burn a bunch of die space implementing it in hardware for something that "no one" will use
<HdkR>
macc24: Also see Nvidia V100/A100 die sizes being effectively at the aperture limit for the fab process they're made on. ~820mm^2
<macc24>
HdkR: radeon 6470m has no fp64 too
ids1024 has quit [Ping timeout: 258 seconds]
<HdkR>
yep
<macc24>
i ended up overriding gl version to 4.5 iirc on that laptop, still waiting to see anything bug out due to missing fp64
<HdkR>
"With current technology, a wafer is 10" in diameter (254mm), the aperture size limiting the die size is around 900 mm^2, so a chip is at most about 30x30mm."
<HdkR>
You'd most likely need to run some professional software, Like CAD
<macc24>
i remember blender working on it, didn't test freecad as it's not the best to use on touchpad, to put it mildly
ids1024 has joined #panfrost
vstehle has joined #panfrost
rando25892 has quit [Ping timeout: 264 seconds]
rando25892 has joined #panfrost
_whitelogger has joined #panfrost
rando25892 has left #panfrost [#panfrost]
chewitt has quit [Quit: Adios!]
cowsay has joined #panfrost
cowsay_ has quit [Ping timeout: 256 seconds]
warpme_ has joined #panfrost
afaerber has quit [Quit: Leaving]
alpernebbi has joined #panfrost
stikonas has joined #panfrost
afaerber has joined #panfrost
afaerber has quit [Remote host closed the connection]
afaerber has joined #panfrost
afaerber has quit [Remote host closed the connection]
afaerber has joined #panfrost
afaerber has quit [Remote host closed the connection]
afaerber has joined #panfrost
afaerber has quit [Remote host closed the connection]
afaerber has joined #panfrost
afaerber has quit [Remote host closed the connection]
afaerber has joined #panfrost
afaerber has quit [Remote host closed the connection]
karolherbst has joined #panfrost
afaerber has joined #panfrost
zkrx has quit [Ping timeout: 276 seconds]
zkrx has joined #panfrost
Net147_ has joined #panfrost
Net147 has quit [Ping timeout: 246 seconds]
* robmur01
is guilty of using FreeCAD on RK3399 for actual productive purposes...
<alyssa>
Productive? Blasphemy
<alyssa>
narmstrong: TBH I'm more concerned by these CMA related allocations leaking?
<alyssa>
as in, if I run glmark2-es2-drm repeatedly, closing each time, it will fail after enough runs
<alyssa>
despite memory use being constant
narmstrong has quit [Read error: Connection reset by peer]
warpme_ has quit [Read error: Connection reset by peer]
daniels has quit [Read error: Connection reset by peer]
austriancoder has quit [Read error: Connection reset by peer]
jstultz has quit [Ping timeout: 264 seconds]
robher has quit [Ping timeout: 265 seconds]
ric96 has quit [Ping timeout: 265 seconds]
kinkinkijkin has quit [Read error: Connection reset by peer]
lvrp16 has quit [Write error: Connection reset by peer]
narmstrong has joined #panfrost
daniels has joined #panfrost
ric96 has joined #panfrost
warpme_ has joined #panfrost
jstultz has joined #panfrost
kinkinkijkin has joined #panfrost
austriancoder has joined #panfrost
robher has joined #panfrost
<narmstrong>
alyssa: interesting, it should not leak :-)
<alyssa>
narmstrong: indeed...
<alyssa>
who do we blame? panfrost or aml? :p
<narmstrong>
alyssa: which kernel ?
<daniels>
icecream95: the GLSL bit doesn't even cover the full pain of CL FP
<narmstrong>
alyssa: the meson drm driver does 0 memory management
* daniels
shudders at the memory of trying to get a CL implementation compliant with the CTS conversions suite
<narmstrong>
alyssa: but maybe a callback or flag is missing somewhere so it’s worth checking
<narmstrong>
alyssa: is it easy to reproduce with like kmscube ?
<alyssa>
narmstrong: on an older 5.x
<alyssa>
kmscube, no idea
<alyssa>
glmark2-es2-drm -bterrain repeatedly, especially with a 4k monitor (but 1080p will also trigger after enough times)
<alyssa>
daniels: I didn't think the Bifrost CL blob advertises fp64?
<daniels>
alyssa: sensible
<daniels>
and IIRC the painful corner-case round/trunc/etc bits are implemented in hardware which makes the implementation infinitely less pain
<alyssa>
yep, fma/fadd/fmax/fmin all have round mode selection for 'free'
<robmur01>
ooh, G71... maybe I'll try banging my head against the FPGA one more time...
<alyssa>
robmur01: not actually tested
<alyssa>
i mean. the MR is tested.
<alyssa>
but tested against G52, so 'should' work
lvrp16 has joined #panfrost
<narmstrong>
alyssa: ok, with Mesa master ?
enunes has quit [Remote host closed the connection]
<alyssa>
narmstrong: yeah
atler has quit [Ping timeout: 256 seconds]
atler has joined #panfrost
rak-zero has quit [Ping timeout: 256 seconds]
jolan has quit [Quit: leaving]
alpernebbi has quit [Quit: alpernebbi]
jolan has joined #panfrost
enunes has joined #panfrost
lvrp16 has quit [Read error: Connection reset by peer]
austriancoder has quit [Read error: Connection reset by peer]
austriancoder has joined #panfrost
lvrp16 has joined #panfrost
Green has quit [Ping timeout: 264 seconds]
archetech has joined #panfrost
Green has joined #panfrost
<alyssa>
daniels: re earlier discussion, I found a fix for about 1/2 of the regression in cycle count on Midgard
<alyssa>
so what remains is line noise and the new opt comes out ahead
<alyssa>
unfortunately it looks like I have some regressions [for correctness] to deal with first
<daniels>
alyssa: ooh, nicely done!
<alyssa>
and the regression looks like something silly, rerunning cts
<jstultz>
robmur01: huh, thanks for the heads up, i'll try to read over the log here, have to look into it.