<bbrezillon>
couldn't push it yesterday because fdo was down
<bbrezillon>
icecream95: if that's more or less the same fix, you can add my R-b
<icecream95>
bbrezillon: Yep, it's that
<tomeu>
icecream95: guess CI passed because the OOB value read didn't change execution?
<bbrezillon>
and I guess you don't necessarily hit the issue if the compiler optimizes things so that the live pointer is never dereferenced when the index < tmp_count test returns false
<tomeu>
or that :)
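(A minimal sketch of the pattern under discussion; src, tmp_count, live and mark_live are illustrative names, not the actual Mesa code:)

    unsigned index = bi_get_node(src);  /* returns ~0 as a "not a node" sentinel */

    /* If the source reads live[index] unconditionally, the access is
     * out of bounds whenever index == ~0: */
    uint16_t val = live[index];
    if (index < tmp_count && val)
        mark_live(index);

    /* ...but an optimizer may sink the load below the short-circuiting
     * bounds test, in which case the OOB read never happens at run time: */
    if (index < tmp_count && live[index])
        mark_live(index);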
<tomeu>
to catch these issues, we would need to run deqp with asan
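(For reference, Mesa can be built against AddressSanitizer with meson's built-in sanitizer option; the deqp invocation below is only illustrative:)

    meson setup build -Db_sanitize=address
    ninja -C build
    ASAN_OPTIONS=detect_leaks=0 ./deqp-gles31 --deqp-caselist-file=...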
<bbrezillon>
I have a debug build here, and I suspect it's compiling with -O0 (not sure)
<amonakov>
sure, -O0 would expose that
<icecream95>
This was with a -O3 -flto build on GCC 10.2
<bbrezillon>
then it's probably what tomeu said
<icecream95>
On 64-bit platforms the offset will be ~8GB, which is unlikely to be mapped
<bbrezillon>
also true
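(The arithmetic, assuming 2-byte liveness entries: an index of ~0 is 0xffffffff, so the access lands at 0xffffffff * 2, roughly 8 GiB past the base of the allocation, which on a 64-bit address space is almost certainly unmapped and faults instead of silently reading garbage.)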
<bbrezillon>
I guess the only way to figure it out is to compile with the same compiler/options CI uses and compare the assembly :)
<icecream95>
That's made slightly harder by the CI artifacts not having any symbols. I guess I could use Ghidra…
<icecream95>
Ghidra is broken for me on Wayland, but it seems to work well enough with Xephyr
<icecream95>
I think that the compiler recognised that `live` will never be big enough for ~0 to be a valid index, so the `return ~0` from bi_get_node could be completely optimised out
<icecream95>
(live is allocated with N << 1 items, so cannot be bigger than 0xfffffffe)
<icecream95>
bbrezillon: tomeu: ^^
<icecream95>
(The size passed to mem_dup is 32-bit, so it will only go up to 0x7fffffff elements anyway)
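(A hedged reconstruction of that reasoning; the types and identifiers below are approximations, not the actual bifrost source:)

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool is_ssa;
        uint32_t value;
    } bi_index;   /* hypothetical stand-in for the real index type */

    static inline uint32_t
    bi_get_node(bi_index src)
    {
        return src.is_ssa ? src.value : ~0u;   /* ~0 = "not a node" */
    }

    /* live holds at most N << 1 entries and is duplicated with a 32-bit
     * size, so no valid index can ever equal 0xffffffff. If
     * live[bi_get_node(src)] is dereferenced unconditionally, an OOB
     * access would be undefined behaviour, so the compiler may assume
     * the index is in range and fold the ~0 path out of the inlined
     * bi_get_node() entirely. */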
<alyssa>
thanks marketing
<macc24>
is it the first time marketing is useful?
<narmstrong>
alyssa: running mesa 5cc0d61088 on kernel v5.11-rc7 (+boris fixes + Lukasz dvfs fixes), did 340 runs on 4k while diffing cma & vmalloc entries; I don't see any significant leaks
<narmstrong>
glmark2 master (47d586d783e998df9239f592984720466d8c59f1) with -bterrain on vim3 (g52) and ubuntu 20.04
<narmstrong>
marketing magic calculus: offering 25% greater energy efficiency and 20% better performance density than Mali-G71, leading to a 40% performance improvement
<alyssa>
narmstrong: so maybe I need to upgrade my kernel and it will magically go away?
<narmstrong>
alyssa: who knows, I can try to check with your kernel if you give me a tree & config
<alyssa>
i have long since rm'd that directory ;P
<alyssa>
I really need a better way to deal with kernel compiles/upgrades
<narmstrong>
yeah, git branches are really useful in these cases!
<macc24>
alyssa: automate kernel updates?
<alyssa>
It's more that between storage requirements and CPU speed, building linux on a chromebook isn't something I like to do more than once every 6 months ;p
<macc24>
if it's kevin then just run cadmium on it, auto kernel updates from release are coming Soon™ :D
<macc24>
and i definitely won't brick your machine
<narmstrong>
downsides of not having a powerful dev machine; can't you get access to a build server somewhere?
<alyssa>
narmstrong: Uhh
<macc24>
narmstrong: dealing with build servers is a pita imo
<narmstrong>
macc24: not really a "build server", but an ssh shell to a shared machine with plenty of disk & cpus
<narmstrong>
when dealing with huge source trees and frequent full rebuilds, it's a clear time saver
<macc24>
still, i had a build "server" right next to me and after using that setup for a while i prefer to have something locally
<narmstrong>
macc24: yeah, seems you don't use a chromebook as a dev machine
<alyssa>
hi
<macc24>
i DO use an arm64 chromebook as my dev machine
<narmstrong>
oh ok
<narmstrong>
building linux from clean must take ages, no?
<macc24>
and i still prefer building linux on it over building on my i7 laptop and copying everything over
<narmstrong>
oh ok, if it fits your workflow
<macc24>
40 minutes; mt8183 build times are actually similar to my xeon x5460 box
<narmstrong>
reasonable time!
<narmstrong>
alyssa: anyway, if you want me to test a specific kernel version, I can...
<bbrezillon>
alyssa: any reason the blend shader lowering used in panfrost wasn't moved to src/compiler/nir? I mean, if we set aside the fact that it's using PIPE_ definitions for the logicop stuff, it's pretty generic, and we could add an enum for that one to shader_enums.h
<alyssa>
bbrezillon: No other drivers were interested at the time
<alyssa>
v3d has its own blend lowering code that's tied up in v3d-isms
<alyssa>
everyone else has hardware
<bbrezillon>
(some context: I'm trying to move that code to src/panfrost/util, but I want it to be gallium-independent)
<bbrezillon>
that means I have to redefine the logicop enum, and I was wondering if I should make it panfrost-specific (PAN_LOGICOP_xxx) or try to define that one directly in shader_enums.h and move the lowering pass to src/compiler/nir/
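(For context, the panfrost-specific option would look roughly like this; the PAN_LOGICOP_* names are hypothetical, with values mirroring the existing PIPE_LOGICOP_* / GL logic-op ordering:)

    enum pan_logicop {
        PAN_LOGICOP_CLEAR = 0,
        PAN_LOGICOP_NOR,
        PAN_LOGICOP_AND_INVERTED,
        PAN_LOGICOP_COPY_INVERTED,
        PAN_LOGICOP_AND_REVERSE,
        PAN_LOGICOP_INVERT,
        PAN_LOGICOP_XOR,
        PAN_LOGICOP_NAND,
        PAN_LOGICOP_AND,
        PAN_LOGICOP_EQUIV,
        PAN_LOGICOP_NOOP,
        PAN_LOGICOP_OR_INVERTED,
        PAN_LOGICOP_COPY,
        PAN_LOGICOP_OR_REVERSE,
        PAN_LOGICOP_OR,
        PAN_LOGICOP_SET,
    };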
<alyssa>
umm neither
<alyssa>
?
<alyssa>
PIPE_* includes are used in Vulkan drivers; it's just the gallium library itself that's off limits