alyssa changed the topic of #panfrost to: Panfrost - FLOSS Mali Midgard & Bifrost - - Logs - <daniels> avoiding X is a huge feature
mateo` has quit [Ping timeout: 250 seconds]
mateo` has joined #panfrost
stikonas has quit [Remote host closed the connection]
vstehle has quit [Ping timeout: 245 seconds]
<alyssa----> Dang.
<alyssa----> Lima is a lot farther ahead than I thought :o
<anarsoul> alyssa----: lima still lacks a lot, e.g. we don't have control flow yet in either gp or pp
<anarsoul> and we don't have support for depth/stencil fbo
<alyssa----> anarsoul: I know, it's just... I'm slightly demotivated :(
<anarsoul> don't be
<alyssa----> You have onionbreading register-aware schedulers
<anarsoul> take a look at utgard ISA and be happy :)
<alyssa----> And RA supporting spilling
<alyssa----> And ppir RA supporting combining-components-into-single-vec4-register
<alyssa----> And partial update
<alyssa----> And tile loadbacks
<alyssa----> And the code cle--- okay, I do think the Panfrost cmdstream stuff is cleaner than Lima's, sorry :P But the compilers are suuuuper legible and good
<anarsoul> yuq did a great job, yeah
<alyssa----> And I didn't :(
<anarsoul> you certainly did
* anarsoul looks at cf implementation in panfrost with envy
<alyssa----> anarsoul: The cf implementation is deeeeeply broken
<alyssa----> As in, causes irrecoverable system hangs in dEQP
<anarsoul> I'm not familiar with the midgard ISA but I have a suspicion that we can't do something similar in PP since the ISA is a pretty weird VLIW
<alyssa----> Hm?
<anarsoul> you just asked about ppir_codegen_field_size
<alyssa----> anarsoul: Midgard evolved from PP
<anarsoul> these are bit sizes of different parts of instruction
<alyssa----> (What is this responding to?)
<anarsoul> your reply to "[PATCH v2 6/8] gallium: add lima driver"
<anarsoul> :)
<alyssa----> Oh!
<alyssa----> anarsoul: Anyways, I was just saying "add a comment above this, these are a bunch of magic numbers"
<anarsoul> agree
<alyssa----> Our dEQP results are suuuuper inconsistent across sections
<alyssa----> Like some sections we're failing just about every test
<alyssa----> Others we're passing just about every test
<alyssa----> What??
<HdkR> alyssa----: That's pretty common. A lot of the time you fix one bug in a section and all of them get fixed
<alyssa----> Hope so
<HdkR> What's more annoying is when you have partial failure in a section, since it is probably an edge case :P
<alyssa----> HdkR: `texture` and `fbo` are major offenders, no surprise
<alyssa----> Running the fragment_ops group for the first time
<alyssa----> At least depth, stencil, and I think depth_stencil are at perfect scores
<alyssa----> Ohp, wait, first fail
<alyssa----> Doing well regardless :p
<alyssa----> Blend section is reeally bad since blend shaders aren't implemented on mainline
<alyssa----> There's just so many tests :D
<alyssa----> They are absolutely finding real bugs, though :)
<HdkR> mmhm
<alyssa----> sin/cos/tan all work good :D
<alyssa----> Inverse trig.... not so good >_<
<alyssa----> Needed to fix something, but now shaders.operator.exponential.*
<alyssa----> is at 112/112 passing :)
<alyssa----> fsign is broken, but the other common functions are okay
<alyssa----> (by okay I mean all 100% passing, except fsign)
<alyssa----> shaders.operator.geometric.* is 100%, 116/116 :)
<alyssa----> shaders.operator.float_compare.* is 100%, 108/108 :)
<alyssa----> shaders.operator.int_compare.* is 100%, 108/108 :)
<alyssa----> shaders.operator.bool_compare.* is 100%, 30/30 :)
<alyssa----> shaders.const* is 543/543
<alyssa----> Admittedly NIR is doing the heavy lifting there
<alyssa----> functional.shaders.discard.static_loop_texture is causing a lovely system hang
<alyssa----> That's my cue to sign off for the night :P
<alyssa----> Patches on the list, nini
vstehle has joined #panfrost
<daniels> pr\o/gress!
* tomeu has the feeling that what takes alyssa---- a weekend would take anybody else weeks
<tomeu> runtime PM working nicely :)
* tomeu tries hard to resist the urge to implement DVFS
<bbrezillon> daniels, alyssa----: I'll update the copyright
pH5 has joined #panfrost
<tomeu> bbrezillon: do you have any ideas on how we could discover this path at runtime?
<bbrezillon> tomeu: /sys/dev/char/ ?
<bbrezillon> if you know the /dev/dri/cardX you opened, you should be able to figure it out
<bbrezillon> tomeu: just use /sys/dev/char/226:${X}/device/power where X is the number you have in /dev/dri/{card,renderD}X
<bbrezillon> alyssa----: just to make sure, is it useful that I support the old way of dumping counters (->{enable,dump}_counters())?
<bbrezillon> I can do it, it's just that it involves yet another level of conversion (new layout -> old layout)
MoeIcenowy has quit [Quit: ZNC 1.6.5+deb1+deb9u1 -]
MoeIcenowy has joined #panfrost
<tomeu> bbrezillon: nice, thanks!
<tomeu> bbrezillon: btw, is there any perf counter that would give me the utilization?
<bbrezillon> tomeu: you want to use the metric to control devfreq?
<tomeu> bbrezillon: was thinking of it, but I think it would be better if it was integrated in drm-sched
<tomeu> well, not sure of that actually, as maybe there's hw-specific knowledge needed to derive a single utilization figure from the several cores
<tomeu> as not all cores are the same
<bbrezillon> tomeu: ^
afaerber has quit [Quit: Leaving]
afaerber has joined #panfrost
<alyssa----> tomeu: *blushes*
<alyssa----> I don't work on the Sabbath, you know.. gotta adjust for that :P
<tomeu> amazing, you even have time to rest!
<alyssa----> =P
<alyssa----> That's why my productivity is so much better, silly! Not burned out 24/7
<alyssa----> At least I hope that's it..
<alyssa----> bbrezillon: Don't worry about that. The dump_counters stuff was an extremely quick and dirty hack so I could just get the data. While it's essential that the counters are _there_, when I'm not looking at performance I'm not using them and neither is anyone else.
<alyssa----> So expose whatever interface is most convenient and feel free to throw out the old code :)
<alyssa----> tomeu: No idea what runtime PM is, but congrats! :)
<tomeu> should be handy for when we start to go around the world with panfrost-powered laptops
<tomeu> DVFS also works now
<alyssa----> Awesome!!
<urjaman> runtime PM is suspending, right?
<tomeu> yep, automatically on idle
<tomeu> haven't checked s2ram yet
<urjaman> ah i thought it was basically only suspend to ram
<tomeu> IME, s2ram is always broken in mainline on ARM, and I don't want to have to debug it for the Nth time only to have it broken in the next release
<tomeu> but once this is in mainline, I will want to have igt tests that check that things keep working across suspends, and have those tests in kernelci
<tomeu> then hopefully it will never ever be broken again in a release :)
<tomeu> actually, it works just fine on the veyron
<daniels> igt already has suspend tests, which use the RTC to schedule a wakeup
<tomeu> yeah, but if we want to test that panfrost works across resumes, we need to do the same in our panfrost-specific tests
<daniels> sure
<tomeu> but it's cool to have helpers for that
<tomeu> it made it super easy to write tests for runtime PM
<urjaman> my current stats on the veyron (with 5.1-rc1 plus my patches, but they don't touch suspend things... i actually didn't get to testing rc2 much yet tho i'm running it...) are that suspend works roughly 4/5 of the time :P
<urjaman> once the brcmfmac died... which is great since my previous experience was that it always died
<tomeu> hehe, sounds about right :)
<tomeu> guess we could extend this test to check that all the devices that were bound before suspend are still there afterwards:
<tomeu> right now it seems to be always passing on the jaq:
<urjaman> also in news, my 4GB C201 has u-boot that boots arch linux from eMMC... it actually boots decently fast and without any extra hassle (poking keys or getting blinded by a white splash)
<tomeu> I was surprised by how fast the veyron boots from emmc
<tomeu> building mesa in it isn't that painful, either
<tomeu> I should get my hands on a kevin at some point
<urjaman> oh also i painted the C201's blue parts black :P
<urjaman> oh "neat"
<urjaman> suspending and resuming again can apparently revive the brcmfmac if it dies
MoeIcenowy has quit [Quit: ZNC 1.6.5+deb1+deb9u1 -]
MoeIcenowy has joined #panfrost
pH5 has quit [Quit: bye]
stikonas has joined #panfrost
afaerber has quit [Quit: Leaving]
pH5 has joined #panfrost
tlwoerner has quit [Quit: Leaving]
tlwoerner has joined #panfrost
stikonas has quit [Remote host closed the connection]
afaerber has joined #panfrost
<alyssa----> ...How does suspend work on RK3399 anyway?
<alyssa----> Oh, `systemctl suspend` I guess?
<alyssa----> Impressive
<anarsoul> does it work?
<alyssa----> anarsoul: On the CrOS kernel, yes, it works well
<alyssa----> On 4.20-with-a-lot-of-stuff-broken, yes, kinda
<alyssa----> anarsoul: Works great on my mainline system that I'm trying to get everything working on
<alyssa----> Oh, and lid closing works too, bonus!
<mmind00> alyssa----: one of the arm devs (maz in #linux-rockchip) does use kevin+mainline as his daily work machine .. so he generally notices suspend issues early
<alyssa----> Oh, that's awesome!