alyssa changed the topic of #panfrost to: Panfrost - FLOSS Mali Midgard & Bifrost - https://gitlab.freedesktop.org/panfrost - Logs https://freenode.irclog.whitequark.org/panfrost - <daniels> avoiding X is a huge feature
vstehle has quit [Ping timeout: 248 seconds]
LinguinePenguiny has quit [Ping timeout: 245 seconds]
TheCycoONE has quit [Ping timeout: 258 seconds]
TheCycoONE has joined #panfrost
stikonas_ has quit [Remote host closed the connection]
maciejjo has quit [Ping timeout: 245 seconds]
_whitelogger has joined #panfrost
vstehle has joined #panfrost
Elpaulo has quit [Read error: Connection reset by peer]
Elpaulo has joined #panfrost
pH5 has joined #panfrost
somy has joined #panfrost
maciejjo has joined #panfrost
fysa has joined #panfrost
maciejjo has quit [Quit: leaving]
fysa has quit [Ping timeout: 272 seconds]
fysa has joined #panfrost
maciejjo has joined #panfrost
<tomeu> alyssa: CI run from current master: https://gitlab.freedesktop.org/tomeu/mesa/-/jobs/342356
<tomeu> that's 91 tests fixed, but still 268 regressions compared with last week
<tomeu> guess the patches in the ml fix more regressions?
stikonas has joined #panfrost
<narmstrong> tomeu: the Khadas VIM2 should arrive shortly on kci, but the "Nexbox A1" is already alive https://kernelci.org/boot/meson-gxm-nexbox-a1/ what is needed to run basic panfrost tests on it?
<tomeu> narmstrong: regarding the igt tests that target the kernel UABI, gtucker is working on some dependencies
<tomeu> narmstrong: I think he has a new rootfs that contains a version of igt that already has the panfrost tests
<tomeu> not sure if that's in production yet
<tomeu> then we need to change the lava job definition to actually run them
stikonas has quit [Remote host closed the connection]
<narmstrong> tomeu: I have a doubt, panfrost doesn't seem to be selected from the arm64 defconfig
<tomeu> ah yes, I sent patches for that
<tomeu> that's also a dependency
<narmstrong> cool, great
<tomeu> guess we can ping around so they get into linux-next sooner
<narmstrong> Do you know if lima has been enabled as well?
<tomeu> I don't remember seeing it in the defconfigs
<narmstrong> You should have CCed linux-amlogic, so we could have taken it
<narmstrong> I will send the lima patches
<tomeu> yeah, I don't know how the changes to defconfigs flow these days
<tomeu> regarding deqp tests, https://gitlab.freedesktop.org/mesa/mesa/blob/master/src/gallium/drivers/panfrost/ci/arm64.config needs to be changed to also include your hw
<narmstrong> each armsoc maintainer takes it, or you can send To arm@kernel.org (not in Cc)
<tomeu> but take care not to include too much or we'll hit limits in the chromebooks when tftping the FIT
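The defconfig additions tomeu and narmstrong discuss would look something like the fragment below. This is an illustrative sketch only: CONFIG_DRM_PANFROST is the actual driver option, and the Amlogic platform symbols shown are standard arm64 options, but the exact set needed for a given board differs, and per tomeu's warning the fragment should stay minimal so the resulting FIT image stays within the chromebooks' tftp limits.

```
# Panfrost itself
CONFIG_DRM_PANFROST=y
# Amlogic platform glue (illustrative; the exact set depends on the board)
CONFIG_ARCH_MESON=y
CONFIG_DRM_MESON=y
CONFIG_SERIAL_MESON=y
CONFIG_SERIAL_MESON_CONSOLE=y
```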
<tomeu> oh, I see
<tomeu> narmstrong: do I need to do anything else regarding the defconfigs, or can you take care of it?
<narmstrong> tomeu: I'll reply to help them getting applied
<tomeu> thanks a bunch
<tomeu> back to deqp, we also need to change the parts of https://gitlab.freedesktop.org/mesa/mesa/blob/master/src/gallium/drivers/panfrost/ci/gitlab-ci.yml that generate the lava jobs to also generate for your device-type
<tomeu> and then submit it to your lab if the appropriate token is set in the repo
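A generated LAVA job for a new device-type would roughly follow the usual deploy/boot/test shape sketched below. Everything here is hypothetical: the real template is produced by the gitlab-ci.yml tomeu links, the artifact URL and test-definition path are placeholders, and only `meson-gxm-khadas-vim2` style device-type names follow kernelci conventions.

```yaml
# Illustrative sketch only -- the real jobs are generated from the
# templates in src/gallium/drivers/panfrost/ci and differ in detail.
device_type: meson-gxm-khadas-vim2
job_name: panfrost-deqp
timeouts:
  job:
    minutes: 40
actions:
  - deploy:
      to: tftp
      kernel:
        url: https://example.org/artifacts/Image   # hypothetical placeholder
  - boot:
      method: u-boot
  - test:
      definitions:
        - repository: https://gitlab.freedesktop.org/mesa/mesa
          path: ci/deqp-runner.yml                  # hypothetical path
          name: deqp
```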
<tomeu> narmstrong: though if it has the same GPU as the kevin, then there's probably not much to be gained by running the deqp tests
<narmstrong> tomeu: it has a T820, AFAIK no other soc has this gpu
<tomeu> ah true, the kevin has a T860
<tomeu> cool then
<tomeu> gtucker: what's still missing so we can run panfrost tests in igt in kernelci?
<gtucker> well first I need to get the buster-igt rootfs all done and rolled out in production
<gtucker> which is nearly there, just that I think it can be reduced in size a bit, might do that as a follow-up optimization
<gtucker> then we need to add the test cases to run for panfrost, enable them to run on all the relevant platforms, and define a few things about the email reports
<gtucker> like, should it be one big report with everything-igt, or one for panfrost while keeping the current one for general kms tests
<gtucker> and the list of recipients
<tomeu> narmstrong: if you could help with some of the above, it would be awesome
<gtucker> also atm we're not building drm-tip in kernelci
<narmstrong> tomeu: I'd like to, but no idea how
<gtucker> it's something we should start doing to have better igt coverage
<tomeu> narmstrong: maybe Corentin?
<tomeu> gtucker: ah, that's a good one
<tomeu> would be awesome if we could detect regressions before they reach linux-next
<narmstrong> I'll ask him to join
<tomeu> saw in github that he has some kernelci-fu :)
montjoie has joined #panfrost
<gtucker> drm-tip used to be built, but people complained about the big volume of tests it generated and also the high level of boot breakage, and apparently drm folks were not really reading the kernelci reports anyway
<narmstrong> here he is, our lab master :-)
<gtucker> are there any Mali devices in lab-baylibre?
<narmstrong> gtucker: a lot !
<gtucker> cool
<tomeu> gtucker: do you remember why it generated a lot of tests?
<gtucker> tomeu: well yes, because the branch was being updated a lot
<narmstrong> gtucker: if these tests can be enabled on a limited number of devices, it will lower the reports, no ?
<gtucker> narmstrong: yes but boot tests are for all devices
<gtucker> one missing feature is a parameter to set the frequency at which we should be sampling branches
<gtucker> at the moment, it's every hour for everything
<tomeu> gtucker: could we somehow cap the frequency to daily or so?
<tomeu> ah, that
<gtucker> tomeu: that's what I mean, yes
<gtucker> so at the moment, if drm-tip gets updated 10 times in a day, you get 2000+ builds and 1000+ boot tests and...
<gtucker> also I think we should build a subset of configs on drm-tip
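The per-branch sampling cap gtucker describes (hourly today, daily for a fast-moving branch like drm-tip) boils down to a simple rate check. A minimal sketch, with hypothetical names that don't reflect kernelci's actual code:

```c
#include <stdbool.h>
#include <time.h>

/* Hypothetical sketch: decide whether to sample a branch for new builds.
 * last_sampled is when the branch was last picked up; min_interval is the
 * per-branch cap in seconds (e.g. 86400 for a once-a-day branch, so ten
 * pushes in a day trigger at most one build cycle instead of ten). */
static bool should_sample(time_t now, time_t last_sampled, time_t min_interval)
{
    return now - last_sampled >= min_interval;
}
```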
<tomeu> sounds like a good compromise, and also something that is probably needed anyway for kernelci to keep growing
<gtucker> yes
<gtucker> it's one of the remaining things that need to be done before we have actually functional kernelci (imo)
<gtucker> in any case, we can trial things out easily on staging.kernelci.org
<tomeu> ok, guess we can make a list of tasks and see who can pick what
<narmstrong> it would be a good start !
<gtucker> yes if we want to share the work between Collabora and BayLibre (and others), we could have a list of things as Github issues
<gtucker> so far I've just started an internal backlog
<gtucker> last week :p
<narmstrong> :-p
<gtucker> one thing to keep in mind is the end result: how to organise test cases across all igt runs, and modalities for the email reports
<gtucker> for example, I believe it would make sense to have a dedicated report with the panfrost results
<gtucker> so people who care about panfrost don't have to dig in the report to find them amongst say, i915 things and kms tests for display drivers...
<narmstrong> yep it's preferable
<gtucker> ok
<tomeu> well, as a maintainer, I only care about the regressions report
<tomeu> which I guess is already specific enough?
<gtucker> tomeu: I'm talking about regressions reports
<gtucker> but if you're a panfrost maintainer, you don't necessarily care about i915 or display driver regressions
<tomeu> so it's now being sent too widely?
<tomeu> yeah, but why would I get a regression about i915?
<gtucker> we're not sending them as yet, panfrost tests don't exist in kernelci
<gtucker> that's what I'm talking about :)
<tomeu> was referring to the algorithm for selecting recipients for a regression report
<gtucker> ah, you mean bisection reports
<gtucker> ok so the way it works is:
<tomeu> yeah, sorry
<gtucker> first there are some builds, and you can get a builds report
<gtucker> then there are some boots, typically run on _every_ platform, and you can get a boots report
<gtucker> then you can run any other kind of tests, and for panfrost that's still to be defined but typically the panfrost cases from igt on Mali-powered platforms
<gtucker> and you get an email report with information about what went wrong in the tests
<tomeu> yeah, I know not everybody agrees with this, but for me the perfect UI is what 0-day has had for a long time
<tomeu> you get an email if you break something, and if you have been related to the patch that broke it, also get an email
<gtucker> every test case that used to pass with an earlier revision and starts failing on the same kernel branch results in a regression
<gtucker> and each regression is reported in the email report
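The regression rule gtucker states (a test case that passed on an earlier revision of the same branch and now fails) can be sketched in a few lines. Names and the pairwise comparison are illustrative only, not kernelci's actual schema:

```c
#include <stdbool.h>
#include <string.h>

/* Simplified sketch of the rule above: passed before, fails now, on the
 * same branch. Assumes prev[] and cur[] hold the same tests in the same
 * order; the real system matches results by test case identity. */
struct result {
    const char *test;
    bool passed;
};

static bool is_regression(const struct result *prev, const struct result *cur)
{
    return strcmp(prev->test, cur->test) == 0 && prev->passed && !cur->passed;
}

static int count_regressions(const struct result *prev,
                             const struct result *cur, int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        if (is_regression(&prev[i], &cur[i]))
            count++;
    return count;
}
```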
<tomeu> the other reports, dashboards, etc are undoubtedly useful to other people
<gtucker> test reporting in kernelci is still in the early days, it's mostly up to be defined
<gtucker> so I'm explaining what we have now, and we will certainly have to make it evolve to address the needs of the audience
<gtucker> for each regression found, a bisection can be run
<gtucker> at the moment, only boot regressions are being bisected, but test regressions can be enabled with a bit more work
<gtucker> test reports are sent to a fixed list of recipients, so that's what my question was about re panfrost-specific reports
<tomeu> yeah, I think automated bisection is key, as otherwise you cannot know who broke it and then you cannot narrow the list of recipients
<gtucker> bisection reports are sent to people related to the commit it finds
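The bisection step itself is a binary search over the revision range between the last good and first known bad revision. A hedged sketch, where `fails` stands in for the expensive "build + boot + rerun the regressed test" step and the predicate is a made-up example:

```c
#include <stdbool.h>

/* Sketch of a bisection: `good` is a revision index known to pass and
 * `bad` one known to fail; assuming a single breaking change between
 * them, binary-search for the first failing revision. */
static int bisect(int good, int bad, bool (*fails)(int))
{
    while (bad - good > 1) {
        int mid = good + (bad - good) / 2;
        if (fails(mid))
            bad = mid;
        else
            good = mid;
    }
    return bad; /* first failing revision */
}

/* Example predicate: pretend revision 42 introduced the breakage. */
static bool fails_after_41(int rev)
{
    return rev >= 42;
}
```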
<tomeu> so you get the problem you mentioned before of having too much stuff sent to too many people
<tomeu> as a maintainer, I don't see much value in those reports, only in the bisection report
<gtucker> tomeu: yes, but there are many steps we need to go through until we've reached that
<tomeu> ok, then I'm fine with a regression report being sent to a fixed list of interested people
<gtucker> cool
<tomeu> but it's the automated bisection what makes this really useful :)
<gtucker> what would be really helpful would be some constructive feedback as kernelci starts rolling things out
<gtucker> i.e. whether the way tests are run is good or not, and how regressions are found
<gtucker> then gradually we can build on top of that and have bisections
<tomeu> gtucker: I think we are the right community to give that kind of feedback :)
<gtucker> awesome :)
<gtucker> narmstrong, montjoie: do you guys have any x86 device on which to run some i915 igt tests in your lab?
<gtucker> I think doing a little bit of this would be good as a reference
<montjoie> gtucker: not yet, but if needed we could try to add one
<gtucker> montjoie: ok, I was just wondering. we have some Minnowboards which might be enough
<montjoie> we have one also
<gtucker> nice
<montjoie> but I need some lab rework before putting it
yann has joined #panfrost
raster has joined #panfrost
afaerber has joined #panfrost
tgall_foo has quit [Ping timeout: 248 seconds]
TheKit has quit [Ping timeout: 245 seconds]
TheKit has joined #panfrost
NeuroScr has quit [Remote host closed the connection]
tgall_foo has joined #panfrost
<shadeslayer[m]> tomeu: alyssa: any chance you had a look at my branch?
<tomeu> shadeslayer[m]: looking now
<shadeslayer[m]> Thanks!
<tomeu> regarding panfrost_job_set_requirements, the purpose of that function seems to be only so that the context can tell a job that it's now the good moment to figure out the requirements
<tomeu> but wonder if that knowledge isn't something that belongs to job, not to context
<tomeu> but what the context does know that panfrost_job cannot is that the client has submitted a draw
<tomeu> so maybe some of the draw logic should be added to panfrost_job, and that would include setting the requirements
<tomeu> moving the draw logic into panfrost_job sounds to me like something that should happen in the later stage of the refactoring, though
<tomeu> so basically, do what you did with panfrost_job_clear, but for draws as well
tgall_foo has quit [Quit: My iMac has gone to sleep. ZZZzzz…]
tgall_foo has joined #panfrost
<shadeslayer[m]> ack
<shadeslayer[m]> tomeu: we could move the set requirements into the job initialization
<tomeu> shadeslayer[m]: do we already know the requirements at that point?
<shadeslayer[m]> but I'm not sure if ctx->rasterizer->base.multisample could change
<shadeslayer[m]> tomeu: not sure. no
<tomeu> that's what we need to find out :)
<shadeslayer[m]> tomeu: not really sure how to do that, but pointers welcome :)
<tomeu> shadeslayer[m]: git grep "multisample = "
<shadeslayer[m]> tomeu: nothing, I think maybe MSAA might be better?
<shadeslayer[m]> meh, too much noise
<tomeu> to me, it shows that it's set in st_update_rasterizer
<tomeu> I'm not sure when that's called
<tomeu> maybe when the client calls glEnable( GL_MULTISAMPLE ) ?
<shadeslayer[m]> I'll take a look at how other drivers do it later tonight I guess
<alyssa> shadeslayer[m]: Linked lists look good
<alyssa> The sprunge stuff looks ok..?
<tomeu> shadeslayer[m]: good idea!
<alyssa> tomeu: I thought I pushed the more regression fixes
<shadeslayer[m]> <alyssa "The sprunge stuff looks ok..?"> I committed the sprunge stuff :)
<shadeslayer[m]> tomeu: vc4 sets the MSAA field to true when it calls vc4_get_job
<tomeu> so on job creation?
<alyssa> shadeslayer[m]: The series looks good!
<shadeslayer[m]> more like when looking up a job tied to an fbo?
<alyssa> shadeslayer[m]: Just, you know, clearning->clearing and panforst->panfrost ;)
<tomeu> shadeslayer[m]: well, guess there's a question of when that happens
<shadeslayer[m]> * Returns a vc4_job structure for tracking V3D rendering to a particular FBO.
<shadeslayer[m]> <alyssa "shadeslayer: Just, you know, cle"> I'm missing context for that one :D
<shadeslayer[m]> I've got to leave for a bit, I'll be back later tonight :)
<alyssa> shadeslayer[m]: Commit message typos :)
<alyssa> Harmless but still :)
herbmillerjr has quit [Quit: Konversation terminated!]
herbmillerjr has joined #panfrost
<tomeu> bnieuwen2uizen: I also think that there's no point in getting the CTS to pass just for the sake of it, but it's a reasonably easy way to catch most regressions
<tomeu> and sometimes getting specific tests to pass is a convenient way of testing during development
<HdkR> Don't underestimate the power of unit tests
<Lyude> ^^^
<bnieuwen2uizen> tomeu: yeah, just saying that until you want to go for conformance you may want to prioritize them
<alyssa> bnieuwen2uizen: Tomeu's silly deqp/ci keeps finding regressions in all the bad code I push, it's kind of a pain ;P
<bnieuwen2uizen> alyssa: so does your project include adding gnome-shell to the CI to make sure it keeps working? :P
<alyssa> Code needs to *keep* working? What? Why?
<HdkR> haha :D
yann has quit [Ping timeout: 245 seconds]
<anarsoul> alyssa: I'm lazy again to dig through panfrost code, so how do you handle constants for midgard?
<anarsoul> I assume you can add up to 2 vec4 constants to any instruction just like in utgard pp?
afaerber has quit [Quit: Leaving]
stikonas has joined #panfrost
raster has quit [Remote host closed the connection]
NeuroScr has joined #panfrost
<alyssa> anarsoul: Not quite. We can attach a single 16-bit constant to any instruction
<alyssa> And we can attach a vec4 to any "bundle" (group of 1-5 alu instructions)
<alyssa> So when we see a load_const, we add it to a hashtable, we attach in a separate pass, and finally let the scheduler deal with the end
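The scheme alyssa describes (one small inline constant per instruction, one shared vec4 of embedded constants per bundle) can be illustrated with toy structures. These are illustrative only, not the real midgard compiler types, and the first-fit policy is much simpler than the actual scheduler:

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy model: each ALU instruction can carry one 16-bit inline constant,
 * while a bundle of 1-5 instructions shares a single embedded vec4. */
struct mir_ins {
    bool has_inline_constant;
    uint16_t inline_constant;    /* per-instruction 16-bit immediate */
};

struct mir_bundle {
    struct mir_ins ins[5];
    unsigned ins_count;
    bool has_embedded_constants;
    float embedded_constants[4]; /* one vec4 shared by the whole bundle */
};

/* Try to fold a constant into instruction i inline, else fall back to
 * the bundle's vec4 slot. Returns false if neither has room. */
static bool attach_constant(struct mir_bundle *b, unsigned i, uint32_t v)
{
    if (v <= UINT16_MAX && !b->ins[i].has_inline_constant) {
        b->ins[i].has_inline_constant = true;
        b->ins[i].inline_constant = (uint16_t)v;
        return true;
    }
    if (!b->has_embedded_constants) {
        b->has_embedded_constants = true;
        b->embedded_constants[0] = (float)v;
        return true;
    }
    return false;
}
```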
<anarsoul> I see
<anarsoul> I guess I'll have to deal with it differently in lima
<anarsoul> registers are scarce resource, so it's better to duplicate load_const nodes for each successor
<HdkR> Everyone needs more registers
<anarsoul> HdkR: Utgard PP has only 6 vec4
<HdkR> Tiny :D
<anarsoul> but constants can be attached to any instruction (PP is VLIW, so single instruction can contain a lot of ops)
<anarsoul> so it's beneficial not to use registers for constants
<alyssa> anarsoul: Oh, I use instruction and op interchangeably
<alyssa> What you call an instruction I call a bundle
<HdkR> Good thing is that as more applications use ES3+ features and UBOs, more space frees up in Bifrost's uniform register space for lifting constants
<HdkR> Bad thing is that UBOs require load instructions D:
<HdkR> Maybe Valhall gains the Nvidia approach of being able to directly address the UBOs from the isa encoding
alyssa---- has joined #panfrost
<alyssa----> Work machine softbricked. Send help. Blame GNOME3 and/or systemd. :P
<HdkR> Dang it gnome3
somy has quit [Ping timeout: 252 seconds]
alyssa---- has quit [Quit: leaving]
pH5 has quit [Quit: bye]
<alyssa> shadeslayer[m]: Getting this backtrace https://people.collabora.com/~alyssa/memory-corruption.txt from valgrind on `glmark2-es2-wayland -bdesktop`
<alyssa> Looks like a use-after-free from the job handling, you might want to take a peek for your series.
<HdkR> oo, look at that GL error
<shadeslayer[m]> <alyssa "Looks like a use-after-free from"> hm, I'll take a look, but those line numbers look weird
<alyssa> shadeslayer[m]: Sorry, it's from my own branch. Try running valgrind yourself? :p
<shadeslayer[m]> alyssa: gonna be difficult, I don't have a device :P
<alyssa> shadeslayer[m]: Well, we should fix that! :p
<shadeslayer[m]> Hopefully I'll have access to a pinebook pro in 4 weeks or so
empty_string has quit [Ping timeout: 258 seconds]
stikonas has quit [Read error: Connection reset by peer]
stikonas has joined #panfrost
rhyskidd has quit [Quit: rhyskidd]
rhyskidd has joined #panfrost
MistahDarcy has joined #panfrost