bubble_buster has quit [Ping timeout: 245 seconds]
guan has joined #litex
rohitksingh has quit [Ping timeout: 250 seconds]
bubble_buster has joined #litex
mithro has joined #litex
mithro has quit [Excess Flood]
somlo_ has left #litex ["Leaving"]
somlo_ has joined #litex
somlo_ has left #litex ["Leaving"]
somlo has joined #litex
rohitksingh has joined #litex
<somlo>
shoragan: not *my* intention (I didn't set it up) :)
mithro has joined #litex
<shoragan>
somlo, ok. :) if you don't have permissions there, it's probably something for _florent_?
<somlo>
shoragan: history is that pre-built verilog cpu repos used to be part of litex, then got split out, with lots of pythonic "stuff" around them. I maintain the verilog and push occasional updates, but the pythonic "stuff" happens somewhat orthogonally to that. mithro might know
<somlo>
shoragan: that's one of the verilog updates I was talking about
<shoragan>
that seems inconsistent
<somlo>
the update script downloads the current upstream chipsalliance/rocket-chip chisel sources, applies some litex-specific patches, and has chisel build the verilog
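A rough Python sketch of that regeneration flow, to make the steps concrete; the actual update script, patch names, and build target used for the rocket packaging may differ:
```python
# Hedged sketch of the verilog-regeneration flow described above; the exact
# script, patch names, and build command are assumptions, not the real ones.
import pathlib
import subprocess

def run(*cmd, cwd=None):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

src = pathlib.Path("rocket-chip")
run("git", "clone", "https://github.com/chipsalliance/rocket-chip.git", str(src))

# Apply the LiteX-specific patches (patch directory/names are placeholders).
for patch in sorted(pathlib.Path("patches").glob("*.patch")):
    run("git", "apply", str(patch.resolve()), cwd=src)

# Have Chisel/SBT emit the verilog (the make target is illustrative only).
run("make", "verilog", cwd=src)

# Record which upstream commit produced this verilog, as in _upstream.rev.
rev = subprocess.check_output(["git", "rev-parse", "HEAD"], cwd=src, text=True).strip()
pathlib.Path("_upstream.rev").write_text(rev + "\n")
```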
<somlo>
the diff between old verilog and current verilog is incomprehensible, and that's just the way it is...
<mithro>
shoragan / somlo: That does look like there is some confusion there
<somlo>
the only useful information in the diff is _upstream.rev, telling you which chipsalliance/rocket-chip commit generated the "old" vs. "new" verilog
<shoragan>
somlo, that the diff is mostly unreadable is understood :)
<shoragan>
mithro, as far as i can see, the "LiteX Robot" commits never touch the verilog
nickoe has quit [Quit: No Ping reply in 180 seconds.]
<shoragan>
my work on the github actions was motivated by making sure that at least the "baseline" configs needed by linux-on-* work reliably
<shoragan>
mithro, nice :) although with the hdl-containers and github-actions-artifacts it seems even simpler, as you don't need to commit back into the repo
<mithro>
shoragan: Yes, I was also doing this before we had open source tooling
<somlo>
shoragan: I'm looking at a failed CI build, not sure what to make of it; either way, the reason I'd be a bit uncomfortable with automating the verilog generation for rocket is that upstream often breaks things, and I only really want to deal with it when I'm good and ready, not every time CI notices some breakage and starts yelling at me :)
<somlo>
your bad experience with current rocket and whatever CI uses for yosys/trellis/nextpnr notwithstanding :)
<somlo>
I'll test (and push an update) for rocket within a week, that's the plan right now :D
<shoragan>
somlo, ok. currently, though, it seems that the litex integration for rocket-chip in master only works with a different verilog version
<shoragan>
the diff in this case actually looks relevant ;)
<shoragan>
although you get a bitstream with that change, it doesn't boot into the bios, so we didn't make a PR
<shoragan>
without a reproducible, known good version in CI, this is getting somewhat frustrating ;)
<mithro>
My general theory is that if it isn't automatically tested, then it is most certainly broken :-)
<somlo>
shoragan: the high-level problem is that all of rocket, yosys, trellis, nextpnr, and litex itself are fast moving targets, and one could spend a full time job just keeping all their latest versions playing nice with each other
<shoragan>
mithro, yes!
<somlo>
hence the low-pass filter on e.g. the rocket chip verilog and the toolchain, since my main focus is on using them to build something that works in litex :)
<shoragan>
somlo, yes. but in that case i'd at least know that it's not a problem with my local setup
<somlo>
I do want to keep them relatively up to date, but not every week
<shoragan>
somlo, if i could exactly reproduce what you're using, i'd be happy, too
<somlo>
and now that they're gaining at least a small amount of popularity (apparently :) ) -- it's time to 1. update them ASAP (I'm working on that) and 2. figure out how to make it all work for everyone interested (that's not something I have a lot of experience with)
<shoragan>
with the HDL-Containers, the CI job could also pin a specific version of that, until it's time to update
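As an illustration of that kind of pinning (the image name, digest, and build command below are placeholders, not real hdl-containers coordinates):
```python
# Hedged sketch: reference the container by digest so every CI run uses the
# exact same toolchain build until the pin is deliberately bumped.
import os
import subprocess

IMAGE = ("ghcr.io/hdl/example-toolchain"  # placeholder image name
         "@sha256:0000000000000000000000000000000000000000000000000000000000000000")

subprocess.run(["docker", "pull", IMAGE], check=True)
subprocess.run(["docker", "run", "--rm",
                "-v", f"{os.getcwd()}:/src", "-w", "/src",
                IMAGE, "make", "bitstream"],  # build command is illustrative
               check=True)
```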
<somlo>
so maybe CI is a good idea
<somlo>
but first let's make sure we at least *start* with a working combination of everything :)
<shoragan>
:)
<shoragan>
actually, my goal is to get a working combination into meta-hdl, as that has an easy way to pin all the dependencies
<somlo>
shoragan: I gave you the toolchain commit IDs I'm using on Fedora. I fully expect to notice your breakage once I update to the latest git upstream -- then I will file bugs with upstream, then maybe they get fixed, then maybe we'll get everything to work, and all that can take a week or two
<shoragan>
but porting SBT into that would be a huge task, so my focus is getting a consistent set into the pythondata-* packages
<shoragan>
somlo, i tried with your versions, and then got stuck on a non-booting bios :/ and my debugging-fu at that level is almost non-existent
<shoragan>
ok, i'll try again when you've had time to update it :)
<somlo>
shoragan: I understand -- better to have me catch up than you wasting time on debugging with the old toolchain
<somlo>
but me "catching up" takes (me) some effort, so I've historically limited it to once every few months or so; you've just caught me right *before* the next one :)
<shoragan>
:)
<shoragan>
i'm very grateful for your work, thanks
<somlo>
the root issue is that all these are relatively disparate projects run by different groups who are not necessarily concerned with each other's priorities.
<somlo>
my main contribution is to act as the "glue layer" between them :)
<shoragan>
somlo, regressions caused by changes in disparate components are not a new thing for embedded linux developers ;)
<shoragan>
that's why i want to have it all in yocto
<mithro>
shoragan: What is your interest in the rocket version (as opposed to vexriscv)? 64bit RISC-V?
<tpb>
Title: LiteX BuildEnv based on Yocto - Google Docs (at docs.google.com)
<shoragan>
mithro, actually, i want to have both.
<shoragan>
if that works, i want to have a setup for kernelci to catch kernel regressions. but that's some way off
<shoragan>
my experiments were based on meta-hdl
<shoragan>
and that largely works (at least i run into the same issues as the github ci runs based on the hdl-containers ;))
<mithro>
shoragan: I would be interested in people looking at working on making gateware generation a first class citizen in yocto
<shoragan>
mithro, as in running sbt? or running litex?
<mithro>
shoragan: All of the above eventually
<shoragan>
sure. first i'd like to have a small collection of working machines (in the yocto sense), initially without sbt but from pregenerated verilog, and have that reliably booting into linux
<shoragan>
if that works, pulling out common functionality into bbclasses could be next
<shoragan>
i looked at SBT, but that seems to be a complete packaging system of its own. integrating that as well would be a major effort on its own
<somlo>
shoragan: maybe somewhat related to your interest: I was able to build a bitstream for ECP5 completely from scratch on a rv64gc fedora VM in qemu (it took 2 days for the whole thing to build)
<shoragan>
one could require it as a preinstalled host-tool, though
<shoragan>
somlo, nice :)
<somlo>
it's the same rv64gc whose root partition I'm trying to boot from on litex/rocket :)
<mithro>
shoragan: Agreed
<shoragan>
somlo, i'm a bit allergic to trying to manually customize a distro image for embedded systems. reproducibility is extremely hard without a build system.
<somlo>
shoragan: I just want a fully self-hosting computer. Right now I can mount and chroot to that filesystem, and run yosys and nextpnr from it (slowly)
<shoragan>
mithro, but even then building vexriscv with sbt failed, because it expects to be run from the full git repo, not from an installed python module :/
<somlo>
but booting the actual OS would make for a more definitive statement :)
<shoragan>
somlo, yocto has enough packages to be self-hosting (and that's tested by the autobuilders)
<shoragan>
mithro, thanks for the google doc link, a large part of what you discussed there is already working
<mithro>
shoragan: If I gave you edit access, could you update the Google Doc?
<shoragan>
(working, as in they need the pythondata fixes ;))
<mithro>
shoragan: message me a google account and I'll give you edit access
<shoragan>
regarding the mapping, i don't really have anything to add. nathan rossi did all the work before i even looked at it :)
<shoragan>
are there other socs that would be able to boot linux on an ECP5 besides vexriscv and rocket?
<mithro>
shoragan: I used to work on bitbake when OpenZaurus was a thing :-)
<tpb>
Title: Building example designs - SymbiFlow examples documentation (at symbiflow-examples.readthedocs.io)
<mithro>
shoragan: In theory power and mor1k architectures should also work
<mithro>
shoragan: for Power you use microwatt
<shoragan>
i only have the ECPIX-5, and would prefer to stay with the open toolchains
<shoragan>
as that simplifies having the full toolchain in OE a lot
<mithro>
shoragan: Only open toolchains above
<mithro>
shoragan: I assume you are in germany?
<shoragan>
yep
<shoragan>
<mithro> shoragan: I used to work on bitbake when OpenZaurus was a thing :-)
<shoragan>
^ i used it a lot at OpenMoko and OpenEXZ before that :)
<mithro>
shoragan: What amazes me is people are still working on buildroot....
<shoragan>
ah, i wasn't aware that the open xilinx tool chain was ready enough yet
<mithro>
shoragan: It's not as polished as the ECP5 + nextpnr flow
<shoragan>
mithro, it's a different niche. much simpler, easy to get started. but also less powerful
<mithro>
shoragan: I'd be happy to send you an arty if you thought it would help get things tested
<shoragan>
ok. then i think i'll stick with ECP5 for now, until i've got the full chain working (incl. kernel and userspace)
<shoragan>
mithro, the bottleneck is more time rather than money ;)
<mithro>
shoragan: Yeah - have the same problem, that is why I offer to send people hardware
<shoragan>
if we have a full example for ECP5 in meta-hdl, it should be easier for ppl who are less experienced with OE/Yocto to add support for other socs
<Wolf`>
_florent_: I am doing some of my own work
<shoragan>
so, depth first
<Wolf`>
Quarky93[m]: hai
<Wolf`>
_florent_: Xilinx are underclocking the HBM2 (forcibly - the user is unable to change it, as the encrypted HBM IP will not generate settings for its internal PLL to run the memory above 900 MHz).
<Wolf`>
The part is Samsung Aquabolt HBM2, and it's actually spec'd for 1000-1200 MHz (depending on binning)
<geertu>
mithro: Last time I tried microwatt, the yosys from the fpga-toolchain builds didn't have the needed VHDL support
<mithro>
geertu: hdl-containers and the conda environments should do these days -- no idea about YosysHQ/fpga-toolchain -- they are off in their own little world
<Wolf`>
So I'm now working on changing the clock, but... this project has gotten kind of out of control. To control the clock at runtime, I have to poke registers in the APB interface of the HBM IP (or the HBM primitives!) The register details are hidden by Xilinx... so I've been working on reverse engineering all of them.
<Wolf`>
I have enough PLL settings to change the clock, but it's not that easy - you gotta re-train, re-calibrate the MCs, and if you don't wanna lose memory contents, it has to be put in self-refresh mode.
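A purely illustrative Python sketch of that sequence; every register name, offset, and bit below is hypothetical, since the real APB register map is exactly what is being reverse engineered here:
```python
import time

# Placeholder register offsets -- purely illustrative, not the real Xilinx map.
SELF_REFRESH_CTRL   = 0x0000
SELF_REFRESH_STATUS = 0x0004
PLL_RECONFIG_STROBE = 0x0100
PLL_LOCKED          = 0x0104
MC_RETRAIN_REQ      = 0x0200
MC_CAL_DONE         = 0x0204

def wait_for(poll, timeout=1.0, interval=0.001):
    """Poll `poll()` until it returns truthy or the timeout expires."""
    deadline = time.monotonic() + timeout
    while not poll():
        if time.monotonic() > deadline:
            raise TimeoutError("register never reached the expected state")
        time.sleep(interval)

def reclock_hbm(apb, pll_settings):
    """`apb` is anything exposing read(addr) and write(addr, value)."""
    # 1. Preserve contents: put the stacks into self-refresh before touching the clock.
    apb.write(SELF_REFRESH_CTRL, 1)
    wait_for(lambda: apb.read(SELF_REFRESH_STATUS) & 1)

    # 2. Reprogram the HBM IP's internal PLL with the new settings.
    for reg, value in pll_settings.items():
        apb.write(reg, value)
    apb.write(PLL_RECONFIG_STROBE, 1)
    wait_for(lambda: apb.read(PLL_LOCKED) & 1)

    # 3. Re-train / re-calibrate the memory controllers at the new frequency.
    apb.write(MC_RETRAIN_REQ, 1)
    wait_for(lambda: apb.read(MC_CAL_DONE) & 1)

    # 4. Leave self-refresh and resume normal traffic.
    apb.write(SELF_REFRESH_CTRL, 0)
```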
<shoragan>
mithro, with the extended pythondata-cpu-vexriscv_smp, the bitstreams now build and produce artifacts:
<shoragan>
maybe i'll find some spare energy to add a job for the linux/buildroot build as well
<_florent_>
Wolf`: Interesting, I can understand it's not easy to remove the limitations. I just used the Xilinx controller as a black box with LiteX with the FK33, haven't looked closely at the spec, but I can indeed imagine this is quite complex.
<_florent_>
shoragan: Sorry for the Linux-on-LiteX-VexRiscv CI PR, I just try to keep things simple/minimal on the CI front to try to focus on FPGA dev. I also like the idea of automating bitstream generation but just have a preference for experimenting with it externally for now.
kgugala has joined #litex
<shoragan>
_florent_, sure, there are different perspectives. coming from the embedded build system side, i tend to push back against project-specific build automation. :)
<shoragan>
by keeping the CI directly at the CLI level, it's also easy to correlate with what a user would do manually. i find that useful as a way to have "executable documentation"
<shoragan>
I've become a bit opinionated about the need for reproducible CI over the years ;)
<mithro>
shoragan: Have you seen tuttest? https://github.com/antmicro/tuttest ? We use it to test our README instructions still work on symbiflow-examples
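For readers unfamiliar with the idea, a minimal Python sketch of "executable documentation" in the tuttest spirit; this is not tuttest's actual interface, just the concept of extracting shell snippets from a README and failing CI when they stop working:
```python
# Concept sketch only: pull fenced shell snippets out of a README and run them,
# so CI fails as soon as the written instructions rot.
import re
import subprocess
import sys

FENCE = re.compile(r"```(?:bash|sh|shell)\n(.*?)```", re.DOTALL)

def run_readme_snippets(path="README.md"):
    text = open(path).read()
    for i, snippet in enumerate(FENCE.findall(text), start=1):
        print(f"--- snippet {i} ---")
        result = subprocess.run(["sh", "-e", "-c", snippet])
        if result.returncode != 0:
            sys.exit(f"snippet {i} failed with exit code {result.returncode}")

if __name__ == "__main__":
    run_readme_snippets()
```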
<shoragan>
mithro, no, hadn't seen it yet. interesting.
<shoragan>
maybe release it on pypi?
<mithro>
shoragan: Log a bug about that?
<shoragan>
mithro, done
<Wolf`>
_florent_: it's worse because the APB registers are undocumented by Xilinx.
<shoragan>
it's a bit like literate programming. :) now the use-case seems obvious and i'm wondering why something like that is not widespread...
_franck_ has joined #litex
indy has quit [Ping timeout: 240 seconds]
indy has joined #litex
pftbest has quit [Remote host closed the connection]
pftbest has joined #litex
pftbest has quit [Remote host closed the connection]
pftbest_ has joined #litex
dcallagh has quit [Ping timeout: 258 seconds]
Quarky93[m] has quit [Ping timeout: 258 seconds]
apolkosnik[m] has quit [Ping timeout: 258 seconds]