tpb has quit [Remote host closed the connection]
tpb has joined #symbiflow
<mithro> @daniellimws BTW Where in the world are you located?
Degi_ has joined #symbiflow
gsmecher has quit [Ping timeout: 256 seconds]
Degi has quit [Ping timeout: 240 seconds]
Degi_ is now known as Degi
<_whitenotifier-3> [sphinxcontrib-markdown-symlinks] FFY00 opened issue #8: Add release - https://git.io/JfvUJ
<mithro> @FFY00 - I'm open to the idea of github actions as they seem like a potential future winner
<mithro> @FFY00 - I'm also somewhat surprised that prjtrellis doesn't have any existing CI on it...
<FFY00> great
<FFY00> and yes, there should definitely be a CI :)
<FFY00> I am not really familiar with the project so I can't help with that
<FFY00> but if someone wants to spend the time explaining it to me, I am open to doing it
<FFY00> mithro, do you want to create a repo to hold the github action? or should I do it and then transfer it to SymbiFlow?
<FFY00> doesn't really matter much
rvalles_ has joined #symbiflow
rvalles has quit [Ping timeout: 256 seconds]
<daniellimws> mithro: Right, missed out some stuff in the end, but all should be good now. I'm in Singapore. You?
futarisIRCcloud has joined #symbiflow
<mithro> daniellimws: Cool! I know a few people in Singapore.
<_whitenotifier-3> [prjtrellis] FFY00 opened issue #135: `PREFIX` is not being respected for `LIBDIR` - https://git.io/Jfvk5
<mithro> daniellimws: Pondering if we should put a hash between the hashbang and the copyright statement in the Python files..
<daniellimws> mithro: between them?
<mithro> @daniellimws: Currently there is an empty newline
<daniellimws> mithro: Yup, should we add? I can't decide; I feel it makes more sense to separate them since they are different things
<mithro> daniellimws: Yeah, I was thinking that too - just trying to figure out what people generally do
<mithro> daniellimws: When it's too hard to make a decision yourself, copy someone else :-P
<mithro> @daniellimws I just discovered that Python might have __copyright__ and __license__ triple quoted strings...
<tpb> Title: apache 2.0 - Placement of copyright and license variables within Python source? - Open Source Stack Exchange (at opensource.stackexchange.com)
<daniellimws> mithro: Oh if we use that, we don't need to worry about that empty line anymore :P Shall we? Not sure if that's the standard way because I've never seen projects using it
<mithro> daniellimws: Yeah, I haven't seen that before either
<mithro> daniellimws: And I've been coding in Python since 1.5.1 days...
<mithro> daniellimws: I guess we should probably add the file encoding strings after the hashbang in python files too -> https://www.python.org/dev/peps/pep-0263/
<tpb> Title: PEP 263 -- Defining Python Source Code Encodings | Python.org (at www.python.org)
<mithro> Seems like the idea of __copyright__ and __license__ comes from http://web.archive.org/web/20111010053227/http://jaynes.colorado.edu/PythonGuidelines.html#module_formatting
<tpb> Title: Python Coding Guidelines (at web.archive.org)
<daniellimws> Almost a decade
<tpb> Title: __author__ / __copyright__ / __license__ not widely accepted as modern best practice · Issue #162 · pyscaffold/pyscaffold · GitHub (at github.com)
citypw has joined #symbiflow
<daniellimws> mithro: In the PEP0263 document, "Without encoding comment, Python's parser will assume ASCII text", are we setting the encoding to ascii or utf-8?
<mithro> daniellimws: utf-8
<mithro> daniellimws: Looking at https://opensource.google/docs/releasing/preparing/#package -- the suggested tools are addlicense and autogen -- I wonder what they do regarding that newline...
<tpb> Title: Preparing For Release – opensource.google (at opensource.google)
<daniellimws> I can try
<mithro> daniellimws: addlicense doesn't seem to have all that great testdata
<tpb> Title: addlicense/file1.sh at master · google/addlicense · GitHub (at github.com)
<tpb> Title: autogen/bsd3-acme-py-test-prefix.out at master · mbrukman/autogen · GitHub (at github.com)
<tpb> Title: autogen/bsd3-acme-py-test-suffix.out at master · mbrukman/autogen · GitHub (at github.com)
<tpb> Title: Publishing package distribution releases using GitHub Actions CI/CD workflows Python Packaging User Guide (at packaging.python.org)
<daniellimws> mithro: Looks like we should have that hash then
<FFY00> yes
<FFY00> we can trigger that on tag pushes
<FFY00> unless you want to trigger every time
<mithro> For most things, I would trigger every time
<mithro> FFY00: but following the Python Packaging Authority's suggested tooling + workflow is a good way to make the issue someone else's problem :-P
<FFY00> your call
<FFY00> I would push pre-releases every time
<FFY00> and push releases from time to time
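The "pre-releases every time, releases on tags" split could be sketched in a workflow file roughly like this (hypothetical: the job names and build step are illustrative, not an existing SymbiFlow workflow):

```yaml
# Hypothetical sketch: run on every push, publish a full release only on version tags.
name: CI

on:
  push:
    branches: [master]
    tags: ['v*']

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build
        run: make  # placeholder build step
      - name: Publish release
        if: startsWith(github.ref, 'refs/tags/v')
        run: echo "publish a full release here; otherwise this was a pre-release build"
```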
<mithro> FFY00: Using https://github.com/pypa/setuptools_scm for our python modules would probably be a good idea too?
<tpb> Title: GitHub - pypa/setuptools_scm: the blessed package to manage your versions by scm tags (at github.com)
<FFY00> that makes things more annoying for packagers
<FFY00> but can be used
<FFY00> the issue is that the github tarballs don't have enough metadata for setuptools_scm to work
<FFY00> but that can be easily hacked around
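A sketch of what that might look like in a `setup.py` (assumptions: project name is a placeholder, and `fallback_version` is the setuptools_scm knob for the no-git-metadata case, e.g. GitHub tarballs):

```python
# setup.py -- hypothetical sketch, assuming setuptools_scm is installed.
from setuptools import setup

setup(
    name="example-project",  # placeholder name
    use_scm_version={
        # GitHub tarballs carry no .git metadata, so give setuptools_scm
        # a fallback instead of letting the build fail.
        "fallback_version": "0.0.0+unknown",
    },
    setup_requires=["setuptools_scm"],
)
```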
<mithro> FFY00: And setuptools_scm is likely to be pretty widely used
<FFY00> eh, not really
<FFY00> I mean
<FFY00> it's used by a lot of projects
<FFY00> but I would say at least 95% of the projects just use setuptools
<FFY00> totally unrelated
<FFY00> is there any particular reason you use merge commits
<mithro> FFY00: As opposed to?
<tpb> Title: Single-sourcing the package version Python Packaging User Guide (at packaging.python.org)
<FFY00> rebase
<tpb> Title: Commits · libratbag/libratbag · GitHub (at github.com)
<FFY00> you get metadata about the author and committer
<FFY00> and the history is much cleaner
<FFY00> but this is just a nitpick
<mithro> FFY00: There are lots of arguments about which way is better
<FFY00> I would be happy to be convinced :D
<mithro> Looks like in the future it maybe https://docs.python.org/3/library/importlib.metadata.html
<tpb> Title: Using importlib.metadata Python 3.8.2 documentation (at docs.python.org)
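In practice that means reading the version from installed package metadata instead of a hard-coded `__version__`. A small sketch (requires Python 3.8+; the fallback-default behaviour is my addition, not part of the stdlib API):

```python
from importlib.metadata import version, PackageNotFoundError


def get_version(dist_name, default="unknown"):
    # Read the installed distribution's version from its metadata;
    # fall back to a default when the package is not installed.
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return default
```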
<FFY00> I don't think they make sense for most projects
<FFY00> yes, there are people rewriting the python packaging system
<FFY00> it is being reworked
<FFY00> let's hope for the better
<FFY00> but I think pip is pretty good already
<mithro> FFY00: The big issue is that rebasing causes commits to change hashes making it very hard to talk about things which end up living in branches for a long time before being merged
<FFY00> at least it's light years away from package managers from other languages :P
<mithro> FFY00: Good or bad? I could see both arguments ;-)
<FFY00> good
<FFY00> mithro sure, use merge commits for that
<mithro> I think xobs would have disagreement about that :-)
<FFY00> but using merge commits by default does not make sense imo
<FFY00> which package manager would he be defending?
<mithro> FFY00: He is a fan of rust's
<FFY00> right
<mithro> daniellimws: Yeah, I think my search has determined we want to include the hash for Python files
<mithro> Anyway, I think I'm going to have some dinner
<mithro> Might be back after, might not
<FFY00> my main issue with cargo is that cargo install will rebuild everything
<FFY00> which is awful
<FFY00> and rust right now doesn't have a stable ABI, so no shared libraries, which means it's a vendoring hell
<FFY00> :)
Bertl_oO is now known as Bertl_zZ
<xobs> FFY00: yeah, rust. pip gives me no end of grief. cargo always seems to work.
<FFY00> because cargo is very opinionated
<FFY00> and it is still solving an easy problem
<FFY00> we'll see when rust starts using shared libraries
<FFY00> :P
<xobs> how do you mean?
<FFY00> fetching and static linking every dependency is far easier than managing installed dependencies
<FFY00> like pip does
<FFY00> when rust gets a stable abi and people start using shared libraries you will see
<xobs> I'm not sure I follow.
<FFY00> shared libraries will be installed to the system
<FFY00> cargo will need to manage them
<FFY00> install, manage versions, resolve dependencies, etc.
<FFY00> right now it is only downloading the sources and compiling a single binary
<FFY00> which is awful, but there is no alternative due to the lack of an ABI
<FFY00> when rust gets a stable ABI, everything will start to move towards dynamic libraries
<xobs> I enjoy the static linking. It's made deploying `wishbone-tool` very easy.
<FFY00> but it is also very awful
<xobs> Why?
<FFY00> duplicated code
<FFY00> this means you waste *a lot* more space
<FFY00> is bad for security
<xobs> The code isn't duplicated, the binary is. And the binary data is small.
<FFY00> when some library gets a cve, you have duplicated code in 300 different places
<FFY00> xobs, duplicated binary code
<FFY00> take a practical example
<FFY00> electron apps
<FFY00> they static build and are huge
<FFY00> xobs, if static building is so great how come it is not standard practice when distributing C libraries?
<FFY00> all distributions have guidelines against it
<xobs> FFY00: most C libraries are distributed as source, which usually gives you the option to build a static library.
<FFY00> no
<FFY00> they are distributed as shared libraries by the distributions
<FFY00> you don't compile your source code together with the libraries
<FFY00> they ship a shared library + headers
<FFY00> you use the headers for the definitions and link your object against the shared library
<FFY00> either way, the time will come when rust gets a stable abi
<xobs> As far as I can tell, distributing static binaries is becoming the norm, what with snap packages and flatpak. Or whatever it's called. I'm not really a Linux desktop user.
<xobs> All I know is "pip install" has about a 50% chance of actually working.
<FFY00> snap and flatpak are not the norm
<FFY00> they are alternatives
<FFY00> the norm is still the distribution package manager
<xobs> Usually they assume you have various packages already installed, or that you're running Linux, or that you actually have pip installed.
<xobs> Whereas "cargo install" is usually enough to get stuff done, or to let someone build a package themselves so they can contribute.
<FFY00> but because some distributions are so awful with updates, snap and flatpak were created
<xobs> "pip install" is usually met with statements like "oh, you're using it wrong".
<FFY00> to be used as an alternative in those cases
<FFY00> xobs, cargo install is enough for now, because rust does not support doing things correctly yet
<FFY00> this will change
<xobs> FFY00: I think we have different definitions of "correctly". The "correct" way has been the source of endless headaches for me.
<FFY00> no
<FFY00> that's easy
<FFY00> not correctly
<FFY00> it's hard to do things correctly
<FFY00> especially when upstreams provide bad tooling
<xobs> And pip is bad tooling. Apparently it lets you mix `--user` and not `--user`, which results in packages installed to multiple places, which confuses python, which breaks everything, which requires you to reinstall.
<xobs> But `pip install` is wrong, you need to use virtualenv, or venv, or easyinstall, or setuptools, or whatever it's called now. Because otherwise the whole system gets broken.
<xobs> Assuming you even have pip. The "embedded" python from python.org doesn't include it. And neither does the debian install.
<FFY00> nono
<xobs> So you need to compile python yourself.
<FFY00> debian gets broken
<xobs> Or get it from conda, which modifies your bashrc by default.
<xobs> And conda is a hundred megabytes of extra stuff just for a python interpreter.
<FFY00> conda is awful
<xobs> Which has its own method of managing binaries.
<FFY00> pip is fine, unless you need outdated dependencies
<FFY00> if you need that then use venv
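Part of the `--user`-vs-system confusion above comes from not knowing which environment the interpreter is actually running in. A small check (the helper name is mine, but the `sys.prefix`/`sys.base_prefix` comparison is the standard venv detection on Python 3.3+):

```python
import sys


def in_virtualenv():
    # Inside a venv, sys.prefix points at the environment while
    # sys.base_prefix still points at the interpreter it was created from.
    # Outside any venv the two are identical.
    return sys.prefix != sys.base_prefix
```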
<xobs> And then you have to worry about the python abi changing, because that's not stable.
<FFY00> it is stable
<FFY00> only changes on major versions
<FFY00> as expected
<xobs> You mean minor versions?
<xobs> E.g. you can't use a 3.6 python standard library with a 3.5 interpreter. Or vice-versa.
<xobs> Also, the actual stuff that's in the python standard library isn't standard.
<xobs> For example, the embedded python doesn't include the url parsing library.
<FFY00> yes, sorry, minor version
<xobs> And as such, ensurepip doesn't work.
<FFY00> embedded python does not implement the full standard
<FFY00> and I don't think anyone is claiming that
<xobs> And I don't see the value with e.g. including eight different boost_1.67 dll files with a program I ship. Very few things will include that particular version of boost. So I might as well statically link it and use lto to remove stuff that isn't used.
<FFY00> no
<FFY00> have that installed on the system
<FFY00> and don't ship dlls
<FFY00> that is how things work in linux
<FFY00> of course on windows there is no such thing
<FFY00> you need to ship the files
<FFY00> and then you need a 300gb disk just to have all your programs
<FFY00> while on linux 60gb is sufficient
<FFY00> for an equivalent amount of programs
<FFY00> also, when there is a cve in boost you need to push an update of your apps
<FFY00> or people are vulnerable
<FFY00> while on linux you update the system boost
<FFY00> instead of updating 200 packages
<FFY00> same thing happens in rust
<FFY00> since everything is static linked
<xobs> The nextpnr binary is 110 MB. If I were to remove the boost files, it would shrink by 300k. And I highly doubt a CVE there would affect the actual program.
<xobs> dynamically linking boost does not help, and just adds headache.
<FFY00> right, enjoy windows :)
<xobs> I do!
<xobs> And you enjoy Linux.
<xobs> I think it's interesting in that the two approaches are rooted in different beliefs, and that's very evident in what we're both arguing for.
<xobs> The approach you're advocating is very useful and easy to deal with assuming you have a competent package manager.
<FFY00> but I agree managing dependencies on windows is a PITA
<xobs> For example, nix people really advocate that approach because their package manager is amazing.
<FFY00> but on linux there is no excuse
<xobs> Whereas for me, static linking simplifies things.
<xobs> Mac is somewhere in the middle. Bundled frameworks are a thing, but it's just okay.
<xobs> Static linking is the lowest common denominator, and cargo lets you do that well.
<xobs> So if you're on a system that has gone all-in on shared modules, then yes, pip and python are great.
<xobs> If not, then all the arguments you make fall down, and just result in confusion.
az0re has joined #symbiflow
<xobs> It's probably due to a cultural failure in how software is installed on Windows. But that's the environment I have to deal with.
<xobs> Being able to host an FPGA workshop and tell everyone, regardless of their platform, to just run this one executable file and they'll be able to interact with a Fomu has been a huge boon. And I don't know that I'd be able to pull that off if wishbone-tool were written in python.
_whitelogger has joined #symbiflow
<sf-slack> <timo.callahan> @kgugala, I have summarized some comments about the tflite demo. How do you prefer I send them? (i) add each item as a new issue on the git repository, (ii) add them as comments to the issue that I already opened (#11), (iii) Google doc, or, (iv) e-mail them to you. Or something else?
OmniMancer1 has joined #symbiflow
OmniMancer has quit [Ping timeout: 256 seconds]
<OmniMancer1> I am late to the discussion, but note that most of boost is header-only libraries, so dynamic/static linking has no effect on updating those; also, in C++ anything making use of templates will have complications in whether you can just update the shared library
<OmniMancer1> And Rust's encouragement of generic programming makes shared libraries of little value, since much code ends up in the dependent binary rather than in the library, and, as with C++, this complicates the "just update the library to get security fixes" approach
<sf-slack> <kgugala> @timo.callahan you can add the comments to existing github issue, or open a new one. If it's something you already solved you can open a PR with the fix
kraiskil has joined #symbiflow
anuejn_ is now known as anuejn
<_whitenotifier-3> [sphinxcontrib-verilog-diagrams] oharboe opened issue #10: Problems installing the plugin - https://git.io/Jfvlp
az0re has quit [Remote host closed the connection]
acomodi has joined #symbiflow
Bertl_zZ is now known as Bertl
rvalles has joined #symbiflow
rvalles_ has quit [Ping timeout: 265 seconds]
<daniellimws> mithro: Here the example says "D-Flip flop with combinational logic". Doesn't a d flip flop only have 2 inputs (clock, d) and 1 output (q)? Should I rename it to just "flip flop" without the D in front of it?
kraiskil has quit [Ping timeout: 258 seconds]
kraiskil has joined #symbiflow
bluel has joined #symbiflow
bluel has quit [Remote host closed the connection]
Ultrasauce has quit [Quit: No Ping reply in 180 seconds.]
Ultrasauce has joined #symbiflow
kuldeep has quit [Read error: Connection reset by peer]
kuldeep has joined #symbiflow
kuldeep has quit [Remote host closed the connection]
kuldeep has joined #symbiflow
Bertl is now known as Bertl_oO
FFY00 has quit [Remote host closed the connection]
FFY00 has joined #symbiflow
FFY00 has quit [Remote host closed the connection]
FFY00 has joined #symbiflow
OmniMancer1 has quit [Quit: Leaving.]
gsmecher has joined #symbiflow
<FFY00> mithro, github actions doesn't work for us
<mithro> FFY00: Oh? That is sad :-(
<FFY00> it runs actions in containers
<ZirconiumX> I don't know if anybody here particularly cares, but with synth_intel_alm merged into Yosys master you can now use it to synthesise bits of gateware
<FFY00> and I need to pass -v /sys/fs/cgroup/systemd/docker:/sys/fs/cgroup/systemd/docker to the docker arguments
<FFY00> so that we can spawn a systemd container for a clean build
<FFY00> well, it might work when doing it directly in the github action
<FFY00> but it doesn't let us define reusable actions
<FFY00> for some reason all reusable actions run in their own docker container
<FFY00> building arch packages would be as simple as this: https://github.com/FFY00/build-arch-package/blob/master/.github/workflows/test.yml
<tpb> Title: build-arch-package/test.yml at master · FFY00/build-arch-package · GitHub (at github.com)
<FFY00> :/
<sf-slack> <timo.callahan> @kgugala I'll work on a PR
OmniMancer has joined #symbiflow
<sf-slack> <timo.callahan> @kgugala although here's one thing that I am not sure about and would like some advice: After running "west init ...", the command prints out "=== Initialized. Now run "west update" inside /home/tcal/antmicro/litex-vexriscv-tensorflow-lite-demo.". Is the user supposed to ignore that and instead run what the demo says, "git submodule update --init --recursive" ? Or is the user supposed to run both update
<sf-slack> commands?
citypw has quit [Ping timeout: 240 seconds]
OmniMancer has quit [Read error: Connection reset by peer]
kraiskil has quit [Ping timeout: 250 seconds]
kraiskil has joined #symbiflow
az0re has joined #symbiflow
<sf-slack> <kgugala> @timo.callahan you can ignore this step for the LiteX/Vexriscv TF lite demo
<sf-slack> <kgugala> @timo.callahan what `west update` does is to fetch third_party HALs
<sf-slack> <kgugala> we don't need them for LiteX/Vexriscv port
<sf-slack> <kgugala> and this step takes a lot of time, so we simply skip it :)
<sf-slack> <timo.callahan> @kgugala thanks!
<mithro> @ZirconiumX It is my understanding that VtR has a flow which takes vqm?
<mithro> ZirconiumX: It would be interesting to see if you can do Yosys->VPR here
<ZirconiumX> mithro: Well, I can try
<ZirconiumX> Last time you mentioned it, you said it was for Cyclone IV though
<mithro> @ZirconiumX I say a lot of things and don't remember all of them :-P
<ZirconiumX> This is running on Cyclone V
<ZirconiumX> Which is very different to CIV
<mithro> ZirconiumX: I'm sure the VPR devs would be interested in an architecture model for CV even if it doesn't really match the real hardware
<ZirconiumX> mithro: I'm fairly confident I can match the hardware; but I've no idea where to begin
<ZirconiumX> Plus there's only one of me ^.^
<tpb> Title: Architecture Modeling Verilog-to-Routing 8.1.0-dev documentation (at docs.verilogtorouting.org)
<mithro> ZirconiumX: You might find that VPR has students interested in helping
<ZirconiumX> mithro: Sure, but I can't pay them
<ZirconiumX> mithro: I was curious how Odin II did (since the last time I read a paper on it, Yosys destroyed it)
<ZirconiumX> Crashing when passed no arguments is always a good sign
<mithro> ZirconiumX: ODIN-II is moving towards being a Yosys plugin
<ZirconiumX> Titan is Stratix IV, so yeah, it would need a brand-new architecture definition
<mithro> ODIN-II is about making good trade-offs between putting things into hard logic versus soft logic
<mithro> Parsing code was always just a necessary evil
<ZirconiumX> Mmm. Well, anyway
<ZirconiumX> Yosys produces code of rather variable quality
<ZirconiumX> And it's missing some important baseline features
<mithro> ZirconiumX: We are working on trying to improve that at https://docs.google.com/document/d/16L50pyS3RjYStvRKRoWyVac7NoZKyu45fpPQXJqeKYo/edit
<tpb> Title: SymbiFlow FPGA Tool Performance (Xilinx Performance) - Google Docs (at docs.google.com)
<ZirconiumX> mithro: Have you seen https://github.com/ZirconiumX/yosys-testrunner ?
<tpb> Title: GitHub - ZirconiumX/yosys-testrunner: Program for robustly comparing two Yosys versions (at github.com)
<mithro> ZirconiumX: nope!
<ZirconiumX> You don't need the exact code, but what this does is use a statistical method to determine which of two versions is superior
<ZirconiumX> (specifically the sequential probability ratio test)
<ZirconiumX> And when using something even as simple as "X produces a greater Fmax than Y" as a hypothesis, it turns out that there's a very high signal-to-noise ratio
<ZirconiumX> So you can fairly easily go down to 0.001% false-positive error
<ZirconiumX> (the point of using SPRT is that you can bound the false-positive rate)
<ZirconiumX> (and false-negative rate)
<ZirconiumX> Additionally the SPRT implemented here is multinomial
<ZirconiumX> So it can find the "better" one even when you give it a bunch of criteria to work with
<ZirconiumX> "which one is faster?"
<ZirconiumX> "which one produces a better area?"
<ZirconiumX> *faster to route
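A minimal sketch of the binomial case of Wald's SPRT described above (my own sketch of the general technique, not the actual yosys-testrunner code; `p0`/`p1` and the error bounds are illustrative parameters):

```python
import math


def sprt(samples, p0=0.5, p1=0.55, alpha=0.001, beta=0.001):
    """Wald's Sequential Probability Ratio Test on a Bernoulli stream.

    H0: P(win) = p0, H1: P(win) = p1; alpha/beta bound the
    false-positive/false-negative rates, as noted above.
    Returns (decision, samples_consumed).
    """
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    llr = 0.0                              # running log-likelihood ratio
    n = 0
    for win in samples:
        n += 1
        if win:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "inconclusive", n
```

With a clear signal (e.g. one version always wins) the test terminates after a modest number of samples, which is why the error bounds can be pushed so low.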
tmichalak has quit [Ping timeout: 256 seconds]
kgugala has quit [Ping timeout: 250 seconds]
tmichalak has joined #symbiflow
kgugala has joined #symbiflow
<mithro> ZirconiumX: Sure, working on that loop bit to improve the statistics would be interesting -- I'm no expert around that
<ZirconiumX> mithro: naturally there's a CPU time/error tradeoff
<mithro> CPU time isn't a huge concern for me -- developer time is much more important
<ZirconiumX> The script I wrote is designed to be relatively simple to run and reproducible
<_whitenotifier-3> [sphinxcontrib-verilog-diagrams] mithro opened issue #11: Add Travis CI - https://git.io/JfvDV
<ZirconiumX> mithro: https://gist.github.com/ZirconiumX/dcf5f76675658f2d10937f176adc4a6e <-- this is Yosys versus Quartus on a *very* combinational heavy benchmark
<tpb> Title: qvy_chess.txt · GitHub (at gist.github.com)
<mithro> @ZirconiumX those numbers look very close unless I'm mistaken?
<_whitenotifier-3> [sphinxcontrib-verilog-diagrams] mithro opened issue #12: Add better "how to use" documentation - https://git.io/JfvDS
<ZirconiumX> mithro: That's what surprised me most; see how Yosys produces *way* more LUTs, but they pack so well that the end result isn't so bad
<tpb> Title: Revisions · qvy_chess.txt · GitHub (at gist.github.com)
<ZirconiumX> This was the result for attosoc
<ZirconiumX> Where Yosys does a fair bit worse
<ZirconiumX> But since I kinda messed up the numbers, I deleted them (I wasn't using post-PnR numbers but post-synth)
tmichalak has quit [Ping timeout: 264 seconds]
kgugala has quit [Ping timeout: 240 seconds]
tmichalak has joined #symbiflow
kgugala has joined #symbiflow
nonlinear has quit [Read error: Connection reset by peer]
nonlinear1 has joined #symbiflow
az0re has quit [Ping timeout: 240 seconds]
kraiskil has quit [Ping timeout: 256 seconds]
az0re has joined #symbiflow
felix_ has left #symbiflow ["WeeChat 2.3"]
OmniMancer has joined #symbiflow
space_zealot has joined #symbiflow
<sf-slack> <timo.callahan> @kgugala, I think I need to be given permissions on the Antmicro repo to push my branch (I'm tcal-x).
epony has quit [Read error: Connection reset by peer]
epony has joined #symbiflow
epony has quit [Remote host closed the connection]
<FFY00> mithro, I just remembered
<FFY00> msys uses pacman as their package manager
gsmecher has quit [Ping timeout: 256 seconds]
<FFY00> so with just a little more effort we can also provide msys packages
epony has joined #symbiflow
<FFY00> and I was thinking, we could use libalpm (pacman backend) to write a custom package manager that works in the user directory and we can use the same arch packages
<FFY00> shouldn't be too hard, as it's just a frontend
<mithro> FFY00: I don't want to be in the position of maintaining a totally separate ecosystem, want to try and reuse existing ecosystems -- hence why conda is attractive (as it is already heavily used as part of the scientific computing and ML ecosystems)
<tpb> Title: pyalpm/pycman at master · archlinux/pyalpm · GitHub (at github.com)
<FFY00> we would just be maintaining a front end for libalpm with custom handling of paths
<FFY00> as you can see it's not that much work
<FFY00> and I don't know where to draw the line of ecosystem
<FFY00> but I do get your point