<tpb>
Title: apache 2.0 - Placement of copyright and license variables within Python source? - Open Source Stack Exchange (at opensource.stackexchange.com)
<daniellimws>
mithro: Oh if we use that, we don't need to worry about that empty line anymore :P Shall we? Not sure if that's the standard way because I've never seen projects using it
<mithro>
daniellimws: Yeah, I haven't seen that before either
<mithro>
daniellimws: And I've been coding in Python since 1.5.1 days...
<tpb>
Title: __author__ / __copyright__ / __license__ not widely accepted as modern best practice · Issue #162 · pyscaffold/pyscaffold · GitHub (at github.com)
citypw has joined #symbiflow
<daniellimws>
mithro: In the PEP 263 document, "Without encoding comment, Python's parser will assume ASCII text" — are we setting the encoding to ASCII or UTF-8?
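A minimal sketch of what PEP 263 specifies: the declaration is a comment on the first or second line of the file. In Python 3 the default source encoding is already UTF-8, so the comment is only needed for Python 2 (which assumed ASCII) or for non-UTF-8 files:

```python
# -*- coding: utf-8 -*-
# In a real file the declaration above must be on line 1 or 2 to take
# effect. Python 3 assumes UTF-8 source by default; Python 2 assumed
# ASCII unless a declaration like this was present.
s = "héllo"  # non-ASCII literal, valid under UTF-8
print(s)
```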
<tpb>
Title: Using importlib.metadata Python 3.8.2 documentation (at docs.python.org)
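The approach from the linked docs reads version and license from the installed distribution's metadata at run time instead of hard-coding `__version__` / `__license__` in the source. A sketch for Python 3.8+ — "pip" is used here only as an example of a distribution that is usually present:

```python
# Read distribution metadata at run time (Python 3.8+ stdlib).
from importlib.metadata import PackageNotFoundError, metadata, version

try:
    v = version("pip")                      # e.g. "20.0.2"
    license_field = metadata("pip")["License"]
except PackageNotFoundError:
    # The example distribution isn't installed in this environment.
    v, license_field = None, None

print(v, license_field)
```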
<FFY00>
I don't think they make sense for most projects
<FFY00>
yes, there are people rewriting the python packaging system
<FFY00>
it is being reworked
<FFY00>
let's hope for the better
<FFY00>
but I think pip is pretty good already
<mithro>
FFY00: The big issue is that rebasing causes commits to change hashes making it very hard to talk about things which end up living in branches for a long time before being merged
<FFY00>
at least it's light years away from package managers from other languages :P
<mithro>
FFY00: Good or bad? I could see both arguments ;-)
<FFY00>
good
<FFY00>
mithro sure, use merge commits for that
<mithro>
I think xobs would have disagreement about that :-)
<FFY00>
but using merge commits by default does not make sense imo
<FFY00>
which package manager would he be defending?
<mithro>
FFY00: He is a fan of rust's
<FFY00>
right
<mithro>
daniellimws: Yeah, I think my search has determined we want to include the hash for Python files
<mithro>
Anyway, I think I'm going to have some dinner
<mithro>
Might be back after, might not
<FFY00>
my main issue with cargo is that cargo install will rebuild everything
<FFY00>
which is awful
<FFY00>
and rust right now doesn't have a stable ABI, so no shared libraries, which means it's a vendoring hell
<FFY00>
:)
Bertl_oO is now known as Bertl_zZ
<xobs>
FFY00: yeah, rust. pip gives me no end of grief. cargo always seems to work.
<FFY00>
because cargo is very opinionated
<FFY00>
and it is still solving an easy problem
<FFY00>
we'll see when rust starts using shared libraries
<FFY00>
:P
<xobs>
how do you mean?
<FFY00>
fetching and statically linking every dependency is far easier than managing installed dependencies
<FFY00>
like pip does
<FFY00>
when rust gets a stable abi and people start using shared libraries you will see
<xobs>
I'm not sure I follow.
<FFY00>
shared libraries will be installed to the system
<FFY00>
cargo will need to manage them
<FFY00>
install, manage versions, resolve dependencies, etc.
<FFY00>
right now it is only downloading the sources and compiling a single binary
<FFY00>
which is awful, but there is no alternative due to the lack of a stable ABI
<FFY00>
when rust gets a stable ABI, everything will start to move towards dynamic libraries
<xobs>
I enjoy the static linking. It's made deploying `wishbone-tool` very easy.
<FFY00>
but it is also very awful
<xobs>
Why?
<FFY00>
duplicated code
<FFY00>
this means you waste *a lot* more space
<FFY00>
is bad for security
<xobs>
The code isn't duplicated, the binary is. And the binary data is small.
<FFY00>
when some library gets a cve, you have duplicated code in 300 different places
<FFY00>
xobs, duplicated binary code
<FFY00>
take a practical example
<FFY00>
electron apps
<FFY00>
they static build and are huge
<FFY00>
xobs, if static building is so great how come it is not standard practice when distributing C libraries?
<FFY00>
all distributions have guidelines against it
<xobs>
FFY00: most C libraries are distributed as source, which usually gives you the option to build a static library.
<FFY00>
no
<FFY00>
they are distributed as shared libraries by the distributions
<FFY00>
you don't compile your source code together with the libraries
<FFY00>
they ship a shared library + headers
<FFY00>
you use the headers for the definitions and link your object against the shared library
<FFY00>
either way, the time will come when rust gets a stable abi
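The shared-library model FFY00 describes can be seen even from Python: `ctypes` asks the dynamic linker for the system math library at run time, so the program never carries libm's code itself. A sketch assuming a Unix-like system where `libm` is resolvable:

```python
import ctypes
import ctypes.util
import math

# Locate the system math library via the dynamic linker machinery;
# on glibc this resolves to something like "libm.so.6".
path = ctypes.util.find_library("m")
libm = ctypes.CDLL(path)

# Declare the C signature: double sqrt(double)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

# The call runs the shared system copy of the code, not a bundled one.
result = libm.sqrt(2.0)
```

This is exactly the "shared library + headers" split: the `restype`/`argtypes` declarations play the role of the header, and the `.so` supplies the code.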
<xobs>
As far as I can tell, distributing static binaries is becoming the norm, what with snap packages and flatpak. Or whatever it's called. I'm not really a Linux desktop user.
<xobs>
All I know is "pip install" has about a 50% chance of actually working.
<FFY00>
snap and flatpak are not the norm
<FFY00>
they are alternatives
<FFY00>
the norm is still the distribution package manager
<xobs>
Usually they assume you have various packages already installed, or that you're running Linux, or that you actually have pip installed.
<xobs>
Whereas "cargo install" is usually enough to get stuff done, or to let someone build a package themselves so they can contribute.
<FFY00>
but because some distributions are so awful with updates snap and flatpak were created
<xobs>
"pip install" is usually met with statements like "oh, you're using it wrong".
<FFY00>
to be used as an alternative in those cases
<FFY00>
xobs, cargo install is enough for now, because rust does not support doing things correctly yet
<FFY00>
this will change
<xobs>
FFY00: I think we have different definitions of "correctly". The "correct" way has been the source of endless headaches for me.
<FFY00>
no
<FFY00>
that's easy
<FFY00>
not correctly
<FFY00>
it's hard to do things correctly
<FFY00>
especially when upstreams provide bad tooling
<xobs>
And pip is bad tooling. Apparently it lets you mix `--user` and not `--user`, which results in packages installed to multiple places, which confuses python, which breaks everything, which requires you to reinstall.
<xobs>
But `pip install` is wrong, you need to use virtualenv, or venv, or easyinstall, or setuptools, or whatever it's called now. Because otherwise the whole system gets broken.
<xobs>
Assuming you even have pip. The "embedded" python from python.org doesn't include it. And neither does the debian install.
<FFY00>
nono
<xobs>
So you need to compile python yourself.
<FFY00>
debian gets broken
<xobs>
Or get it from conda, which modifies your bashrc by default.
<xobs>
And conda is a hundred megabytes of extra stuff just for a python interpreter.
<FFY00>
conda is awful
<xobs>
Which has its own method of managing binaries.
<FFY00>
pip is fine, unless you need outdated dependencies
<FFY00>
if you need that then use venv
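A minimal sketch of that suggestion using only the standard library; the target directory here is a throwaway temp path chosen for illustration:

```python
import os
import tempfile
import venv

# Create an isolated environment instead of mixing --user and system
# site-packages; with_pip=True would also bootstrap pip into it.
target = os.path.join(tempfile.mkdtemp(), "env")
venv.create(target, with_pip=False)

# The environment is described by its pyvenv.cfg; activating it makes
# sys.prefix point here while sys.base_prefix stays on the base install.
print(os.path.isfile(os.path.join(target, "pyvenv.cfg")))
```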
<xobs>
And then you have to worry about the python abi changing, because that's not stable.
<FFY00>
it is stable
<FFY00>
only changes on major versions
<FFY00>
as expected
<xobs>
You mean minor versions?
<xobs>
E.g. you can't use a 3.6 python standard library with a 3.5 interpreter. Or vice-versa.
<xobs>
Also, the actual stuff that's in the python standard library isn't standard.
<xobs>
For example, the embedded python doesn't include the url parsing library.
<FFY00>
yes, sorry, minor version
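The versioning point can be checked directly: CPython's ABI tag embeds the minor version, which is why extension modules and the stdlib aren't interchangeable across, say, 3.5 and 3.6. The values in the comments are examples, not guarantees:

```python
import sys
import sysconfig

# The (major, minor) pair is what the ABI is keyed on.
print(sys.version_info[:2])               # e.g. (3, 8)

# SOABI names the extension-module ABI for this interpreter,
# e.g. 'cpython-38-x86_64-linux-gnu' (may be None on some builds).
print(sysconfig.get_config_var("SOABI"))
```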
<xobs>
And as such, ensurepip doesn't work.
<FFY00>
embedded python does not implement the full standard
<FFY00>
and I don't think anyone is claiming that
<xobs>
And I don't see the value with e.g. including eight different boost_1.67 dll files with a program I ship. Very few things will include that particular version of boost. So I might as well statically link it and use lto to remove stuff that isn't used.
<FFY00>
no
<FFY00>
have that installed on the system
<FFY00>
and don't ship dlls
<FFY00>
that is how things work in linux
<FFY00>
of course on windows there is no such thing
<FFY00>
you need to ship the files
<FFY00>
and then you need a 300gb disk just to have all your programs
<FFY00>
while on linux 60gb is sufficient
<FFY00>
for an equivalent amount of programs
<FFY00>
also, when there is a cve in boost you need to push an update of your apps
<FFY00>
or people are vulnerable
<FFY00>
while on linux you update the system boost
<FFY00>
instead of updating 200 packages
<FFY00>
the same thing happens in rust
<FFY00>
since everything is statically linked
<xobs>
The nextpnr binary is 110 MB. If I were to remove the boost files, it would shrink by 300k. And I highly doubt a CVE there would affect the actual program.
<xobs>
dynamically linking boost does not help, and just adds headache.
<FFY00>
right, enjoy windows :)
<xobs>
I do!
<xobs>
And you enjoy Linux.
<xobs>
I think it's interesting in that the two approaches are rooted in different beliefs, and that's very evident in what we're both arguing for.
<xobs>
The approach you're advocating is very useful and easy to deal with assuming you have a competent package manager.
<FFY00>
but I agree managing dependencies on windows is a PITA
<xobs>
For example, nix people really advocate that approach because their package manager is amazing.
<FFY00>
but on linux there is no excuse
<xobs>
Whereas for me, static linking simplifies things.
<xobs>
Mac is somewhere in the middle. Bundled frameworks are a thing, but it's just okay.
<xobs>
Static linking is the lowest common denominator, and cargo lets you do that well.
<xobs>
So if you're on a system that has gone all-in on shared modules, then yes, pip and python are great.
<xobs>
If not, then all the arguments you make fall down, and just result in confusion.
az0re has joined #symbiflow
<xobs>
It's probably due to a cultural failure in how software is installed on Windows. But that's the environment I have to deal with.
<xobs>
Being able to host an FPGA workshop and tell everyone, regardless of their platform, to just run this one executable file and they'll be able to interact with a Fomu has been a huge boon. And I don't know that I'd be able to pull that off if wishbone-tool were written in python.
_whitelogger has joined #symbiflow
<sf-slack>
<timo.callahan> @kgugala, I have summarized some comments about the tflite demo. How do you prefer I send them? (i) add each item as a new issue on the git repository, (ii) add them as comments to the issue that I already opened (#11), (iii) Google doc, or, (iv) e-mail them to you. Or something else?
OmniMancer1 has joined #symbiflow
OmniMancer has quit [Ping timeout: 256 seconds]
<OmniMancer1>
I am late to the discussion but note that most of boost is header only libraries so the dynamic/static linking has no effect on updating those, and also that in C++ anything making use of templates will have complications in whether you can just update the shared library
<OmniMancer1>
And Rust's encouragement of generic programming makes use of shared libraries of little value since much code ends up in the dependent binary not in the library and like for C++ complicates the just update the library to get security fixes thing
<sf-slack>
<kgugala> @timo.callahan you can add the comments to existing github issue, or open a new one. If it's something you already solved you can open a PR with the fix
kraiskil has joined #symbiflow
anuejn_ is now known as anuejn
<_whitenotifier-3>
[sphinxcontrib-verilog-diagrams] oharboe opened issue #10: Problems installing the plugin - https://git.io/Jfvlp
az0re has quit [Remote host closed the connection]
acomodi has joined #symbiflow
Bertl_zZ is now known as Bertl
rvalles has joined #symbiflow
rvalles_ has quit [Ping timeout: 265 seconds]
<daniellimws>
mithro: Here the example says "D-Flip flop with combinational logic". Doesn't a d flip flop only have 2 inputs (clock, d) and 1 output (q)? Should I rename it to just "flip flop" without the D in front of it?
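For reference, a plain D flip-flop does have just clk/d/q; "with combinational logic" usually means extra logic computing d before the clock edge. A toy behavioural model, with illustrative names:

```python
class DFF:
    """Toy behavioural D flip-flop: q takes the value of d on each
    rising clock edge and holds it otherwise."""

    def __init__(self):
        self.q = 0

    def rising_edge(self, d):
        self.q = d
        return self.q


# Combinational logic feeding the flop: d = a XOR b
dff = DFF()
a, b = 1, 0
dff.rising_edge(a ^ b)
print(dff.q)  # 1
```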
kraiskil has quit [Ping timeout: 258 seconds]
kraiskil has joined #symbiflow
bluel has joined #symbiflow
bluel has quit [Remote host closed the connection]
Ultrasauce has quit [Quit: No Ping reply in 180 seconds.]
Ultrasauce has joined #symbiflow
kuldeep has quit [Read error: Connection reset by peer]
kuldeep has joined #symbiflow
kuldeep has quit [Remote host closed the connection]
kuldeep has joined #symbiflow
Bertl is now known as Bertl_oO
FFY00 has quit [Remote host closed the connection]
FFY00 has joined #symbiflow
FFY00 has quit [Remote host closed the connection]
FFY00 has joined #symbiflow
OmniMancer1 has quit [Quit: Leaving.]
gsmecher has joined #symbiflow
<FFY00>
mithro, github actions doesn't work for us
<mithro>
FFY00: Oh? That is sad :-(
<FFY00>
it runs actions in containers
<ZirconiumX>
I don't know if anybody here particularly cares, but with synth_intel_alm merged into Yosys master you can now use it to synthesise bits of gateware
<sf-slack>
<timo.callahan> @kgugala I'll work on a PR
OmniMancer has joined #symbiflow
<sf-slack>
<timo.callahan> @kgugala although here's one thing that I am not sure about and would like some advice: After running "west init ...", the command prints out "=== Initialized. Now run "west update" inside /home/tcal/antmicro/litex-vexriscv-tensorflow-lite-demo.". Is the user supposed to ignore that and instead run what the demo says, "git submodule update --init --recursive" ? Or is the user supposed to run both update
<sf-slack>
commands?
citypw has quit [Ping timeout: 240 seconds]
OmniMancer has quit [Read error: Connection reset by peer]
kraiskil has quit [Ping timeout: 250 seconds]
kraiskil has joined #symbiflow
az0re has joined #symbiflow
<sf-slack>
<kgugala> @timo.callahan you can ignore this step for the LiteX/Vexriscv TF lite demo
<sf-slack>
<kgugala> @timo.callahan what `west update` does is to fetch third_party HALs
<sf-slack>
<kgugala> we don't need them for LiteX/Vexriscv port
<sf-slack>
<kgugala> and this step takes a lot of time, so we simply skip it :)
<sf-slack>
<timo.callahan> @kgugala thanks!
<mithro>
@ZirconiumX It is my understanding that VtR has a flow which takes vqm?
<mithro>
ZirconiumX: It would be interesting to see if you can do Yosys->VPR here
<ZirconiumX>
mithro: Well, I can try
<ZirconiumX>
Last time you mentioned it, you said it was for Cyclone IV though
<mithro>
@ZirconiumX I say a lot of things and don't remember all of them :-P
<ZirconiumX>
This is running on Cyclone V
<ZirconiumX>
Which is very different to CIV
<mithro>
ZirconiumX: I'm sure the VPR devs would be interested in an architecture model for CV even if it doesn't really match the real hardware
<ZirconiumX>
mithro: I'm fairly confident I can match the hardware; but I've no idea where to begin
<tpb>
Title: GitHub - ZirconiumX/yosys-testrunner: Program for robustly comparing two Yosys versions (at github.com)
<mithro>
ZirconiumX: nope!
<ZirconiumX>
You don't need the exact code, but what this does is use a statistical method to determine which of two versions is superior
<ZirconiumX>
(specifically the sequential probability ratio test)
<ZirconiumX>
And when using something even as simple as "X produces a greater Fmax than Y" as a hypothesis, it turns out that there's a very high signal-to-noise ratio
<ZirconiumX>
So you can fairly easily go down to 0.001% false-positive error
<ZirconiumX>
(the point of using SPRT is that you can bound the false-positive rate)
<ZirconiumX>
(and false-negative rate)
<ZirconiumX>
Additionally the SPRT implemented here is multinomial
<ZirconiumX>
So it can find the "better" one even when you give it a bunch of criteria to work with
<ZirconiumX>
"which one is faster to route?"
<ZirconiumX>
"which one produces a better area?"
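A minimal sketch of the idea — binomial rather than the multinomial version ZirconiumX describes, with `p0`, `p1`, and the 1/0 trial encoding chosen purely for illustration. Wald's SPRT accumulates a log-likelihood ratio over trials and stops as soon as it crosses bounds derived from the desired false-positive rate `alpha` and false-negative rate `beta`:

```python
import math


def sprt(outcomes, p0=0.45, p1=0.55, alpha=0.001, beta=0.001):
    """Binomial SPRT: outcomes is an iterable of 1 (version A won the
    trial) / 0 (version B won). H0: P(A wins) = p0; H1: P(A wins) = p1.
    alpha/beta bound the false-positive and false-negative rates."""
    upper = math.log((1 - beta) / alpha)   # cross this -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross this -> accept H0
    llr = 0.0
    for x in outcomes:
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "A is better"
        if llr <= lower:
            return "B is better"
    return "undecided"
```

With `alpha = beta = 0.001` and these `p0`/`p1`, roughly 35 consecutive wins for A are enough to stop, while a mixed record keeps the test running — which is the bounded-error, stop-early property that makes SPRT attractive here.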
tmichalak has quit [Ping timeout: 256 seconds]
kgugala has quit [Ping timeout: 250 seconds]
tmichalak has joined #symbiflow
kgugala has joined #symbiflow
<mithro>
ZirconiumX: Sure, working on that loop bit to improve the statistics would be interesting -- I'm no expert around that
<ZirconiumX>
mithro: naturally there's a CPU time/error tradeoff
<mithro>
CPU time isn't a huge concern for me -- developer time is much more important
<ZirconiumX>
The script I wrote is designed to be relatively simple to run and reproducible
<_whitenotifier-3>
[sphinxcontrib-verilog-diagrams] mithro opened issue #11: Add Travis CI - https://git.io/JfvDV
<ZirconiumX>
But since I kinda messed up the numbers, I deleted them (I wasn't using post-PnR numbers but post-synth)
tmichalak has quit [Ping timeout: 264 seconds]
kgugala has quit [Ping timeout: 240 seconds]
tmichalak has joined #symbiflow
kgugala has joined #symbiflow
nonlinear has quit [Read error: Connection reset by peer]
nonlinear1 has joined #symbiflow
az0re has quit [Ping timeout: 240 seconds]
kraiskil has quit [Ping timeout: 256 seconds]
az0re has joined #symbiflow
felix_ has left #symbiflow ["WeeChat 2.3"]
OmniMancer has joined #symbiflow
space_zealot has joined #symbiflow
<sf-slack>
<timo.callahan> @kgugala, I think I need to be given permissions on the Antmicro repo to push my branch (I'm tcal-x).
epony has quit [Read error: Connection reset by peer]
epony has joined #symbiflow
epony has quit [Remote host closed the connection]
<FFY00>
mithro, I just remembered
<FFY00>
msys uses pacman as its package manager
gsmecher has quit [Ping timeout: 256 seconds]
<FFY00>
so with just a little more effort we can also provide msys packages
epony has joined #symbiflow
<FFY00>
and I was thinking, we could use libalpm (pacman backend) to write a custom package manager that works in the user directory and we can use the same arch packages
<FFY00>
shouldn't be too hard, as it's just a frontend
<mithro>
FFY00: I don't want to be in the position of maintaining a totally separate ecosystem, want to try and reuse existing ecosystems -- hence why conda is attractive (as it is already heavily used as part of the scientific computing and ML ecosystems)