ChanServ changed the topic of #zig to: zig programming language | | be excellent to each other | channel logs:
swills has quit [Ping timeout: 240 seconds]
<noam> Amusingly, since Netsurf prepends `NetSurf - ` to the window title, it showed up as "NetSurf - The Modern Packager's Security Nightmare" :P
ky0ko has joined #zig
ur5us_ has joined #zig
mokafolio has quit [Quit: Bye Bye!]
swills has joined #zig
mokafolio has joined #zig
jmiven has quit [Quit: reboot]
jmiven has joined #zig
moka has joined #zig
moka has quit [Client Quit]
moka has joined #zig
mokafolio has quit [Ping timeout: 260 seconds]
moka is now known as mokafolio
swills has quit [Ping timeout: 260 seconds]
<andrewrk> recovery cases fixed; now working on auto rewriting inline fn to callconv(.Inline)
<g-w1> will these auto-translation things be removed at 1.0?
<andrewrk> we've had dozens of auto translations that are gone already; typically we leave them in for one release cycle
<andrewrk> just makes things a bit easier on people who track master branch
hnOsmium0001 has joined #zig
<g-w1> makes sense
ur5us has joined #zig
ur5us_ has quit [Ping timeout: 264 seconds]
<andrewrk> done. going for a run before deciding my next task
<andrewrk> 41 more test cases to go
<noam> Enjoy your run :)
swills has joined #zig
reductum has joined #zig
<v0idify> andrewrk, why would it RIP ZLS, would another implementation need to be made?
<andrewrk> ZLS will be broken until it is updated because it depends on zig's AST API
<andrewrk> I'm being dramatic. it will require a few days of work, that's all
ur5us_ has joined #zig
ur5us has quit [Ping timeout: 272 seconds]
fengh has joined #zig
brzg has joined #zig
gazler__ has joined #zig
gazler_ has quit [Ping timeout: 272 seconds]
bitmapper has quit [Quit: Connection closed for inactivity]
leon-p has quit [Quit: leaving]
brzg has quit [Quit: leaving]
<v0idify> is there a magic thing zig will do that will make both program authors and packagers happy? /s but is there something interesting proposed?
<mikdusan> frankly in my experience packagers aren't ready for the bona fide needs of projects when they differ from the general desire to share deps across packages
osa1 has quit [Ping timeout: 264 seconds]
<daurnimator> pinning only works for *application* development; not for library development
<daurnimator> however most applications are a fraction of the size of the libraries they use
reductum has quit [Quit: WeeChat 3.0]
eax has quit [Ping timeout: 268 seconds]
xackus_ has joined #zig
xackus has quit [Ping timeout: 264 seconds]
g-w1 has quit [Quit: WeeChat 2.7.1]
jacob3 has joined #zig
jacob3 is now known as g-w1
g-w1 has quit [Client Quit]
jacob3 has joined #zig
v0idify has quit [Remote host closed the connection]
v0idify has joined #zig
mokafolio has quit [Ping timeout: 264 seconds]
g-w1 has joined #zig
jacob3 has quit [Quit: WeeChat 2.7.1]
dyeplexer has joined #zig
jacob3 has joined #zig
jacob3 has quit [Client Quit]
bitmapper has joined #zig
ur5us_ has quit [Ping timeout: 240 seconds]
<marler8997> candidate for Zig Zen: "always fails is better than sometimes works"
<marler8997> something I codified this morning as I tried to analyze some of my programming sensibilities
<daurnimator> marler8997: ehhhhhhh. that ends up being a recipe for bad portability
<g-w1> runtime crashes are better than bugs?
<g-w1> it's kind of the same thing
<marler8997> daurnimator, the opposite actually
<daurnimator> marler8997: e.g. have a look at the issues around OpenBSD not having a good .openSelfExe
<marler8997> g-w1, yes, I think it's a more general version of that
<daurnimator> marler8997: we had to select a "sometimes works" rather than a "robust"
<g-w1> maybe it should replace it
<daurnimator> seems to be the relevant issue
<daurnimator> though I remember griping in the issue/pull request that added the workaround in the first place
<marler8997> I like "runtime crashes are better than bugs", though these two statements might differ in some ways
<daurnimator> "comptime errors > runtime crashes > bugs "
<marler8997> "sometimes works" is where bugs live
<daurnimator> marler8997: "sometimes works" is the opposite of robust, which is in the language tag line
<marler8997> exactly
<marler8997> everyone can agree "sometimes works" is bad
<marler8997> what I think is insightful is that it's better to "always fail" than "sometimes work"
<daurnimator> marler8997: except when the alternative is "doesn't work"....
<marler8997> it's saying, don't release a feature half-baked, either implement it robustly, or don't implement it at all
<daurnimator> e.g. it's better to have stack tracebacks *most of the time* than never
<marler8997> daurnimator I would agree with that
<marler8997> so maybe then the definition of "works" needs modification
<daurnimator> marler8997: you can also get into tricky definitions of "works" -> e.g. what if something is O(n^2) on one platform and O(1) on another?
<marler8997> there's some sort of distinction between "partially implemented" and "buggy code that fails catastrophically"
<andrewrk> @panic("TODO") vs a bug at runtime
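andrewrk's "@panic(\"TODO\") vs a bug at runtime" distinction can be sketched in a few lines of Python (a toy analogue of the selfExePath discussion, not Zig's actual std code; the platform check and function names are purely illustrative):

```python
import os
import sys

def self_exe_path_buggy():
    # "Sometimes works": guesses from argv[0]. Silently wrong when the
    # program was found via PATH lookup or argv[0] was overwritten.
    return sys.argv[0]

def self_exe_path_strict():
    # "Always fails" where a robust answer is impossible: callers hit a
    # loud, immediate error instead of shipping a latent bug.
    if sys.platform.startswith("openbsd"):  # illustrative unsupported case
        raise NotImplementedError("TODO: no robust self-exe lookup here")
    return os.path.realpath(sys.argv[0])
```

The strict variant is the "@panic(\"TODO\")" pattern: the unsupported path is unmissable at the call site rather than discovered in production.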
<marler8997> yeah that's a good description
<marler8997> looking back it looks like it's basically what daurnimator said "runtime crashes > bugs"
<daurnimator> marler8997: which is the whole philosophy of segfaults
<marler8997> oh yeah that's true
<daurnimator> marler8997: exit with "segmentation fault" is better than "run random data as code"
<daurnimator> it's one of the lessons I used to instill when teaching programming
<marler8997> Maybe providing examples of our Zig Zen points somewhere would be beneficial
<marler8997> examples like the one you just mentioned
<g-w1> it could be in the zig zen section in the docs/langref
<marler8997> yeah that seems like a good place
<marler8997> I think my original statement of "sometimes works" was actually supposed to be "has bugs"
<marler8997> no implementation is better than a buggy implementation?
midgard_ has quit [Read error: Connection reset by peer]
<daurnimator> marler8997: could I hear you attempt to apply that rule to ?
<siraben> daurnimator: Nix address many problems raised in that article
midgard has joined #zig
<siraben> (the one about package managers)
<marler8997> I'm not sure how it applies
<marler8997> OpenBSD doesn't support getting its own exe, not sure the relation
<marler8997> maybe the rule would state, it's better to not implement the selfExePath feature on OpenBSD unless it can be implemented without bugs?
<siraben> daurnimator: take for instance the issue of pinning dependencies. In many package managers the dependency specification is purely nominal (by name) and inexact, so anything with the right name and version could satisfy the dependency spec.
<daurnimator> marler8997: the implementation we have now does have potential bugs: it's a hacky work around. but without it we wouldn't have 1. stack traces on openbsd. 2. be able to distribute binaries.
<marler8997> yeah the Nix PhD thesis mentions that exact issue
<siraben> right i was about to quote it, hehe
<siraben> it's actually very readable for a PhD thesis
<siraben> Nix OTOH uses a hash of the dependency to remove any ambiguity, so a package that depends on GNU Hello 1.0 with patches will be different from a package that depends on GNU Hello without patches
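The hash-based identity siraben describes can be sketched in Python (a toy model of a Nix-style store path, not Nix's actual hashing scheme; all names here are made up):

```python
import hashlib

def store_name(name, version, sources, patches=()):
    """Toy Nix-style identity: the hash covers sources *and* patches, so
    'hello-1.0 with patches' and plain 'hello-1.0' are distinct deps."""
    h = hashlib.sha256()
    for blob in list(sources) + list(patches):
        h.update(hashlib.sha256(blob).digest())
    return f"{h.hexdigest()[:32]}-{name}-{version}"

plain   = store_name("hello", "1.0", [b"int main(){return 0;}"])
patched = store_name("hello", "1.0", [b"int main(){return 0;}"],
                     patches=[b"--- fix overflow"])
assert plain != patched  # same name+version, different identity
```

Because identity is content-derived rather than nominal, two packages that "depend on GNU Hello 1.0" but with different patch sets can never be confused with each other.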
<daurnimator> siraben: it's not though: the whole article is fighting against the trend in package managers (go, rust, recent python, node) of pinning dependencies
<marler8997> daurnimator, we already clarified that "sometimes works" != "has bugs"
<marler8997> so having an implementation of selfExePath that works some of the time, without bugs, wouldn't apply to the rule
<daurnimator> marler8997: this isn't a "sometimes works" though: it's a "has bugs when used in combination with a few other features, which is undetectable; and we're sort of just lucky we don't use them right now"
<marler8997> oo that's a tricky one
<marler8997> this one might be an exception to the rule then
<daurnimator> marler8997: the implementation we use right now for selfExePath on OpenBSD only works if: 1. you are an application not a library. 2. you don't modify arg0. 3. no one has called chroot. 4. no one has moved or replaced the running executable. 5. you haven't modified PATH in the current process..... etc.
<marler8997> yeah that sounds super ugly, but sometimes ugly is necessary
<marler8997> unfortunately
<marler8997> that really leaves us with no choice
ylambda has quit [Remote host closed the connection]
<marler8997> the rule of no implementation > buggy implementation wouldn't apply if 1) a bug-free implementation is impossible and 2) having any implementation at all provides good benefit
<daurnimator> won't *everyone* say they fit into 2) ?
<g-w1> why not just @compileError if on openbsd?
<marler8997> daurnimator, I'm not sure what you mean by "won't everyone say"?
<daurnimator> g-w1: because it was decided that a buggy implementation is better than none..
<marler8997> This is just a general guidance when writing code
<g-w1> ok
<marler8997> it's saying, if you're about to implement something that you know will have bugs, maybe just @panic() for now and come back to it later
<daurnimator> marler8997: what's the point of the zig zen if not something to point at when reviewing someone else's code?
<marler8997> did I say we shouldn't point to it when reviewing code?
<daurnimator> and I'm saying that the inevitable reply from the code author *every* time you point to that item of the zen will be "but having any implementation at all provides good benefit"
<marler8997> if it's a partial implementation I would agree
<marler8997> if it's a buggy implementation, I think I would disagree
<marler8997> unless again, the other 2 criteria apply
<marler8997> but I'm glad that you are taking issue with the rule, I think it shows that the rule in and of itself is not obvious
<marler8997> it codifies a programming technique that I've learned over the years and I think makes my code better overall
<siraben> daurnimator: in any case I agree with the article's claim that bundling dependencies is problematic. When upstream tries to be too clever about things, it usually causes the most headaches
<daurnimator> siraben: it's more about the issues of pinning dependencies (of which bundling is one form)
<marler8997> If I understand correctly, in the article he said "pinning dependencies" is problematic because it makes updating them more difficult
<siraben> Why wouldn't pinning dependencies make sense? If you allow slightly different dependencies to be used in a piece of software the results may change
<marler8997> if that's the case, then this is not an issue for nix, because updating a package in your nixpkgs file updates it for everyone (by default)
<siraben> Right.
<siraben> marler8997: but for Rust/Go packages aren't they pinned by the project's lockfile?
<marler8997> not sure on that one
<daurnimator> siraben: that's the whole point! if a library the application uses has a bugfix; I have to wait for the application to update its lockfile before I can get the fix
<siraben> with Nix's Hydra build farm it also lets one build all packages that depend on an updated library and check for regressions
<daurnimator> when you get to dependency trees 5 layers deep, it can be a decade to get a bugfix
<marler8997> daurnimator, yeah that's an issue with "lockfile dependencies", but this doesn't apply to nixpkgs
<daurnimator> or not even a bugfix, a performance fix would have the same problem
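daurnimator's "decade to get a bugfix" arithmetic can be made concrete with a tiny simulation (the lag numbers are invented; the point is only that delays compound per pinned layer):

```python
def fix_arrival_times(release_lag_per_layer):
    """Months until a dependency fix reaches each layer of a pinned
    chain: every layer must re-publish its lockfile before the layer
    above it can even see the fix."""
    arrivals, total = [], 0
    for lag in release_lag_per_layer:
        total += lag
        arrivals.append(total)
    return arrivals

# Five pinned layers, each re-pinning roughly every six months:
print(fix_arrival_times([6, 6, 6, 6, 6]))  # → [6, 12, 18, 24, 30]
```

Two and a half years for a five-layer tree at six months per layer; at daurnimator's upper bound of two years per layer, the same tree really does take a decade.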
<daurnimator> marler8997: why not?
<siraben> marler8997: " buildRustPackage requires either the cargoSha256 or the cargoHash attribute which is computed over all crate sources of this package. cargoSha256 is used for traditional Nix SHA-256 hashes, such as the one in the example above. cargoHash should instead be used for SRI hashes. For example: "
<siraben> it applies to Rust packages for instance
<marler8997> > if that's the case, then this is not an issue for nix, because updating a package in your nixpkgs file updates it for everyone (by default)
<siraben> marler8997: say I update SDL2 in Nixpkgs, then every package that depends on it (directly or transitively) gets a rebuild.
<siraben> But if a package bundles SDL2 with it, well it doesn't get rebuilt.
<siraben> Oops wanted to ping daurnimator
<daurnimator> siraben: how does nix handle a go application like, uh (trying to pick a well known one... chezmoi?) that has a go.mod and go.sum file?
<siraben> daurnimator: latest version bump of chezmoi:
<siraben> vendorSha256 is dependent on chezmoi's lock file
<siraben> So it's up to chezmoi to update their pinned versions
<daurnimator> siraben: so does `buildGoModule` ignore go.mod/go.sum?
<siraben> it uses the go.mod file
<daurnimator> siraben: right that's the whole problem!
<siraben> Is the issue that say OpenSSL has a vuln then Go packages won't have that issue resolved?
<siraben> whereas all the C or C++ packages that explicitly have OpenSSL as as dependency would build against the newer version?
<siraben> Hm.
<daurnimator> siraben: imagine e.g. (picking a random dep here) go-vfs 2 has a bugfix (
<daurnimator> siraben: how do I (as a nix user) make sure i get that bug fix?
<marler8997> nix package override?
<siraben> you will have to apply a patch to go.mod and update vendorSha256 accordingly
<marler8997> and/or, submit a PR to nixpkgs to apply it
<siraben> via an overlay, say.
<daurnimator> the end user has to do it? or the package maintainer?
<siraben> How much of a security concern is this? I understand the threat but how would you actually exploit an insecure dependency of a Go program?
<daurnimator> siraben: I didn't say security. I said bug fix. hell maybe its a 10x performance improvement.
<siraben> Shouldn't that be up to upstream?
<marler8997> I'm not seeing the problem, daurnimator are you saying that package managers should take updates from every single tool/project they reference automatically?
<siraben> daurnimator: end user could apply the patch immediately while waiting for the change to be merged in Nixpkgs, say.
<daurnimator> siraben: I don't think so: why would you expect an application to have a release that does nothing but e.g. bump pinned dependencies?
<daurnimator> not to mention that for a deep dependency tree
<marler8997> oh we're talking about indirect dependencies, not direct dependencies
<siraben> It's like you want lockfiles but without the locking part, which is the point
<daurnimator> no. I *don't want lockfiles*
<marler8997> daurnimator, my guess is it's not using go.mod
<siraben> But why not? You lose reproducible builds!
<daurnimator> siraben: not if I build with the same dependencies: which I could select. but default should be newest releases of dependencies
<daurnimator> also if you're using dynamic linking you wouldn't lose reproducible builds either; but that's beside the point
fengh has quit [Ping timeout: 260 seconds]
<marler8997> If your philosophy is "I always want the latest version of all my dependencies, even if they have not been verified", then that's just a fundamental difference
<daurnimator> marler8997: what is "verified"
<marler8997> in nix land, verified means it has passed CI
<siraben> then if you build your package now and in 6 months, by following the latest releases you could break your package
<marler8997> the nixpkgs CI to be specific
<daurnimator> okay so bringing it back to a nixpkgs context: every go dep should have its own nixpkg
<siraben> marler8997: in building Python packages for Nixpkgs you have to specify all the python inputs actually, so it would follow the "latest" version (latest in nixpkgs)
<daurnimator> and then you would depend on that in whatever the way you depend on packages is
<siraben> I suppose we could make it so that the entirety of all go modules is added as a package set in Nixpkgs and say to hell with lockfiles and make package maintainers specify them as inputs
<marler8997> I'm not sure what nixpkgs does with go
<marler8997> but siraben I believe you're correct about python packages
<siraben> Looks like we don't have a goPackages set, but we have haskellPackages, python3Packages etc.
<marler8997> every python package has its own derivation, so updating it updates it for everyone
<siraben> but from a packager POV, doing that for python is more painful than just obtaining the vendorSha256 hash and being done with it.
<marler8997> I can see nix deciding to just leverage what Go already has, if Go wants to change it, it can change it
<daurnimator> marler8997: I think Go in particular has made it impractical to do "the right thing"
<marler8997> right that's kinda what it sounds like
<daurnimator> and I hope zig doesn't go the same way
<siraben> I need to try out sometime, which takes advantage of a poetry.lock file to generate the expression for Nixpkgs
<marler8997> I think if Go had a way to easily configure/override go.mod, then maybe that would be enough?
<daurnimator> marler8997: I think the pattern of having a lockfile in version control is poisonous
<siraben> marler8997: sounds like it. The only way I can think of to override go.mod is to do some sed wizardry
<daurnimator> attach it to a release some other way
<marler8997> daurnimator what do you suggest?
<siraben> daurnimator: I have to disagree. I've had to deal with minor version bumps of dependencies that broke API use, despite being a minor version bump
<siraben> so necessitates lockfiles
mokafolio has joined #zig
<siraben> then I run a tool to check for outdated deps and update my version constraints accordingly
<marler8997> if you have a better solution than lockfiles, that means my build doesn't break I'd love to hear it
<siraben> e.g. to find outdated packages for dart
<siraben> Maybe it also suffices to have a way to override a dependency of a dependency? Then that would satisfy daurnimator's goal of making sure a program uses the latest version of Y library
<marler8997> right, overriding go.mod would need to be recursive
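The recursive override that siraben and marler8997 converge on can be sketched as a toy resolver (the graph shape and names like `go-vfs`/`chezmoi` versions are illustrative, not real package data):

```python
def resolve(graph, root, overrides=None):
    """graph: package -> {dep: pinned_version}. Overrides apply at
    every depth of the tree, unlike hand-editing one top-level go.mod."""
    overrides = overrides or {}
    chosen = {}
    def walk(pkg):
        for dep, pinned in graph.get(pkg, {}).items():
            chosen[dep] = overrides.get(dep, pinned)
            walk(dep)
    walk(root)
    return chosen

graph = {"chezmoi": {"libA": "1.4.0"}, "libA": {"go-vfs": "2.0.0"}}
# The bugfix release reaches the app without waiting for libA to re-pin:
print(resolve(graph, "chezmoi", {"go-vfs": "2.0.1"}))
# → {'libA': '1.4.0', 'go-vfs': '2.0.1'}
```

This is essentially what a Nix overlay gives you for ordinary packages: one substitution, visible to every dependent, direct or transitive.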
osa1 has joined #zig
<marler8997> but if Zig encapsulated every go package in a derivation, it could do that
<siraben> In which case it might be impossible to do with Nix because you can only touch the go.mod file really
<marler8997> sorry, I mean Nix (not Zig)
<siraben> Yes
<siraben> the pythonPackages set is probably a prime example of this, since Python packaging is so awful and inexact
<marler8997> yeah, they could do the same with Go if they were so inclined
<marler8997> but best not to encourage them to make Go better right? :)
<marler8997> just leave Go how it is, then we can use this article as a reason not to use Go. Zig Stonks!
<siraben> zigPackages package set when?? :P
<siraben> Unfortunately Nixpkgs' Zig is broken on darwin because we're still using the 10.12 macOS SDK
Ristovski has quit [Ping timeout: 260 seconds]
<marler8997> is 10.12 one of the versions Apple no longer supports?
[Ristovski] has joined #zig
<siraben> it's already EOL'd and 10.13 will be EOL'd soon
<marler8997> shoot
<siraben> we really should update it soon because
<siraben> > Go 1.16 is the last release that will run on macOS 10.12 Sierra. Go 1.17 will require macOS 10.13 High Sierra or later.
leon-p has joined #zig
waleee-cl has quit [Quit: Connection closed for inactivity]
_whitelogger has joined #zig
sord937 has joined #zig
fengh has joined #zig
knebulae has quit [Ping timeout: 265 seconds]
bipbopbee has quit [Ping timeout: 264 seconds]
bitmapper has quit [Quit: Connection closed for inactivity]
lqd has joined #zig
<noam> siraben: upstreams misusing versioning is not a legitimate reason to subvert the *entire* packaging ecosystem
<noam> While I largely disagree with the article's stance on static linking, the points on dependency pinning are absolutely correct - as bad as heartbleed was, it'd have been a million times worse if people had pinned the old versions.
<noam> imagine if a web browser pinned the SSL library they were using
<noam> I'd bet all the money I've ever conceived of that it would've directly affected tens of thousands of people, minimum, before it was addressed, people who would have been unaffected if not for pinning
<noam> The fact that security implications of other software is less blatant does not mean that they aren't real
<noam> That said, breaking changes should *never* be a minor version bump.
<siraben> So, make it easy to override dependencies of a package?
<siraben> They never should be, but I've seen enough breakage from minor version bumps.
<noam> Sure, but is that really *more* of a concern than the effects of pinning?
<noam> Breakage with bad versioning usually means failed builds, which makes the problem obvious and forces people to address it.
<noam> Dependency pinning *hides* the issue, but does not solve it
<noam> Which means people will be silently running insecure software without knowing it
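noam's point, that range-based resolution picks up a fix on the next build while a pin silently stays vulnerable, can be shown with a toy resolver (the version numbers are invented):

```python
def newest_in_major(available, major):
    """Range-style resolution: newest release within one major version,
    i.e. '>=major.0, <major+1.0' in lockfile-free terms."""
    matching = [v for v in available if v[0] == major]
    return max(matching) if matching else None

releases = [(1, 2, 3)]            # 1.2.3 has the security bug
pinned = (1, 2, 3)                # lockfile: exactly 1.2.3, forever
releases.append((1, 2, 4))        # upstream ships the fix

assert newest_in_major(releases, 1) == (1, 2, 4)  # next build gets the fix
assert pinned == (1, 2, 3)                        # the pin never moves
```

The range resolver requires no action from any downstream maintainer; the pin requires explicit action at every layer, which is exactly where the "hidden" insecurity lives.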
<siraben> Well, in Nixpkgs everything that declares openssl as a dependency will be rebuilt when openssl is updated. However Go/Rust programs which use lockfiles will not (I think)
<noam> That's the problem.
<noam> The former is fine
<noam> The latter is *not*
<siraben> Sure, but what's the proposed solution?
<noam> "Stop using it."
<siraben> Dunno how Arch/Debian deals with this
<noam> Dependency pinning causes far more issues than it solves
<siraben> FWIW one can also pin the Nixpkgs revision because of its monorepo nature, heh
<noam> Personally, I'd blacklist any software using it blindly from any repo I ran
<noam> That's not an improvement lol
<siraben> nah it was just an example
<noam> There *are* legitimate use cases for dependency pinning, but "fight against upstreams" isn't one of them
<noam> Fighting upstreams is the Go / Rust model.
<siraben> I don't think stop using it is an adequate solution
<noam> Basically, instead of working with upstreams and getting them to stop making breaking changes without bumping major version, Go / Rust projects say "assume all changes are breaking and don't update without explicit action"
<siraben> Being able to override dependencies sounds much more composable
<noam> which is 1000x worse.
<noam> It's a failed attempt to solve a social issue with a technical resolution.
<noam> The issue that you mention - breaking changes without major version bumps - is not truly a technical issue.
<noam> It's a *social issue*.
<siraben> I realize.
<noam> Technical solutions to social problems always make things worse. 100% of the time.
<siraben> When updating a lockfile for a Flutter project a package bumped from 1.6.3 to 1.6.4 and broke the build, WTF?
<noam> (Note: social problems here does not mean problems affecting a society, but problems arising from the interaction of people)
<noam> (I'm not saying technology can't help with e.g. hunger, but that technology can't solve interpersonal issues)
<siraben> Maybe a packaging system/programming language should force semantic versioning to be bumped if the public API changes
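siraben's idea of forcing a semver bump when the public API changes is mechanically checkable. A toy checker over name-to-signature maps (real tools in this space do a far deeper structural diff; the signatures here are just strings for illustration):

```python
def required_bump(old_api, new_api):
    """old_api/new_api: public name -> signature. Removals or signature
    changes demand a major bump; pure additions only a minor one."""
    removed_or_changed = any(
        name not in new_api or new_api[name] != sig
        for name, sig in old_api.items()
    )
    if removed_or_changed:
        return "major"
    if any(name not in old_api for name in new_api):
        return "minor"
    return "patch"

assert required_bump({"open": "(path)"}, {"open": "(path, mode)"}) == "major"
assert required_bump({"open": "(path)"},
                     {"open": "(path)", "close": "(fd)"}) == "minor"
assert required_bump({"open": "(path)"}, {"open": "(path)"}) == "patch"
```

A registry that ran this check at publish time would reject exactly the kind of 1.6.3 → 1.6.4 breakage siraben describes.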
<noam> siraben: so you either a) contact upstream and get them to fix it or b) stop using that upstream
<noam> if you can't trust the upstream to work with you, then why would you use it in the first place?
<noam> an upstream that is hostile towards you is not one you should rely on
<siraben> That's the case for a lot of enterprise software unfortunately
<siraben> and the enterprise solution to this (according to a friend who works in it), distributing entire VMs, is actually pretty common
<siraben> lol
<noam> So... break the entire package management ecosystem... to accommodate what is effectively malware? (Software which is hostile towards its users - who are in this case devs)
<siraben> Who said what's malware?
<noam> > siraben: so you either a) contact upstream and get them to fix it or b) stop using that upstream
<noam> > if you can't trust the upstream to work with you, then why would you use it in the first place?
<noam> If the upstream is acting in ways hostile towards its consumers, then it isn't trustworthy.
<siraben> There's plenty of situations where upstream is unmodifiable and one cannot do anything about it. So the reaction seems to be to use the exact same dependencies as upstream did in the first place.
<semarie> "public API changes" is complex. one part is on dev side (size of its struct) but another part could be in OS where it is compiled (if software uses time_t and OS changed time_t from 32bits to 64bits for example)
<noam> Structure layouts and sizes isn't API, it's ABI.
<noam> API changes don't break rebuilds
<semarie> ah yes
<noam> With API changes, you're recompiling the library and its dependents
<noam> The issue would be attempting an ABI change without a rebuild
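The API-vs-ABI distinction noam and semarie are drawing is easy to demonstrate with ctypes (a contrived layout change; the struct and field names are invented). The API, "a struct with int fields x and y", is unchanged, but the layout is not, so data written under the old layout and read under the new one without a rebuild comes out scrambled:

```python
import ctypes

class PointV1(ctypes.Structure):            # old layout: x first
    _fields_ = [("x", ctypes.c_int32), ("y", ctypes.c_int32)]

class PointV2(ctypes.Structure):            # "harmless" reorder: y first
    _fields_ = [("y", ctypes.c_int32), ("x", ctypes.c_int32)]

raw = bytes(PointV1(1, 2))                  # bytes an old binary wrote
misread = PointV2.from_buffer_copy(raw)     # read without a rebuild
assert ctypes.sizeof(PointV1) == ctypes.sizeof(PointV2) == 8
assert (misread.x, misread.y) == (2, 1)     # values land in wrong fields
```

A rebuild of everything against the new header fixes it silently; mixing old and new binaries is the failure mode, which is why ABI changes without a rebuild are the dangerous case.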
<siraben> I don't have a good solution for this, but making entire build graphs explicit and making overrides composable seem like good steps in the right direction
<siraben> and making dependency specifications exact
<noam> I disagree - they make things *far* worse.
<siraben> which part?
<noam> All of it!
<noam> because now people have an excuse!
<siraben> Did you see the discussion before that updating openssl will cause dependants to rebuild?
<noam> Made a change that broke all your dependents? That's their problem!
<siraben> > There's plenty of situations where upstream is unmodifiable but one cannot do anything about it. So the reaction seems to be to use the exact same dependencies as upstream did in the first place.
nikita` has joined #zig
<noam> yeah, that's awful
<siraben> It is.
<noam> That's not just not good, it's actively making things worse
cole-h has quit [Ping timeout: 264 seconds]
<noam> Dependency pinning lends justification to it without solving it.
<noam> Or, in a different perspective, it solves that issue while creating far worse ones.
<noam> For the sake of an exaggerated example, imagine someone pinned Windows XP as a dep.
nikita` has left #zig [#zig]
<noam> It's been EOL for a while now
<noam> but unless the author is *active*, there are actual serious security threats to all of your users and your dependents users, recursively up the graph.
<siraben> IME on large projects with lockfiles, automation is good in making sure dependencies are as up to date as possible.
<noam> Dependency pinning means that old software continues to *build*, for instance - it doesn't make the old software safe to use.
<noam> Except now you're wasting a metric fudgeton of energy just to test compatibility.
<noam> Per project, per update of dependencies.
<siraben> if the issue is security, how vulnerable is the resulting binary really?
<noam> Very!
<siraben> > Dependency pinning means that old software continues to *build*, for instance - it doesn't make the old software safe to use.
<siraben> yeah, definitely
<noam> If something depends on, say, libgoffice, and there's a security bug allowing trivial construction of malicious documents, then anyone with an outdated version has a very real problem
<siraben> even if the application doesn't expose that part of the API?
<noam> It's not about API!
<noam> If I use libsvg to open an SVG, and I use an old version with a security bug, and I rarely bother updating my dependencies, now all of my users are at risk when they open SVGs
<siraben> Yeah
<noam> (and this isn't purely hypothetical; malicious execution while manipulating complex file formats is probably one of the most common attack vectors)
<noam> I wrote a library that abstracts libsvg, libpng, etc? Congrats. Now all my dependents are vulnerable too
<noam> Dependency pinning means that unless *every single person* is responsible, you're totally and utterly screwed.
<siraben> So... override libsvg/libpng until upstream fixes it?
<noam> All it takes is *one person* in your dependency chain to be a bit lazy and you could lose *everything*
<noam> No, don't pin it in the first place!
<siraben> yeah overriding is hard/impossible when there's a lockfile
<noam> instead, make sure that upstreams don't make breaking changes without bumping major versions - and that security fixes get backported
<noam> Importantly, this reduces the number of people who have to be responsible to *one*
<noam> If even a single dependency pins an old libsvg version, *they* need to be responsible. Without pinning, and with proper social action to ensure people are responsible with versioning? *only* libsvg maintainers need to be responsible.
<siraben> I just did a quick search for overriding Rust package dependencies, looks like it's possible
<noam> Possible doesn't mean it's going to happen
<siraben> cc marler8997 ^
<noam> and you still need far more people to be responsible here
<siraben> well, this is why we have security audits
<noam> ... really.
<noam> What percent of the packages in, say, Gentoo, do you think have been audited *ever*?
<noam> I'm willing to bet it's below 1.
<noam> At *best*.
<noam> How about in the last year? If it's even 0.1% I'll be shocked.
<noam> Let's make it more fun. What percent of, say, LibreOffice do you think has been audited?
<siraben> Well, organizations run security audits of their servers
<siraben> of course the only real solutions are to get people to 1. actually respect semantic versioning and 2. take responsibility
<noam> Sure, but dependency pinning fights both of those.
<noam> With dependency pinning, who cares if versioning is wrong? not like anyone's going to expect it to *work*
<siraben> yes!
<noam> and who cares about responsibility? It's the user's problem, not upstream's.
<siraben> It's the user's problem to patch their software?
<noam> Well - s/user/downstream/
<noam> If a depends on b, it's not *b*'s problem if it broke; a should have pinned the old version!!
<noam> The biggest problem with dependency pinning is social, IMO
<noam> It propagates horrible and harmful ideas on how projects should be interacting
craigo_ has joined #zig
<noam> If you're making a project which other people depend on, you have a *responsibility* not to break them
<noam> That's not optional.
<siraben> Though, the article places a lot of emphasis on eliminating vulns in production systems, I wonder how much it actually occurs in practice
<siraben> haven't managed prod before so shrug
<noam> I suspect at least an order of magnitude more often when pinning is involved.
<noam> Simply because you only need *one* lazy jerk in your dependency chain to cause problems
<noam> (Whereas without pinning, distros will often patch problems before upstream does)
<daurnimator> siraben: from experience? it happens a lot
<daurnimator> and putting on my packager hat... we've essentially declared forfeit for go, rust and node
<daurnimator> python is barely hanging on
<siraben> daurnimator: dependency-related vulns in prod happen a lot?
<daurnimator> siraben: indeed. though real life attackers are few. usually it's 7 days hunting down a bug causing an outage that turns out to be fixed in a dependency of a dependency, but pinning hasn't made it up to us yet
<noam> In other words, the vulns are real and a serious problem but nobody cares (including attackers)?
<daurnimator> yep
<noam> I mean, it makes sense
<siraben> daurnimator: so how long until the pins update all the way?
<noam> There's probably easier methods of attack - though, as someone concerned with security, that's hardly reassuring
<daurnimator> siraben: takes between a month and 2 years to make it through each level. cross your fingers that it's shallow...
<siraben> yikes
<noam> ... freaking heck
<noam> See I've been arguing it's bad but I didn't expect *that* bad
<noam> I was thinking a few days to a week per level...
<siraben> daurnimator: is that a serious problem? or are there other low-hanging fruits to deal with?
<daurnimator> noam: it's worse when it's a service.... e.g. there's a bug in a Go library that was finally fixed 3 years ago; it finally got incorporated into Kubernetes 1.20 a year or so ago. now we're waiting for Amazon EKS to support 1.20... probably 6 months left
<noam> ffs that's insane!
<karchnu> that's why we should have pledge(2) and unveil(2) for the linux kernel, too
<noam> Forget the *security* aspect for a bit, that's like living with all the bad parts of Debian stable and none of the good bits!
<daurnimator> siraben: it's a serious waste of time: spending week after week chasing bugs that have already been fixed... just to figure out which commits to cherry-pick
<noam> karchnu: what are those?
<daurnimator> noam: they're openbsd's syscall filtering
<karchnu> noam: security syscalls in the openbsd kernel
<noam> ...huh.
<siraben> Yeah I've heard of pledge, very nice
<siraben> didn't know the Linux kernel had it too
<noam> What does it do? Change syscall behavior?
<noam> siraben: it doesn't
<siraben> oh oops, "should have"
<noam> > that's why [Linux] should have it too
<siraben> lol
<daurnimator> noam: you pledge to not use certain syscalls. if you try one anyway, it kills the process
<daurnimator> very implementable with existing linux syscall filtering
<noam> Neat
<karchnu> and unveil is kinda the same for accessing different paths
<siraben> daurnimator: does pinning provide more or less benefits than the costs?
<karchnu> but it doesn't kill your program, just tells you that the path isn't available/doesn't exist
<siraben> imagine if you unpinned a production service
<noam> siraben: yeah, imagine being able to update a service within a day of it being fixed upstream instead of three years...
<noam> What a horrifying concept!
<siraben> well, it's more complicated than that.
<noam> Sure, but that's a serious downside
<noam> For a production service the security issues are actually critical
<noam> I wouldn't *dare* run a public-facing service that pinned deps
<siraben> have you managed a production service before?
<siraben> I know little about managing prod
<noam> Depends on how you define production
<noam> (Read: no)
<siraben> maybe people here have seen this already:
<noam> It's on medium, so I'm betting I'll scoff at it. That said, *opens*
<siraben> tldr: some guy hacked into dozens of companies by uploading malicious packages to npm, knowing they'd be downloaded by the companies' internal projects
<noam> shocking
<noam> Why, the very idea that both large corps and NPM / PyPi aren't the epitome of security has me so stunned I'm dying of shock.
<siraben> you should see the security systems of entire countries :P
<noam> lol
<siraben> hospitals/banks, etc.
<noam> I try not to think about the social security system's password requirements
<noam> "Must not be > eight characters. No symbols."
<siraben> what's the requirements?
<siraben> oh?
<noam> at a minimum
<siraben> No way. Seriously?
<noam> Pretty sure it gets even dumber
<siraben> minimum password length of 7 characters, consist of both alpha and numeric/alpha-numeric characters (Letters and numbers or special characters), Passwords are case sensitive.
<noam> Huh, maybe they changed it? (or it's a different site?)
<siraben> found a screenshot from 3 years ago for "" where the pass needs exactly 8 characters
<siraben> and is not case sensitive...
<noam> Ah. That.
<noam> That's, what.
<noam> 208827064576 possibilities?
<noam> Only 208B :P
<siraben> 36^8 right?
<siraben> well it also says needs at least 1 number and 1 letter
<noam> ah
<siraben> counting is hard
<noam> 2.8T, then
<siraben> looks like max len is 20 now
<noam> A lot, but also...... not really.
<daurnimator> siraben: IMO pinning provides no benefit, as we take care of that like 3 layers up
<daurnimator> IMO its all downside
<siraben> daurnimator: what resolves it 3 layers up?
knebulae has joined #zig
midgard has quit [Ping timeout: 240 seconds]
knebulae has quit [Ping timeout: 256 seconds]
hnOsmium0001 has quit [Quit: Connection closed for inactivity]
<daurnimator> siraben: 1. dockerfiles that hashlock to what they clone. 2. private mirrors that only host the versions we want
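A hypothetical sketch of daurnimator's point 1 (hash-locking what you clone); the URL and commit hash below are placeholders, not a real project:

```dockerfile
FROM alpine:3.13
RUN apk add --no-cache git
# Pin the dependency to an exact commit, not a branch or tag that can move.
# (example.com/somelib and the hash are illustrative placeholders.)
RUN git clone https://example.com/somelib.git /src/somelib \
 && git -C /src/somelib checkout 0123456789abcdef0123456789abcdef01234567
```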
midgard has joined #zig
[Ristovski] is now known as Ristovski
knebulae has joined #zig
gazler__ has quit [Ping timeout: 272 seconds]
knebulae has quit [Ping timeout: 260 seconds]
Swahili has joined #zig
gazler has joined #zig
geemili has quit [Ping timeout: 272 seconds]
gazler_ has joined #zig
knebulae has joined #zig
gazler has quit [Ping timeout: 260 seconds]
knebulae has quit [Ping timeout: 265 seconds]
gazler_ has quit [Ping timeout: 256 seconds]
knebulae has joined #zig
knebulae has quit [Ping timeout: 240 seconds]
knebulae has joined #zig
eax has joined #zig
eax has quit [Client Quit]
eax has joined #zig
knebulae has quit [Ping timeout: 240 seconds]
knebulae has joined #zig
eax has quit [Remote host closed the connection]
knebulae has quit [Ping timeout: 240 seconds]
knebulae has joined #zig
knebulae has quit [Ping timeout: 272 seconds]
Snaffu has quit [Remote host closed the connection]
knebulae has joined #zig
bitmapper has joined #zig
karchnu has quit [Ping timeout: 265 seconds]
knebulae has quit [Ping timeout: 260 seconds]
tnorth has joined #zig
marler8997 has quit [Read error: Connection reset by peer]
knebulae has joined #zig
lukego has left #zig [#zig]
donniewest has joined #zig
knebulae has quit [Ping timeout: 260 seconds]
knebulae has joined #zig
waleee-cl has joined #zig
xackus__ has joined #zig
xackus_ has quit [Read error: Connection reset by peer]
knebulae has quit [Ping timeout: 265 seconds]
knebulae has joined #zig
cren has quit [Quit: nyaa~]
hnOsmium0001 has joined #zig
<tnorth> Hey there. Based on a previous discussion here lately, I submitted (Structs containing a pointer to array of @This() fails (struct depends on itself)). Since I'm very new to Zig, I would appreciate it if someone could quickly go through and tell me if I am missing something or if more info is needed? I'm surprised that no one ran into this issue...
<ikskuh> ah yeah
<ikskuh> this is a bug in the lazy analysis
<ikskuh> it happens with different pointer types as well
knebulae has quit [Ping timeout: 256 seconds]
nvmd has joined #zig
<tnorth> ikskuh: is there a workaround?
<tnorth> ikskuh: a lot of data structures use this kind of pattern, how is that handled?
<ikskuh> the problem is the array
<ikskuh> ?*@This() works
<ikskuh> ?*[1]@This() doesn't
linuxgemini has quit [Remote host closed the connection]
lunamn has quit [Remote host closed the connection]
ave_ has quit [Remote host closed the connection]
<ifreund> I think ?[]@This() works too
<ikskuh> yeah, it works for pointers
<ikskuh> but i think arrays and functions break
<tnorth> ok, if ?[]@This() works, this should cover all use-cases
<g-w1> not really, [2] is different because the memory is in the struct instead of outside
<dutchie> though a struct containing an array of itself is never going to work
<g-w1> an optional
<tnorth> but always a pointer to... their size is always known
<g-w1> hmm, yeah a slice does cover all use cases
<g-w1> but it's still a language bug
<ikskuh> the problem is an implementation bug, not a limitation
<g-w1> ok yeah not a language bug, but an impl bug
<ikskuh> language bug would be critical ;)
eax has joined #zig
notzmv has quit [Remote host closed the connection]
notzmv has joined #zig
notzmv has quit [Remote host closed the connection]
<tnorth> Is the issue LLVM-related, or in Zig itself? How much effort would it be to get it fixed?
cole-h has joined #zig
<ifreund> it's probably a pretty trivial change if you manage to find where in ir.cpp/analyze.cpp the bug is
<g-w1> you can just search for "depends on itself", there are only 3 occurrences
<ifreund> I doubt anyone is motivated to do it though, as the C++ codebase is not exactly fun to hack on and will be thrown out as soon as the self-hosted compiler is ready
<tnorth> Ok, then since there is a valid workaround it doesn't make sense to fix it
Akuli has joined #zig
tnorth has quit [Ping timeout: 258 seconds]
craigo_ has quit [Ping timeout: 246 seconds]
dmgk has left #zig [#zig]
eax has quit [Ping timeout: 268 seconds]
donniewest has quit [Quit: WeeChat 3.0.1]
donniewest has joined #zig
leon-p has quit [Ping timeout: 240 seconds]
dyeplexer has quit [Remote host closed the connection]
leon-p has joined #zig
notzmv has joined #zig
<ikskuh> heya
<ikskuh> i need a tad of git help
<ikskuh> i have a branch that is somewhat outdated and want to bring it up to latest master
<ikskuh> is it possible without merging?
craigo_ has joined #zig
<torque> you can rebase it, not sure if that's what you want
<ikskuh> how do i do that?
<ikskuh> i only know "git rebase origin/master", but that will probably rebase master on my branch instead of my branch on master
<noam> ikskuh: `git rebase origin/master` rebases the active branch onto origin/master
<torque> well, in the simple case, on your old branch, something like `git rebase -i master` which will attempt to rewrite old branch commits on top of current master
<ikskuh> noam: so just
<ikskuh> git checkout my_fork_branch
<ikskuh> git rebase master
<ikskuh> ?
<noam> yes
<ikskuh> okay, i'll try
<noam> well - is your local master branch up to date?
<noam> Also note that it might not rebase cleanly - in which case it'll stop in the middle for you to resolve conflicts, commit, and run `git rebase --continue`
<noam> ikskuh:
<ikskuh> thanks :)
lunamn has joined #zig
ave_ has joined #zig
linuxgemini has joined #zig
Swahili has quit [Remote host closed the connection]
<v0idify> how many git-*.io sircmpwn has registered xD
<v0idify> how am I supposed to store a BufferedWriter in a struct?
<ikskuh> ^^
<ikskuh> v0idify: you have to store the typed writer or make the struct generic
<ifreund> alternatively don't
<ifreund> only store the buffer
<ifreund> or wait nvm, the api doesn't make that nice :/
<v0idify> ikskuh, should I use @TypeOf or..?
<ikskuh> that's hard to say without more context
<v0idify>, std.fs.File.Reader) works
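ikskuh's first option (store the concretely-typed writer) might look like this sketch. It follows the std.io API of the 0.8.0-dev compilers discussed in this log (std.io has been reworked since), and Logger is a made-up name; v0idify's paste uses File.Reader, but the writer flavor of the same pattern is:

```zig
const std = @import("std");

const Logger = struct {
    // Name the concrete BufferedWriter type so it can live in a field.
    buffered: std.io.BufferedWriter(4096, std.fs.File.Writer),

    pub fn init(file: std.fs.File) Logger {
        return .{ .buffered = std.io.bufferedWriter(file.writer()) };
    }

    pub fn log(self: *Logger, msg: []const u8) !void {
        try self.buffered.writer().writeAll(msg);
    }

    pub fn deinit(self: *Logger) void {
        self.buffered.flush() catch {};
    }
};

pub fn main() !void {
    var logger = Logger.init(std.io.getStdOut());
    defer logger.deinit();
    try logger.log("hello from a stored BufferedWriter\n");
}
```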
<v0idify> how do I get an array from a slice?
<v0idify> mem.copy?
<ikskuh> slice with comptime known items:
<ikskuh> var slice: []const u8 = "hello";
<ikskuh> var array: [3]u8 = slice[0..3].*;
<v0idify> if it's not comptime though? it will crash right?
<v0idify> or.. UB but will crash usually
<v0idify> i.e not on Fast/Small
emptee has joined #zig
<ifreund> for an array you must know the length at comptime
<ifreund> otherwise it's not an array
<v0idify> yes but you said "slice with comptime known items"
<v0idify> not you*
<v0idify> std.debug.print("{any}", .{list.items}); // list is an ArrayList, makes CPU usage go to 100% on runtime for some reason
<Gliptic> it is a slice until you do .*
<v0idify> nevermind it's not that...
<v0idify> ok got it, thanks
<ifreund> v0idify: with comptime known bounds not items
<ikskuh> i meant slice with comptime known items where "slice" is a verb ^^
donniewest has quit [Quit: WeeChat 3.0.1]
<ikskuh> so yeah. the bounds must be comptime known, the contents of the slice are not
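Putting ikskuh's point together as one runnable sketch (variable names are illustrative): the bounds of the slice operation are comptime-known, which is what makes the pointer-to-array and the `.*` copy legal, while the contents stay runtime values.

```zig
const std = @import("std");

pub fn main() void {
    var buf = [_]u8{ 'z', 'i', 'g', '!', '!' };
    const slice: []const u8 = &buf;     // contents only known at runtime
    // slice[0..3] has comptime-known bounds, so it is a *const [3]u8;
    // .* copies the pointed-to array out. If slice.len < 3 this is a
    // safety-checked panic in Debug/ReleaseSafe and UB in Fast/Small,
    // as discussed above.
    const array: [3]u8 = slice[0..3].*;
    std.debug.print("{s}\n", .{array});
}
```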
donniewest has joined #zig
donniewest has quit [Client Quit]
emptee has quit [Quit: Konversation terminated!]
ur5us_ has joined #zig
leon-p has quit [Quit: leaving]
josuah has quit [Quit: WeeChat 2.9]
sord937 has quit [Quit: sord937]
geemili has joined #zig
<ifreund> andrewrk: I'm working on getting comma separated list in their various forms behaving properly in zig fmt by the way
<andrewrk> I'm shitposting on twi--- working on multi line strings
<andrewrk> are you doing array rows/columns?
<ikskuh> current meta for struct field syntax is snake_case, right?
<ifreund> not yet, first making sure we insert trailing commas and render properly if there are line comments in the list
<andrewrk> gotcha
<ifreund> I'm also wondering if there's a way we could unify all the code performing essentially the same logic to render a comma separated list of things
<ifreund> the small differences in behavior that are the current status quo annoy me
<ifreund> that refactor could probably happen after this branch is done though
<andrewrk> that makes sense to me. I propose we get to a merge point here as soon as possible
Akuli has quit [Quit: Leaving]
<noam> v0idify: two or three, I think :P
<ikskuh> andrewrk, time for a little compiler error guessing game?
<ikskuh> const Foo = struct {};
<ikskuh> const Bar = struct { name: u32 };
<ikskuh> var foo: Foo = Bar{ .name = 10 };
<ikskuh> what error will this yield? :D
<noam> Hmm.
<noam> Error: cliched identifiers
<andrewrk> error: expected Foo, found Bar
<noam> error: expected Bar, found Foo
<ikskuh> error: no member named 'name' in struct 'Foo'
<noam> ... lol
<ikskuh> yep :D
<andrewrk> let me check how stage2 handles this
<ikskuh> if you add that field to Foo, it will error out with the expected type mismatch
<g-w1> stage2 doesn't handle it at all since it has no struct types yet (except for imports) i think
<andrewrk> error: TODO implement astgen.expr for .StructInitializer
<andrewrk> the stage2 error message is reasonable ;)
<ikskuh> haha
<ikskuh> but yeah, it's a weird and unexpected error message
<ikskuh> but i don't see reason to fix it in stage1
<andrewrk> result locations are handled much, much more cleanly in stage2
<ifreund> andrewrk: why are extern functions fn_proto instead of fn_decl?
<andrewrk> we can rename the tag if you want, but there is no need to pay the cost of an extra AST node per extern function decl since the body is null
<ifreund> ah ok
<ifreund> my main gripe is that firstToken() is broken for fn_proto now, I think I might just make a new tag
<andrewrk> oh snap sorry about that
<andrewrk> ah right of course
<andrewrk> perhaps: extern_decl ?
<ifreund> yeah that's what I just came up with too :D
<andrewrk> :D
<ifreund> though it sounds like it includes extern var decls too, so extern_fn_decl
<andrewrk> was just thinking the same thing
<andrewrk> there is a zig fmt test case that expects this syntax to work: export fn foo() void;
<andrewrk> I think the idea there is to make this a semantic analysis error rather than a parse error
<andrewrk> it's up to you if you want to keep it that way or make it a parse error
<ifreund> alright cool
<ifreund> (took me a minute to realize that was export not extern :D)
karchnu has joined #zig
<karchnu> Hello. Just passing by to say that I'm working on the website translation in french.
<karchnu> Don't hesitate to tell me that someone else is doing it. :) Thanks!
<ifreund> cool! don't know of anyone else working on it yet
<karchnu> Nice. B)
<ifreund> andrewrk: decided extern_fn_proto{_simple,_multi,_one,} wasn't worth it, just fixed firstToken()
<andrewrk> sounds good
<ifreund> hmm, the parser doesn't seem to support toplevel doc comments yet. I'll take a look at that then go to bed
<ifreund> or nevermind, I think this test case is wrong
<ifreund> container doc comments are only valid at the beginning of the container right?
notzmv has quit [Remote host closed the connection]
<andrewrk> you mean this form? //!
notzmv has joined #zig
<ifreund> yeah those
<andrewrk> yeah only allowed to be the first thing at the beginning of a container
<ifreund> cool, then the parser is fine, just need to add support in zig fmt
<andrewrk> I'm going through all the multi line string test cases
marler8997 has joined #zig
<ifreund> nice
<andrewrk> tfw you uncomment a test case and it already passes from a previous commit :D
<ifreund> :)
ur5us__ has joined #zig
ur5us_ has quit [Ping timeout: 264 seconds]
<andrewrk> ifreund, do you have any clues about the double indenting problem?
<andrewrk> e.g. the translate-c case we have disabled
<andrewrk> oh it seems to happen because of the grouped_expression
ur5us has joined #zig
<ifreund> hmm, I added the indent stuff for grouped expression to pass some other test case
<andrewrk> I changed it to match master branch and it fixed the issue
<andrewrk> but tbh I don't really understand most of the AutoIndentingStream abstractions
ur5us__ has quit [Ping timeout: 240 seconds]
<ifreund> they're there so that comments between tokens can be rendered indented in some cases
<ifreund> for example in an empty container decl
<andrewrk> ah I see
fengh has quit [Ping timeout: 240 seconds]
<andrewrk> zig fmt is a good example of software that is easy to understand conceptually but the devil is in the details
<ifreund> hrm, lastToken() is broken for container decls it seems, I'm not going to get container doc comments finished tonight
<ifreund> yeah definitely
ed__ has joined #zig
ed_t has quit [Ping timeout: 256 seconds]
fengh has joined #zig