ChanServ changed the topic of #zig to: zig programming language | | be excellent to each other | channel logs:
<andrewrk> mouldysammich, fix pushed to master. a new build should be available in ~2 hours
<mouldysammich> fantastic, thanks so much!
<andrewrk> thanks for troubleshooting with me :)
<shachaf> Is there a place that says what the plan for coroutines is?
<shachaf> I remember it was discussed here but I'm still not sure how it's planned to work.
<andrewrk> shachaf, no, the plans for coroutines are really disorganized
<andrewrk> I'm happy to chat a little bit about it if you like
<shachaf> Hmm. I think I might be mixing it up with some things I read about LLVM coroutines, also.
<shachaf> Is there any example code and approximately what kind of thing it should generate, or something like that?
<andrewrk> post-rewrite, it will be as if you split your function into an array of function pointers that each take a pointer to a struct which has all the local variables
<shachaf> All the local variables or only the ones that are live at a particular point in the function?
<andrewrk> only the ones that are live on both sides of any suspend point
<andrewrk> also I haven't solved how coroutine function pointers will work yet. that will probably just be a compile error for a while
<shachaf> This struct is effectively a stack frame, right?
<andrewrk> yes
<shachaf> Why do you need the array?
<shachaf> And also how is it used? The function returns an index into it when it yields?
<andrewrk> the two methods are usually (1) switch statement with enum to decide which case to jump to, or (2) array of function pointers with index of which one to call
<andrewrk> above I described (2). it doesn't matter to the zig coder so much, but my reasoning for choosing one over the other is that LLVM chose (2) and they probably made that decision with knowledge of what LLVM is better at optimizing
<shachaf> So if this is just a struct, are these parameterized by an allocator? I think they were.
<andrewrk> pre-rewrite they are. post-rewrite coroutines are non-allocating
<shachaf> OK, that makes more sense to me.
<andrewrk> if you want a coroutine to go on the heap, then you'd have to do it yourself, like this: const ptr = a.create(@Frame(doTheThing)); ptr.* = doTheThing(); // doTheThing is a coroutine
<shachaf> Why doesn't the struct just store an instruction pointer for the current state?
<andrewrk> that's a neat idea
<develonepi3> andrewrk, there is a script that I use to install FPC & Lazarus IDE on both my RPi3 and Ubuntu. It's located at Tools/Installer/Core/Linux/. For Windows there is an installer and a video on how to update the RTL.
<andrewrk> I think the only reason not to would be to make the optimizer able to understand the IR
<shachaf> If you split the coroutine up into separate functions, and you have internal flow control that goes past yield statements, that gets kind of awkward, doesn't it?
<shachaf> I guess the functions could jump directly into each other or something.
<andrewrk> can you give an example of that?
<shachaf> Or maybe I still don't understand the plan.
<andrewrk> oh are you talking about storing an instruction pointer?
<shachaf> No, the other method.
<shachaf> Hm, this would be easier with concrete code.
<andrewrk> one thing to note: these coroutines are "stackless", aka, they operate on exactly one stack frame. so "yielding" is in machine code terms, a return instruction
<shachaf> Right.
<andrewrk> and "resuming" in machine code terms, a call instruction
<shachaf> Yep.
<shachaf> I did want to talk about how to nest these things but first I should figure out how they work.
<shachaf> Is there any example code I can refer to? Even something very simple.
<shachaf> I'm trying to write something but now I need to figure out how to write basic things in Zig.
<andrewrk> shachaf, what kind of example code?
<andrewrk> status quo coroutine usage?
<shachaf> No, the planned thing.
<andrewrk> here's something:
<andrewrk> this is still a "research" proposal though. I haven't done a proof of concept yet
<andrewrk> even the planned coroutine rewrite is a bit researchy. It's going to require some exploratory code
<andrewrk> I did status quo coroutines, learned a bunch, and it's not quite right. time for a second iteration
<shachaf> There aren't many implementations of coroutines out there that are as efficient as they could be.
<andrewrk> agreed
<shachaf> I'm a bit confused by this code.
<shachaf> Let me see if I can figure it out.
<andrewrk> shachaf, if you had something specific in mind, I'm all ears. especially if you typed out a nice proposal with examples and made it easy to understand
<shachaf> I'm thinking about how to make things like this work! That's why I'm asking.
<shachaf> This is all kind of implicit, is what was confusing me, I think.
<shachaf> Are things like asynchronous I/O (presumably with some kind of scheduler) the expected use case for this thing? If so I'll figure out an example based on that.
<andrewrk> yes, asynchronous I/O is a big use case for coroutines. I would even say the primary use case
<wrl> andrewrk: you can model DSP process callbacks as a coroutine ;)
<andrewrk> that's something I'll be sure to explore as well
<shachaf> You can model lots of things as coroutines. But they call for different kinds of implementations.
<shachaf> andrewrk: So what happens when an async function wants to call another async function?
<shachaf> Does it just put the other function's state in its frame struct, and then each time it's resumed it goes down the stack to the currently active frame?
<andrewrk> shachaf, and in this example, it wants to resolve the "promise" right away, yes?
<shachaf> What do you mean?
<shachaf> In your example, say queueFsWrite was an async function instead of a thing that takes @handle()
<andrewrk> shachaf, sometimes you would want to kick off async operation A and async operation B, and then "wait" until both of them are done before proceeding. but you're talking about the non-parallel case where you just want to kick off and "wait" for an async function at the same time
<andrewrk> is that correct?
<andrewrk> because this case can be optimized better
<shachaf> Hmm, I'm interested in both, but thinking about the latter right now.
<andrewrk> you will be able to control where the frame of the coroutine goes when you call the coroutine, like this: a = theCoroutine();
<andrewrk> after copy elision work is done, this expression is interpreted in the following way: 'a' represents a "result location". this result location is passed to the function call expression. when you call a coroutine, the active result location is used as the coroutine frame
<andrewrk> so if `a` represents stack memory (e.g. with `var a = theCoroutine()`) then the coroutine goes into the callsite's stack frame (which could be a coroutine frame)
<andrewrk> or if it's a pointer dereference, that could make it go into the heap: `ptr.* = theCoroutine()`
<strmpnk> I take it that some forms of recursion (non-tail calls at least) would prevent this sinking. I assume these would require explicit frame allocation?
<shachaf> Man, just ban recursion and require a known stack size.
<andrewrk> yes, recursion and function pointers are an active area of research. the first implementation will probably just be a compile error if there is a (indirect or direct) call graph dependency
<shachaf> Or let people specify a maximum size if they want.
<strmpnk> But how will async ackermann work? (joking).
<andrewrk> :)
<strmpnk> The problem with a max size is the same as with stack overflows.
<andrewrk> agreed
<andrewrk> I'll be careful with whatever the solution is to recursion/ function pointers because it could easily be a new footgun that zig introduces that even C doesn't have
<andrewrk> (if I wasn't careful)
<shachaf> It'd be nice if every function was tagged with the maximum stack depth when there are no cycles.
<andrewrk> it's planned for zig to do that in generated .h files
<shachaf> I was also thinking of a thing the other day where each function can have a different "calling convention" depending on how many registers it needs, and that's specified in the type.
<shachaf> With the regular calling convention being a conservative version that always works.
<andrewrk> that might require some cooperation with LLVM
<shachaf> Yes, I don't know of any compiler backends that support that kind of thing.
<shachaf> But having all sorts of metadata like that would be nice.
<shachaf> Anyway there are two kind of different uses of coroutines that I've been thinking of. One is where the coroutine schedules its own resume, as in async I/O or something.
<shachaf> And the other is where the caller can resume it. As in iterators maybe? There are lots of uses.
<shachaf> But the API seems kind of different to me.
<shachaf> The other day I was using snprintf and I was annoyed that if it runs out of space, you have to call it again from the beginning.
<shachaf> Rather than, say, flush the buffer it was writing to and then asking it to continue from where it left off.
<shachaf> It could be nice if that kind of thing was a coroutine? Its frame would be pretty small.
<shachaf> Or maybe that's too much complexity, I have no idea.
<andrewrk> that's an interesting idea
<andrewrk> I agree that there are a couple other use cases like the one you described
<shachaf> Anyway that's a different kind of coroutine use from async I/O. Mostly because it doesn't really use a scheduler, I guess?
<andrewrk> I don't have a perfect answer yet to how all the use cases will get solved. I'm trying to take a bottom-up approach, starting with foundations that I know are solid, and then slowly building up
<shachaf> Right.
<shachaf> I've worked on an implementation of coroutines for C++ that did full stack switching, but those are presumably less efficient than whatever Zig will end up with.
<shachaf> I mean, it was effectively userspace threads.
<andrewrk> I'm not a fan of that approach
<shachaf> It does support nesting in a pretty nice way.
<andrewrk> yes, there are some places where it fits in perfectly
<shachaf> And the stacks can be relatively small for threads, 8KB or something. Of course that's much bigger than it needs to be.
<shachaf> And an annoying thing is that when you switch stacks, you take a bunch of cache misses on the new stack, even for things that don't need to be saved.
<andrewrk> but I think it leaves performance on the table, because concurrency has to be expressed in terms of "actors" (threads). Rather than what you would do in a non-blocking paradigm, where you think about the dependency graph of everything, kick off operations as soon as possible, collect results when they're all done
<shachaf> Yes, there's also that.
<andrewrk> with #1778, if it worked really well, you could theoretically even do profile guided optimization that would determine how much to use the expressed concurrency from the source
<andrewrk> e.g. the source expresses that A and B can be done simultaneously, but it is also legal for the optimizer to just make it blocking
<andrewrk> that could potentially be incredibly powerful
<shachaf> In practice I think the code had "big" things written in terms of these lightweight threads, which could start a bunch of "small" async operations that weren't threads. Or something like that.
<strmpnk> The nice thing about the fibers is that the async fn -> sync fn -> async fn call interleaving can work w/o threading transformations into the sync code. There is a little more yield overhead and of course guard pages and stack growth is a pain (or expensive if you allocate large stacks).
<shachaf> One of the things you want, by the way, is for operations that might or might not block to be able to return immediately without going through the scheduler.
<strmpnk> This would be a bigger issue if you needed to call C code which called back to Zig code for example.
<shachaf> strmpnk: It's nice but I'm not sure it's worth the cost.
<strmpnk> Having used both, I'd say it depends on the application.
<andrewrk> what do you mean the scheduler? the proof-of-concept event loop I made in zig std lib has no scheduler
<shachaf> I mean "event_loop.queueFsWrite(@handle(), &msg);" or whatever.
<strmpnk> Same deal with actors. Supporting selective receive is kind of like a frame retained and stacked across yield (Erlang style actors). Pony on the other hand doesn't allow this but it explodes actor state machine complexity.
<andrewrk> ah. non blocking file system is unfortunate. there are several problems with it
<shachaf> Never mind the word scheduler. I mean, you should be able to do it without suspending.
<shachaf> Say you have a cache in memory, and you have an async operation which either gives you the thing in the cache or issues an async read call.
* strmpnk hopes io_uring lands in 5.1.
<andrewrk> that's even true with status quo coroutines/event loop. aside from file system on linux, which for some weird reason does not support non-blocking, when you do async operations it won't suspend unless EAGAIN is returned
<andrewrk> strmpnk, I'm eyeing that too
<shachaf> whoa, Linux is up to 5? I clearly haven't been keeping track.
<andrewrk> linus doesn't take the major version seriously. it's not semver
<shachaf> I know.
<strmpnk> I'm with Linus, people should lean on change logs more than magic version interpretation.
<strmpnk> Semantic versioning ends up being a wishful stance, but its solution creates just as many problems as it solves.
<andrewrk> shachaf, post-rewrite coroutines, when you call an async function and "resolve" it in the same expression, and it does not suspend, it is 100% equivalent to a function call
<shachaf> I didn't see "resolve" mentioned.
<andrewrk> sorry "await" not "resolve"
<shachaf> Anyway depending on how nesting works that kind of thing could be automatic.
<shachaf> Though of course you don't want it to be, in some cases.
<andrewrk> strmpnk, I have some interesting things planned for package management. I'm still planning to embrace semver, but there will be other tools as well. planning to do a big writeup shortly after 0.4.0 release
<shachaf> Does async disk I/O on Linux still require O_DIRECT?
<strmpnk> andrewrk: I'll avoid rambling about versioning but while rebar3 has rough edges, I really like the version resolution rules. Easy to control and no odd action at a distance problems when upgrading packages.
<strmpnk> The other odd issue is the case of multiple versions in one compilation target. I'm not a fan but it's interesting to see cargo adopt it after all of npm's complexity.
<andrewrk> shachaf, I don't think there is any such thing as async disk I/O on linux
<shachaf> Oh well.
<andrewrk> that's why strmpnk and I are keeping an eye on uring
<strmpnk> I haven't checked up on the mailing list thread but it seems Al Viro's SCM_RIGHTS ref-cycle scenario was addressed so hopefully there aren't other big blockers.
<andrewrk> strmpnk, oh interesting I didn't know rust did that. zig package manager is planned to do that as well, when the semver major versions disagree. There will also be a tool that shows a sort of "audit" display where it shows the full dependency graph, and where there is duplication, to help you be aware of bloat
<strmpnk> How will you work with exported symbols?
<strmpnk> I assume that will require symbols resolve to the same package every time.
<andrewrk> I'll note that exported symbols don't necessarily constitute the public API of a zig package
<andrewrk> most zig packages will have 0 exported symbols
<strmpnk> True, though this will also be an interaction to sort out with C code.
<andrewrk> indeed, that's another interesting thing. I fully intend zig package manager to be a C package manager as well
<andrewrk> you'll have to ditch the build system and use zig build in order to make a C package, but I think it'll be worth it
<andrewrk> to have a C library dependency graph that works on every target, and you can cross compile for every target, with no system dependencies
<andrewrk> this is quickly becoming a reality
<AlexMax> i have had to deal with far more cmake than I had ever hoped to in the past week or so
<andrewrk> then we just need fast incremental builds in order to facilitate a deep dependency tree :)
<AlexMax> so that sounds incredibly appealing to me
<andrewrk> we're just getting started. the tooling around zig is going to be out of this world
<shachaf> Man. The more I think about this the less I'm sure I've ever seen an implementation of coroutines that actually does what you would want, in any language.
<andrewrk> shachaf, I hope you'll speak up if you see me going down a bad road
<shachaf> I might be saying nonsense. :-)
<andrewrk> I tried asking the rust team how their coroutines work, and I wasn't able to fully understand it. it had something to do with polling, which was confusing to me
<andrewrk> it is my understanding that in rust coroutines, when you "create" one, it's empty and doesn't start running yet
<andrewrk> until you try to await it, then it starts running
<strmpnk> Futures are like descriptions of work but they don't actually embody the execution. Tasks do that, which is where the allocation happens.
<strmpnk> (in Rust)
<strmpnk> the cold-until-polled behavior is an interesting design decision but it reminds me of a lot of the problems I encountered with F# in larger scale apps.
<strmpnk> You end up wanting to allocate tasks far more often than just the top-level one to do things like pipelining I/O.
<shachaf> There are a bunch of implementations of these things that want to call allocators for some reason.
<strmpnk> My Rust Futures experience is limited to tokio though, I can't speak to std::futures and std::tasks, nor the async syntax (which still is getting sorted AFAICT).
<shachaf> Which was never quite clear to me.
<strmpnk> It's clear when you see the kinds of stream processing loops that are common on I/O. You don't always know how much work you will be doing.
<strmpnk> But F#'s Async is the closest thing to Rust's polled Future's that I've actually used.
<strmpnk> It's great if you like combinators I guess.
<shachaf> Hmm, this Rust approach is maybe reasonable?
<shachaf> I'll read about it some more.
<strmpnk> It's not bad. I'm still waiting to see how it works once more of it is completed:
<strmpnk> The impl Future<...> stuff is definitely the only sane way to deal with the types though. Those error messages can yield huge type chains for each generic trait implementation.
<shachaf> So are coroutines that are resumed by the caller just a different kind of thing from coroutines that request their own resumption based on an event?
<shachaf> Or are they both the same sort of thing?
<strmpnk> Coroutines are resumed by a task which is distinct from the caller.
<strmpnk> (in rust)
<strmpnk> The caller is just a piece of code in this case and it doesn't run w/o the task calling poll or a waker telling task to try calling poll again.
<strmpnk> There are some articles out there but it didn't click for me until I tried to write some code and try some things.
<shachaf> I don't mean the Rust thing, I mean in general.
<strmpnk> It depends on the implementation.
<strmpnk> The callback based systems are popular with event loop based reactors.
<strmpnk> That can reschedule when a wait handle of some sort has been triggered.
<shachaf> Where do callbacks get into it?
<shachaf> I mean, sure, you can issue an operation and get notified when it's done and wake up.
<strmpnk> So the coroutine is one half of things. Somewhere at the top you need to tell your I/O scheduler to hold onto it.
<shachaf> Right. But that's not a language feature, just a library.
<strmpnk> In F# it's something like `myAsyncThing |> Async.RunSynchronously` and that will either dispatch to a thread which will juggle at yieldpoints or to a thread pool. It's just a library (FSharp.Core is worth a read if you want to avoid digging into reactors that hook up to epoll and the like).
<strmpnk> (`|>` is function application as a pipeline operator there if you haven't seen it, lots of languages have copied it from F# since)
<strmpnk> Same deal with Lua for example. You can create a coroutine but the asymmetric interface still means the caller needs to wire it up when it yields.
<strmpnk> In contrast, let's use C#'s Tasks for an example of hot execution. Once you get a task handle, it already exists as a working item in some pool. It also has executed up to the first yield point before returning to the creator of the task IIRC, which Rust doesn't do.
<strmpnk> That implementation is also callback based in the scheduling sense. Internally it uses the state machine transformation with case labels to control reentry.
<strmpnk> Rust's use of wakers sidesteps callbacks in a traditional sense but poll() itself is a callback.
<strmpnk> I'm probably not explaining it well. Perhaps I'll try to draw something up at some point as it's come up with others I've discussed this with recently.
<strmpnk> Erlang is probably the most unique in that it uses green threads but they are not native stacks (similar to Lua here). The reduction counters offer a sort of user-space preemption system that is independent of yielding. While it's not "fast" it can lead to very responsive systems.
<strmpnk> I'm more interested in state machine representations right now than the implementation of a pollset + eventloop system in Zig. If it can be competitive with performance and retain clear allocation patterns then I think the I/O stuff will follow through naturally. I'd almost discourage the use of the keyword `async` for this reason, but it's practical and familiar.
<gamester> So I could compile all my C dependencies with zig, and therefore lose the dependency on glibc later on, therefore making shipping binaries on linux easier?
<andrewrk> gamester, yep exactly. that will require to be done though, and also zig package manager will further make this easier
<andrewrk> and then we're going to want the fast incremental compiler (still W.I.P.) as lines of code explode due to people taking advantage of this
<andrewrk> (I don't see lines of code as a bad thing. Complicated software has a lot of dependencies, and it is a perfectly valid use case of zig to iterate quickly on a project which has to compile millions of lines of code)
<andrewrk> it'll take some time before zig can handle that use case though
<gamester> andrewrk: why would the lines of code explode? it's c code though which the incremental compiler doesn't compile?
<gamester> don't you just cache object files for c code?
<andrewrk> gamester, I'm predicting human behavior - if dependencies are easy then people will use them, and thus projects start to have more lines of code
<andrewrk> we don't cache object files for C code yet, and we don't cache object files for zig code yet either
<andrewrk> you can enable that with `--cache on` but it's not integrated into the build system yet
<gamester> okay I understand now, I just read that wrong
<emekankurumeh[m]> when you say "build system" do you mean build.zig files?
<andrewrk> emekankurumeh[m], yes, and I also mean that the stage1 compiler does not do any object-level caching, unlike the stage2 compiler
<emekankurumeh[m]> hmm, I could have sworn that I've seen object files in the zig-cache directory.
<andrewrk> those aren't cached, they are created every time
<andrewrk> it's important to leave them lying around, for example, on macos so that stack traces can work
<emekankurumeh[m]> is there any particular reason that the build system doesn't cache object files?
<emekankurumeh[m]> so `--cache on` is a WIP feature?
<andrewrk> no, `--cache on` is fully functional and has no known bugs
<andrewrk> it's on by default for `zig run`
<andrewrk> it caches the final result and does a full rebuild if the cache is invalidated
<andrewrk> build system caching is a different layer; it operates at the "step" abstraction level. so if your build script creates 2 executables for example, build system caching would be the thing that only rebuilds 1 of them if only 1 changed
<andrewrk> that effort hasn't begun yet
<andrewrk> and finally the stage2 incremental builds is an entirely separate layer. I've done a simple proof of concept of it, but that's on hold until coroutines rewrite is done
<andrewrk> I just pushed a breaking change regarding --target-* args, let me know if anyone has any questions
<andrewrk> runs zig fmt on a set of files/directories
<gamester> andrewrk: I was asking around on #wayland and it seems that the platform's (like Ubuntu) libwayland-client has to be dynamically linked to use Wayland. Probably no static linking and definitely no implementing the protocol yourself. So it's like windows' system libraries essentially. For why, see the pastebin which expires in 1 week: - I don't know what to make of this, it's beyond my knowledge.
<andrewrk> gamester, I agree with your reasoning in this paste
<andrewrk> I've run into similar issues when looking into this as well
<andrewrk> so here's what it means for zig and for you using zig
<andrewrk> (side note, I also agree with your frustration that nobody seems to care that the interface is the C library and not the actual protocol)
<andrewrk> I think the interface should be the protocol, not the C library.
<andrewrk> anyway, what it means for zig is that we recognize dynamic glibc as a target. for example: -target x86_64-linux-gnu
<andrewrk> after is solved this will work on any target (even when building on windows targeting linux for example), and it will then create a linux binary that only works with glibc dynamically linked with the standard path.
<andrewrk> for other applications that do not need to dynamically link glibc, they can use the default linux C abi. an example target triple with this is `-target x86_64-linux` (the abi is left off and will default to "musl")