<daurnimator>
fengb: what do you mean "leaving out"
<daurnimator>
fengb: so one thing I want to do is automatically create the event loop
<daurnimator>
fengb: essentially have a global pointer `current_event_loop`, which is a linked-list of nestable event loops
<daurnimator>
you can "get current event loop", which is just a pointer access
<daurnimator>
but then: low level async operations pass *which* event loop to do it in
ltriant has quit [Ping timeout: 245 seconds]
<daurnimator>
higher level functions would use the current loop. or if it's null: create an event loop; do the operation in it; and then destroy that loop
<daurnimator>
I can demo the equivalent in lua if you want :)
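A loose Zig sketch of the scheme described above; everything here except the `current_event_loop` name is invented for illustration:

```zig
// Hypothetical sketch only: EventLoop and withLoop are made-up names.
// The idea from the discussion: a global pointer to the innermost loop,
// with loops forming a linked list so they can nest.
const EventLoop = struct {
    parent: ?*EventLoop,
};

var current_event_loop: ?*EventLoop = null;

// Low-level async ops would take an explicit *EventLoop; higher-level
// helpers use the current one, or create a temporary loop when null,
// run the operation in it, and destroy it afterwards.
fn withLoop(op: fn (*EventLoop) void) void {
    if (current_event_loop) |loop| {
        op(loop);
    } else {
        var tmp = EventLoop{ .parent = null };
        current_event_loop = &tmp;
        defer current_event_loop = null;
        op(&tmp);
    }
}
```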
<daurnimator>
fengb: ah. sounds like that conflicts with the advice in 2910
<fengb>
I think the reasoning is that wasm-freestanding behaves like libraries since there’s not an actual main, whereas something like WASI provides an actual main
ltriant has joined #zig
<fengb>
daurnimator: I actually have no idea how an event loop works. I just use it everyday in JS land >_>
laaron- has quit [Remote host closed the connection]
laaron has joined #zig
SimonN has joined #zig
SimonNaa has quit [Ping timeout: 272 seconds]
<daurnimator>
fengb: there's many ways to do them; but very few designs are composable with other event loops
<daurnimator>
and that is super important for writing libraries
knebulae has quit [Read error: Connection reset by peer]
knebulae has joined #zig
<abbiya>
zig build-exe main.zig --library stdio --library c
<abbiya>
/home/seshachalamm/httpc/main.zig:4:14: error: expected type '[*c]const u8', found '[12]u8'
<abbiya>
s.printf("Hello world\n");
<abbiya>
^
<mikdusan>
`_ = s.printf(&"Hello world\n\x00");`
<abbiya>
error: expected type '[*c]const u8', found '*[13]u8'
<abbiya>
_ = s.printf(&"Hello world\n\x00");
<mikdusan>
maybe old version of zig? zig version
<daurnimator>
you probably want `c"hello world"` for a C string
<daurnimator>
(which will be the correct type; and also null terminated)
<abbiya>
const hw = c"Hello world\n";
<abbiya>
_ = s.printf(hw);
<abbiya>
thanks, i am going through docs
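Putting the suggestions together, a minimal working version for the Zig of this era (≈0.4; later versions removed `c"..."` literals because ordinary string literals became null-terminated):

```zig
// Build with: zig build-exe main.zig --library c
const s = @cImport(@cInclude("stdio.h"));

pub fn main() void {
    const hw = c"Hello world\n"; // null-terminated; coerces to [*c]const u8
    _ = s.printf(hw);
}
```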
avoidr has joined #zig
<abbiya>
is there a benefit of writing c code as zig code instead of plain old c ?
<daurnimator>
abbiya: pros: more safety; nicer language (subjective). cons: not all macros will work; translate-c has bugs
<daurnimator>
(bugs in that they crash the compiler; or leave out certain types; no known bugs in code emitted if it succeeds AFAIK)
<abbiya>
ok
<mikdusan>
pros: comptime, slices, optionals, error unions, union enums
<daurnimator>
mikdusan: those would be pros of writing zig code as zig code; vs "C-in-zig"
<gonz_>
abbiya: Are you looking to use Zig mostly to write code that ends up being used from C or things using C for FFI, or are you looking to make freestanding things with limited interaction with C libraries?
<gonz_>
For learning the language I have a suspicion that the latter is a bit easier and probably more informative in the end.
<mq32>
hey
<mq32>
i have two separate folders (lib and app)
<mq32>
how do i @import the main file from lib?
<daurnimator>
mq32: generally... you don't. unless you're looking for @import("root")?
<mq32>
daurnimator: my favourite would @import("mylib")
<mq32>
but i can also do @import("../lib/main.zig")
<mq32>
okay, i cannot use "../lib/main.zig" :(
<mikdusan>
why not?
<mq32>
error: import of file outside package path: '../gl/main.zig'
<mikdusan>
zig --main-pkg-path ..
<mq32>
hm
<mikdusan>
default pkg path is "."; change it to ".." and @import("gl/main.zig") should be found
<mq32>
but i'd like to build a package and import it
<mq32>
right now i've done a lib step, but that probably doesn't do what i want
<mikdusan>
oh i c.
<mq32>
ah
<mq32>
exe.addPackagePath("gl", "gl/main.zig");
<mq32>
this looks right and enables me to @import("gl")
<mikdusan>
i'm guessing that's build.zig analog to `--pkg-begin gl ../gl/main.zig --pkg-end`
<mq32>
yes, it is
<mq32>
--verbose verifies that
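In build.zig terms, the whole thing might look like the sketch below (API of this era; `addPackagePath` was later replaced by the module system). Paths are the ones from the discussion:

```zig
// build.zig sketch: expose gl/main.zig as the package "gl" so the app
// can do @import("gl"), equivalent to --pkg-begin gl gl/main.zig --pkg-end.
const Builder = @import("std").build.Builder;

pub fn build(b: *Builder) void {
    const exe = b.addExecutable("app", "src/main.zig");
    exe.addPackagePath("gl", "gl/main.zig");
    b.default_step.dependOn(&exe.step);
}
```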
<gonz_>
If something depends on something else, wouldn't it make more sense to actually place it in that subdirectory?
<gonz_>
Isn't that the issue you're running into with `..`?
<mq32>
gonz_: I'm trying to create both package and a test application that uses that package
<samtebbs>
I see, so this happens at the zig IR level
<samtebbs>
I really need to learn how comptime code is run. Do you know scientes ?
<scientes>
samtebbs, i've implemented quite a bit of it
<scientes>
for my SIMD patch set
<scientes>
and there is a big over-haul that is needed
<scientes>
so that you can comptime reference inside an array
<scientes>
like (&a[4]).* + 5
<scientes>
samtebbs, comptime is just an interpreter of the same language
<scientes>
interpreters are not new, what is novel is having an interpreter for the *same language*
<samtebbs>
I thought so, so there's a chance that there are discrepancies between what the interpreter does for a certain instruction, and what the native hardware does for the machine code corresponding to that instruction
<scientes>
samtebbs, that's why we have tests :_)
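The "interpreter for the same language" point is easy to demonstrate: the identical function body can be evaluated by the compiler's comptime evaluator or by the generated machine code (testing API details vary by Zig version):

```zig
const std = @import("std");

// One function, two interpreters: the compile-time evaluator
// and the real hardware at run time. The results must agree.
fn fib(n: u32) u32 {
    if (n < 2) return n;
    return fib(n - 1) + fib(n - 2);
}

test "comptime and runtime agree" {
    const a = comptime fib(10); // evaluated during compilation
    var n: u32 = 10;
    const b = fib(n); // evaluated at run time
    std.testing.expect(a == 55 and b == 55);
}
```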
<fengb>
Is it similar to Lisp macros?
<scientes>
no
* scientes
doesn't know about lisp macros, however it's not macros
<samtebbs>
I'm impressed at how robust it is, I've never encountered any issues.
<fengb>
I suppose Lisp is just AST manipulation so there's very little actual logic
<samtebbs>
Although that could be because the vast majority of my code runs at runtime
<mq32>
fengb: yeah. you can implement a "full" dialect of LISP in sed (stream ed) by pure text replacement
<scientes>
samtebbs, if you try to do something fancy you will run into issues
<scientes>
not correctness, but that it is still limited in what it can do
<fengb>
I'm always impressed that comptime does what I want. I didn't have to learn anything new to make it work
<scientes>
that's the whole point
<scientes>
and its great!
<fengb>
I'm happy it works so well
<fengb>
My brain always explodes with macros
<samtebbs>
Is Zig the first language/toolchain to have the concept of comptime? I know other languages have metaprogramming but it feels different
<mq32>
samtebbs: it's the only language i know that does it so *smoothly*
<fengb>
I think it's the only one that does it seamlessly. Every other one does it with special syntax and/or semantics
andersfr has joined #zig
andersfr has quit []
LargeEpsilon_ has quit [Ping timeout: 246 seconds]
carlin is now known as tines9
LargeEpsilon has joined #zig
marijnfs_ has quit [Ping timeout: 245 seconds]
marijnfs_ has joined #zig
lunamn has quit [Ping timeout: 245 seconds]
casaca has quit [Ping timeout: 252 seconds]
casaca has joined #zig
waleee-cl has joined #zig
Akuli has joined #zig
bheads has joined #zig
<bheads>
here is a fun one: PHINode should have one entry for each predecessor of its parent basic block!
<Tetralux>
fengb: Jai has it too.
<fengb>
Jai isn't real. But I thought it has special syntax for that (when instead of if)
<Tetralux>
That's Odin.
<Tetralux>
Jai does #if
<Tetralux>
Also saying "it isn't real" is overly pedantic - but I know what you mean.
<fengb>
But still, it's different syntax. Zig tries to let you support either in any context
<Tetralux>
It's different for a reason.
<Tetralux>
To make it clear you want it evaluated statically.
<fengb>
My point is Zig does it both semantically and syntactically
<Tetralux>
Otherwise, the code you write is identical.
<Tetralux>
I'm not sure I understand what you mean xD
<fengb>
Does Jai let you convert any function to run at compile time?
<Tetralux>
That procedure could be run at compile or runtime.
<Tetralux>
You'd just call it as `#run min(x, y)`, `#if min(x, y) == n` or call it normally in a comptime-run function.
<Tetralux>
(.. `i :: T` is the equiv of `const i: T = zeroed-but-with-default-field-values-set` in Zig.)
<Tetralux>
(.. In the case above, it's equiv to `pub fn min(...) {...}`
<Tetralux>
Q: What actually DOES the ArenaAllocator do?
<Tetralux>
Like - I'm used to arenas being just a mempool.
<Tetralux>
But glancing at the code for it
<Tetralux>
Looks more like it's allocating a single bucket for each thing you alloc.
<fengb>
It’s a manual arena that you destroy at the end of the context
SimonN has quit [Remote host closed the connection]
<fengb>
You generally initialize it at the beginning of a known lifetime and destroy it at the end, like an http request
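A sketch of that request-scoped pattern with `std.heap.ArenaAllocator` (allocator names here match 2019-era Zig; later versions use `page_allocator` and `arena.allocator()`):

```zig
const std = @import("std");

// One arena per request: allocate freely for the duration,
// then free everything with a single deinit at the end.
fn handleRequest() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.direct_allocator);
    defer arena.deinit(); // releases every allocation below at once

    const allocator = &arena.allocator;
    const body = try allocator.alloc(u8, 4096);
    _ = body; // ... build the response in arena-owned memory ...
    // no individual frees needed anywhere in between
}
```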
allan0 has quit [Ping timeout: 268 seconds]
SimonNa has joined #zig
porky11 has quit [Read error: Connection reset by peer]
porky11 has joined #zig
<andrewrk>
we have a green CI for the async functions branch
* donpdonp
claps
<fengb>
🥳
<torque>
maybe this has been answered and I missed it, but for the new async semantics, since functions can be synchronous or async based compile-time configuration and the caller is not supposed to have to care, what happens when you do `var myframe = async someFunction()` where someFunction is actually synchronous? does the frame just act as a simple wrapper for the result (and would calling cancel on it still result
<fengb>
Vague thought: is this some form of baby RAII?
<andrewrk>
what even is RAII?
<fengb>
Probably no need for copy, but we'll need to pair acquire and release
<andrewrk>
people use RAII to mean conflicting things
<fengb>
Honestly, I don't even know lol
<andrewrk>
this is a bit different right? C++ has the concept of a type having a constructor/destructor. this is function-based
<fengb>
Yeah
<andrewrk>
as an example, std.process.argsAlloc has std.process.argsFree
<torque>
andrewrk, gotcha. really excited about the async semantics, and having the freedom to use synchronous/async code interchangeably is really cool
<andrewrk>
if you just tried to put the destructor of [][]u8 then you might free it wrong. in this case there is only 1 allocation, but argsFree knows that
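The argsAlloc/argsFree pairing in use (2019-era std API; `std.debug.warn` and `direct_allocator` were later renamed):

```zig
const std = @import("std");

// argsFree knows that argsAlloc made a single allocation backing the
// whole [][]u8, so the pair must be used together; hand-freeing each
// inner slice would be wrong.
pub fn main() !void {
    const allocator = std.heap.direct_allocator;
    const args = try std.process.argsAlloc(allocator);
    defer std.process.argsFree(allocator, args);

    for (args) |arg| {
        std.debug.warn("{}\n", arg);
    }
}
```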
<andrewrk>
that interchangeably part is not done yet
<andrewrk>
but I'm hoping to make significant progress on that today
<torque>
I guess I should have said it's a really cool goal
<fengb>
So is this proposal adding a sort of "annotation" to indicate the cleanup cycle?
<fengb>
I just read the updated title >_>
<andrewrk>
fengb, yes. in the context of cancel, what that means is that @Frame(func) for a function that is inferred to be non-async, is nothing more than a wrapper for the return type of the function, which will have the answer after async returns
<Sahnvour>
andrewrk: really glad this is being reconsidered. weakiest point of zig for me atm
<Sahnvour>
weakest*
<andrewrk>
I'm super open to syntax suggestions on this one. It's a bit tricky because functions may or may not return error unions
<bheads>
782, time to look at ownership?
<fengb>
Stupid question: is cancelled British or American?
<halosghost>
both?
<halosghost>
(afaik)
<fengb>
And canceled is distinctly American?
<torque>
two ls is primarily british english I believe, yes
<fengb>
I wonder where I learned double-el
<nrdmn>
has anyone looked into building zig with BUILD_SHARED_LIBS=ON?
<bheads>
in a normal fn call ownership would transfer from .create to result in scope, then transfer to the return
<bheads>
in the async case the return would not transfer ownership when canceled
Akuli has quit [Quit: Leaving]
porky11 has quit [Quit: Leaving]
<bheads>
in fact you can also detect ownership when transferred to underscore: _ = try foo();
waleee-cl has quit [Quit: Connection closed for inactivity]
waleee-cl has joined #zig
<bheads>
I also wonder if you could make the ownership strongly typed, ie: the ownership type is tied to the allocator that created it, and would be an error to free it with a different allocator
<Tetralux>
Just my two cents, most of the time, I'd be taking ownership of the thing that's getting returned.
<bheads>
correct
<Tetralux>
I would not be allocating something only to want to clean it before returning.
<Tetralux>
I'd actively want to avoid that.
<Tetralux>
I might on a File though.
<bheads>
the point is to detect that you do free it correctly
<Tetralux>
A common pattern I have is to never clean up my memory.
<bheads>
you can say the same with files
<Tetralux>
It gets dealloc'ed anyway on process exit and my user doesn't want to have to wait for it.
<bheads>
but only true for short-lived programs
<Tetralux>
Indeed.
<Tetralux>
Though using a freelist, or scratch allocator is the same effect.
<Tetralux>
Random allocations are something I like to avoid.
<Tetralux>
So most of the time it's a scratch allocator, ring buffer, memory pool, or after-process-exit.
<Tetralux>
If I'm deallocing memory, it's almost certainly not on the heap.
<Tetralux>
Or if it is
<Tetralux>
..rather
<Tetralux>
It's in a bucket array or something.
<Tetralux>
If it's just a random allocation, then it's prob not final code.
<Tetralux>
It's probably just hacked together to get it working.
<Tetralux>
But I often skip that step and just use a pool or something to begin with.
<Tetralux>
I do like the idea of annotating the proc with this though.
<Tetralux>
`var res = errclean try Res.get(address);`
<Tetralux>
`var res = noclean try Bank.get(0);` // manual cleanup
<Tetralux>
I'm not sure about noclean really.
<Tetralux>
I'd like something clearer than that ideally
<Tetralux>
But manualclean is too long.
<Tetralux>
I'd be using it all the time.
<Tetralux>
I do like the clean/errclean/noclean thing though.
<Sahnvour>
allocating from a pool instead of the heap directly does not spare you from having to free it (except if you don't care)
<Tetralux>
It does not spare you, no. But it does spare you from having to free each individual thing.
<Tetralux>
.. so you'd noclean the individual things.
<scientes>
yeah arena alloc is great
<Tetralux>
(because attempting to clean them up is a waste of time.)
<Sahnvour>
not necessarily
<scientes>
its also much faster
<scientes>
and when you are going to free is soon anyways
<Tetralux>
Alloc a gig of virtual memory, never free it, use pool on it xP
<Sahnvour>
if you use a pool for a long-lived piece of code, you may want to free individual items to keep total memory usage low
<Tetralux>
You would not free, you'd use a freelist.
<scientes>
Tetralux, haskell allocs a terabyte of memory and never frees it
<Sahnvour>
that's exactly a free
<scientes>
the compiler that is
<Tetralux>
scientes: give whoever made virtual memory a cookie
<Tetralux>
I mean, no... not a free, literally. But I know what you mean.
<Sahnvour>
you're doing poolAlloc.free(thing), that's a free :p
<Sahnvour>
it does not have to give memory back to the OS
<gonz_>
Not a `free` as much as a "have your memory back, bitch, I'm done"
<gonz_>
If we're talking about the "allocate a shitton, never actually free, just finish" methodology
<Tetralux>
I like the BucketArray philosophy for long-running stuff.
mipri has joined #zig
<Sahnvour>
I was specifically talking about freeing _individual elements_ you'd get from a pool, not the whole thing. You still want to be able to reuse memory slots in your pool when some objects are no longer needed, and to do so you need to deallocate them.
<Tetralux>
You allocate N Ts at a time, if you use all of them at any one moment, allocate another N Ts, link them together.
<Tetralux>
I call a Pool, when you allocate a bunch, and then free all at once.
<Sahnvour>
I don't think that's how this word is usually understood, but maybe it's just me
<Tetralux>
I call a freelist when you have a bunch of Ts in an array, and each one is either being used or not.
<Tetralux>
Casey uses a linked-list to keep track of those, for instance.
<Tetralux>
( .. which ones are empty, that is.)
<Tetralux>
Also yeah
<Tetralux>
I'm not into the terminology.
<Tetralux>
xD
<Tetralux>
I just like one-or-two-syllable names for things.
<Tetralux>
Like Arena, Pool, FreeList.
<Sahnvour>
what's the difference between your pool and arena then ?
<Tetralux>
Was just gonna say - the first two are actually kinda the same.
<Sahnvour>
:P
<Tetralux>
Both may or not may not take a fallback allocator or something.
<Tetralux>
Probably not.
<Tetralux>
But if one did, Arena prob would and pool wouldn't.
<Tetralux>
There's also Scratch, which is just a ring buffer on top of a Pool.
<Tetralux>
It's also worth noting that a bucketarray can be on top of a pool too. ;)
<Tetralux>
So effectively, the elements are damn-near contiguous.
<Tetralux>
No - actually they _are_ contiguous.
<Tetralux>
So much fun x)
<nrdmn>
Tetralux: I have a list of memory slices. Which allocator do I want to use to allocate memory out of them?
<Tetralux>
Where do the slices come from?
<Tetralux>
Is the data in them something you wanna keep around?
<Tetralux>
Or is it more just that you allocate some space, run out of it, and so allocate some more space?
<nrdmn>
the slices are just empty space and I can use them for anything
<Tetralux>
Do you have a fixed number of them, or is it just "make as many as you need dynamically"
<Tetralux>
If the latter, mem.ArenaAllocator basically.
<nrdmn>
I have a fixed number of them
<Tetralux>
I don't really have enough information to tell you with confidence xD
<fengb>
Is it something like a FixedBufferAllocator but you need to use a few different ones and pretend it's one?
<fengb>
few different buffers*
<Tetralux>
It depends on the lifetime of the stuff in the slices, and how big they are.
<nrdmn>
Tetralux: alright, I'm interested in OS development and if you're working directly on the hardware you usually get a list of non-contiguous memory areas that you can use with some areas in between that you're not supposed to touch
<Tetralux>
..because a process is using them?
<fengb>
I don't think there's an existing allocator that tries to pretend it's one memory space
<fengb>
But... it's possible!
<THFKA4>
because that's how the hardware is wired up on the bus i think
<nrdmn>
Tetralux: because the device's firmware is using it, or because devices have their memory mapped there
<nrdmn>
if you're using linux, you can try `dmesg | grep BIOS-e820` to see such a list of usable and non-usable memory areas
<Tetralux>
Hmmm. Depends on scope a bit, but my first thought is a freelist-style thing for tracking empty space.
<Tetralux>
That gives you a []u8 of empty space, which you then divvy up when someone wants to allocate.
<fengb>
We also don't have a good general purpose allocator
<Tetralux>
If you wanted to be somewhat smart about it, you might try to store things of similar size next to each other, in a bucketarray.
<fengb>
I'm curious if this should be a part of the GPA or if it could be split out
<Tetralux>
fengb: What part? xD
<fengb>
The disparate memory spaces
<fengb>
My zee_alloc is able to handle split segments... but it assumes all the blocks are of fixed size so that's no bueno
<fengb>
(Theoretically... wasm doesn't ever provide non-contiguous blocks so I've never tested it)
<Tetralux>
Better to have multiple GPAs, each on top of a freelist.
<Tetralux>
And the freelists track empty space in each empty segment.
<fengb>
I think the idea is to have 1 allocator. It'll be much easier to consume that way
<Tetralux>
I was assuming the programmer specifically wanted to have several disparate spaces.
<fengb>
If it's separate, we can just have FixedBufferAllocator sit on top of each buffer, and slap a GPA on top of that
<fengb>
of each one*
<Tetralux>
You can't reset a fixedbufferallocator without freeing it though right?
<fengb>
Oh
<fengb>
Well your GPA that sits on top can do the freeing
<Tetralux>
Like, "I want you to forget that I ever alloced anything, but reuse the memory."
<Tetralux>
free = give back to OS, btw.
<fengb>
You can just reuse the memory with a new FBA
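A sketch of the per-buffer idea with `std.heap.FixedBufferAllocator` (the general-purpose layer on top is omitted, since Zig had no finished GPA at the time; API names are 2019-era):

```zig
const std = @import("std");

// Two disparate memory regions, each wrapped in its own
// FixedBufferAllocator. To "forget everything but reuse the memory",
// re-init a fresh FBA over the same buffer, as suggested above.
var region_a: [4096]u8 = undefined;
var region_b: [4096]u8 = undefined;

test "one allocator per region" {
    var fba_a = std.heap.FixedBufferAllocator.init(region_a[0..]);
    var fba_b = std.heap.FixedBufferAllocator.init(region_b[0..]);

    const x = try fba_a.allocator.alloc(u8, 16);
    const y = try fba_b.allocator.alloc(u8, 16);
    std.debug.assert(x.len == 16 and y.len == 16);

    // reset without giving anything back to the OS:
    fba_a = std.heap.FixedBufferAllocator.init(region_a[0..]);
}
```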
<Tetralux>
Maybe virtalloc a big buffer, like stupid big, and then put all the similarly-sized big things at the far end and the similarly-sized small things at the near end.
<Tetralux>
And then when the "ends" overlap, you alloc another big block and prepend the new one to a freelist.
<Tetralux>
Well - I guess you'd actually want to append it - because you'd want to put small things that can actually still fit into the first block still.
<Tetralux>
But maybe for speed, most things aren't gonna fit after some point, so then you prepend it.
<Tetralux>
I should make a FBA that you can reset, and a bucketallocator for the stdlib.
<Tetralux>
That'd be useful.
<Tetralux>
And/or a scratch.
<Tetralux>
RingAllocator*
<Tetralux>
I'd use the heck out of those things.
<fengb>
Just import zee_alloc :P
<fengb>
(Don't. I've barely tested it with a real project)
<Tetralux>
Should we have a constant-time mem.eql in the stdlib btw?
<fengb>
Like a secure compare?
<andrewrk>
there's an issue for constant-time mem eql
<Tetralux>
fengb: precisely.
<fengb>
That might actually be better for the default anyway. Same worst case and hard to use wrong
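The secure-compare idea, sketched. Hedged: real constant-time code must also worry about the optimizer undoing this; here only the early exit on content is removed (the length check still returns early, but lengths are usually public):

```zig
const std = @import("std");

// Unlike a short-circuiting mem.eql, this always scans every byte,
// so timing does not reveal where the first mismatch is.
fn constantTimeEql(a: []const u8, b: []const u8) bool {
    if (a.len != b.len) return false;
    var acc: u8 = 0;
    for (a) |byte, i| {
        acc |= byte ^ b[i]; // accumulate differences, never branch
    }
    return acc == 0;
}

test "secure compare" {
    std.debug.assert(constantTimeEql("secret", "secret"));
    std.debug.assert(!constantTimeEql("secret", "secreT"));
}
```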
<fengb>
The mac build has been timing out regularly :/
halosghost has quit [Quit: WeeChat 2.5]
<gonz_>
The world is so crazy with regards to security nowadays. The amount and magnitude of things you have to care and know about is staggering.
<gonz_>
Gone are the innocent days of buffer overruns and equally primitive things
<huuskes>
there's an easy solution to that
<gonz_>
When security was so bad stopping an SQL injection or buffer overflow kept pretty much everyone out
<gonz_>
huuskes: If the solution is to use a managed language, some of those things are just punting the issue to someone else and hoping they have a solution.
<gonz_>
The kinds of things you have to worry about nowadays are things that even other people could conceivably fail at.
<Tetralux>
andrewrk: What's the over-under on allowing things like `var x, y: usize = 0;` ?
<gonz_>
And indeed they have; we have security flaws deep in the stack, in the hardware, etc.
<huuskes>
no, the solution is the hakuna matata way of life haha
<THFKA4>
on the ARM Cortex-M3, the umull opcode takes 3 to 5 cycles to complete, the “short” counts (3 or 4) being taken only if both operands are numerically less than 65536 (so says the manual, but see below). Thus, such a multiplication should be made constant-time by forcing the high bits to 1.
<THFKA4>
uhhh, are we going to have constant time ops per arch in stdlib?
<scientes>
Tetralux, i had a similar proposal: x = y = 0;
<scientes>
but I don't think it has much change (and I don't care much)
<scientes>
*chance
<scientes>
IBM cloud doesn't even offer PowerPC
<gonz_>
huuskes: Ah, yeah. I half agree. There are obviously lots of cases where not caring just isn't acceptable, though.
<scientes>
oh srry, wrong channel
<Tetralux>
Surely the solution is to make a comptime program that is able to scan every AST in your program and check it for weirdness :3
<huuskes>
gonz_: like?
<mq32>
gonz_: yeah it's creepy how hardware itself is vulnerable (see meltdown, spectre, …)
<mq32>
also consttime would be a cool feature
<mq32>
even if it just means to burn CPU cycles for nothing
<gonz_>
huuskes: If you make it easy to leak someone's password because you're not up to date on how people can figure it out, that's not really acceptable. Even small, low impact deployments, if they're publicly accessible, can easily have flaws that ripple out to bigger deployments for many reasons.
<gonz_>
There's just a world of hurt out there. I think it's creepy, as mq32 said.
<scientes>
mq32, and that hardware is working as designed!
<mq32>
scientes, gonz_: yeah. the question is: was it the right design?
<mq32>
and i'm more on the "yes" side of the question
<scientes>
mq32, ./crypto/modes/gcm128.c
<mq32>
yeah, it makes security flaws, but this stuff brings a lot of performance
<nrdmn>
writing your own EFI apps is quite fun tbh
<nrdmn>
and it gives you some understanding of what your firmware initializes before it loads your OS
<Tetralux>
I wouldn't mind fiddling with that at some point.
<Tetralux>
Best get-started resources?
<nrdmn>
Tetralux: the EFI spec. It's well written. Stay away from gnu-efi and edk2 until you know how it works
<Tetralux>
By 'stay away' - the alternative?
kristoff_it has quit [Ping timeout: 246 seconds]
<nrdmn>
write your own to learn how the basics work, then switch over to either of them
<nrdmn>
I'm currently working on bringing basic efi support to zig's standard library
<andrewrk>
nrdmn, I just got the big branch I was working on green, so I should be able to merge that into master today and have a more balanced attention span for PRs & bugs
<nrdmn>
basically efi apps are just relocatable exe files that get a list of function pointers passed to their entry point
<nrdmn>
andrewrk: take your time :)
<mq32>
nrdmn, how much work would a basic "graphics output" with EFI be?
<Tetralux>
Is there a way to discover what arch you're running on as part of the loading process?
<Tetralux>
Or do you just have to compile the code for the arch you then attempt to load it on.
<Tetralux>
Namely, is it possible for the EFI program to bundle code for multiple archs and then run the one it's loaded on?
<nrdmn>
I don't think that is possible, but I'm not sure
<nrdmn>
see 3.5.1.1 (page 87). To create a boot medium supporting multiple architectures, you can put one file for each architecture there and the firmware loads whatever it can
<andrewrk>
mikdusan, that's the next thing I'm working on, need to figure out how cancel works first
<andrewrk>
your syntax is good
<shachaf>
Do Zig coroutines support local use with explicit context switches (like e.g. Python generators) as well as the pseudo-thread use?
<shachaf>
I guess I should watch that video I still haven't looked at.
ltriant has joined #zig
kristoff_it has joined #zig
kristoff_it has quit [Ping timeout: 245 seconds]
<daurnimator>
shachaf: they're not coroutines any more. they're "async functions"; where that means that they return at a time other than when they are called
<daurnimator>
shachaf: but yes. the tricky thing (sometimes) though is that you need to resume the frame from the innermost suspend; not the outer one
<shachaf>
What's the distinction?
<shachaf>
I agree that that's a tricky thing in unifying these two (mostly very similar) use cases.
<daurnimator>
shachaf: I think the answer was: "coroutine" comes with too much baggage; "async function" describes it better
<shachaf>
I think it comes with much less baggage? It's just a statement about control flow, whereas "async" suggests a much more specific use.
<shachaf>
Anyway names aren't very important.
<fengb>
Coroutine can be like goroutines / green threads
<daurnimator>
I believe that we'll barely see 'async' in user code
<daurnimator>
user code will interact at a much higher level
<daurnimator>
(thanks #1778!)
<shachaf>
That maybe makes sense if you're talking about asynchronous I/O, but probably not for producer-consumer-style coroutines, for example. If that sort of use is supported.
<daurnimator>
Even then I'm not sure. It might be wrapped up behind some API
<shachaf>
There are all sorts of interesting use cases for semi-coroutiney things.
<daurnimator>
I think for loops over generators will be a pretty big thing for zig
<shachaf>
For example, in C, if something like snprintf fails because the buffer is too small, you normally restart from the beginning with a larger buffer.
<shachaf>
But it could be interesting if it just wrote what it can and suspended its state so you could e.g. flush the output it generated and resume it with more space.
<Tetralux>
"with more space" - Isn't that the same thing?
<shachaf>
No, because it doesn't have to restart and do the work from the beginning.
<shachaf>
(In particular you could reasonably use a fixed size buffer for arbitrary size output.)
<daurnimator>
I'd be interested if someone could write up an iterator prototype with async functions
<daurnimator>
start with something simple like a Range() iterator
<fengb>
andrewrk mentioned he’s open to standardizing generators
<fengb>
And for loops over generators!
<daurnimator>
will something like this be possible?: `fn range(it: Iterator, from: usize, to: usize) void { var i=from; while (i < to) : (i+=1) { it.yield(i); } }; fn Range(from: usize, to: usize) Iterator { return Iterator.new(range, from, to); }`
<daurnimator>
shachaf: ^ notice the lack of `async` keyword: I think it would all be internal to the Iterator implementation
<fengb>
I don’t think a proposal was written up yet, but he did mention that generators = multiple yielded values, which is somewhat different from async
<fengb>
And it’d facilitate things like async generators
<fengb>
Maybe I’m getting too ahead of myself here :P
<shachaf>
daurnimator: I'm not sure I see what Iterator is here.
<daurnimator>
shachaf: a hypothesized standard library piece that would let you make/use iterators easily...
<shachaf>
What does this get compiled into?
<shachaf>
I mean, hypothetically.
<daurnimator>
`it.yield(T)` would store the argument in the iterator object and call `suspend`. to use the iterator you'd call some `next` function that `await`-ed and then fetched the value out as stored by `yield`
<shachaf>
I see, so the idea is that to yield a value you store it in the Iterator struct.
<daurnimator>
something like that. I'm writing this down for the first time now
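To make the shape concrete, a very rough sketch under the proposal-era async syntax (none of this was settled at the time, and `suspend`/`resume` details changed repeatedly; `RangeGen` is an invented name):

```zig
const std = @import("std");

// "yield" = stash the value in the object, then suspend;
// "next" = resume the frame and read the stashed value back out.
const RangeGen = struct {
    value: usize = 0,
    done: bool = false,

    fn run(self: *RangeGen, from: usize, to: usize) void {
        var i = from;
        while (i < to) : (i += 1) {
            self.value = i; // yield the next value
            suspend {}
        }
        self.done = true;
    }
};

test "for-loop-over-generator, by hand" {
    var gen = RangeGen{};
    var frame = async gen.run(0, 3); // runs until the first suspend
    var sum: usize = 0;
    while (!gen.done) : (resume frame) {
        sum += gen.value;
    }
    std.debug.assert(sum == 3); // 0 + 1 + 2
}
```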
<fengb>
Hmm I’ve never thought about implementing generators using async before