ChanServ changed the topic of #zig to: zig programming language | https://ziglang.org | be excellent to each other | channel logs: https://irclog.whitequark.org/zig/
<geemili> Is there any documentation on the `build.zig` format?
<daurnimator> no
<geemili> Ah, okay
fsateler has quit [Quit: ZNC 1.7.2+deb2 - https://znc.in]
fsateler has joined #zig
wilsonk has joined #zig
tgschultz has joined #zig
marler8997 has joined #zig
curtisf has joined #zig
<andrewrk> daurnimator, here's a sense of progress on the result location branch: https://github.com/ziglang/zig/commit/ca0988e1d01bd121100590fb97f8cd9dde15b7d8
<scientes> nice
<andrewrk> so now it's a matter of working through those and getting them all passing again, then doing the same for std lib tests
<andrewrk> fengb, in place merge sorting has the option to give an allocator and it will allocate N/2 (or sqrt(N) for some impls) memory for cache. then perf should be roughly the same as a stack based solution
<curtisf> about how long does it take to run all of zig's tests?
<andrewrk> a couple hours if you do all of them including the release mode builds
<andrewrk> about 15 minutes if you do -Dskip-release
<andrewrk> depending on what you actually changed there is probably a much faster subset of tests to run
<andrewrk> geemili, here's the issue to subscribe to to get notified when there are docs for zig build system: https://github.com/ziglang/zig/issues/1519
<curtisf> what's in the release mode builds that makes it so much slower?
<andrewrk> LLVM optimizations
<curtisf> ah, does -Dskip-release run the same tests just without optimizing?
<andrewrk> yes. it's a test matrix and so there are considerably fewer cells in the table with -Dskip-release
<andrewrk> it's [libc, nolibc] * [debug, release-fast, release-small, release-safe] * [macos, windows, linux] * [multithreaded, single-threaded]
<andrewrk> plus some other miscellaneous stuff
<andrewrk> I really should disable non-debug builds for compile errors, that's rather pointless
curtisf has quit [Quit: Page closed]
<scientes> you can also do test-behavior, et cetera
<scientes> to just run specific tests
<marler8997> looks like `zig run` is passing the executable twice to the generated exe...
<marler8997> is this one known?
<andrewrk> marler8997, are you sure it's a problem? that's how execve works
<marler8997> yeah definitely happening
<andrewrk> oh is this a windows/posix issue maybe?
<marler8997> Well, I know that CreateProcess and execve work differently
<marler8997> with execve, you pass the program path as the first arg, and also in the argv array
<marler8997> but CreateProcess doesn't
<marler8997> I'm guessing that however `zig run` creates the command-line, it's adding the exe path to the "args" parameter as well, like you should for execve
<marler8997> lpApplicationName and lpCommandLine
<scientes> in Linux you can query both
<marler8997> basically, lpApplicationName shouldn't be included with lpCommandLine
<marler8997> whereas with execve, you do pass the exe path to both
<scientes> the path it was invoked under is in the auxv, and the first arg is in ARGV[0]
<marler8997> yeah not talking about how to access it
<marler8997> talking about the correct way to call CreateProcess
<marler8997> basically it looks like zig is calling CreateProcess the same way it calls execve, which is incorrect
<scientes> well I am asking what
<scientes> LPCSTR lpApplicationName,
<scientes> is
<scientes> LPSTR lpCommandLine,
<marler8997> I'm guessing the fix is to child_process.zig in the windowsCreateCommandLine function?
<scientes> which is the path it was invoked under, and which is ARGV[0]?
<marler8997> lpApplicationName will get passed to ARGV[0]
<scientes> and command line starts with ARGV[1]?
<marler8997> correct
<marler8997> at least that's the convention
<marler8997> that's how Windows' C runtime works
<marler8997> basically, they use GetCommandLine to get the arguments for argv[1]...
<scientes> marler8997, if you can prepare a patch that would be best
<scientes> as you could test it
<marler8997> I can create a patch
<marler8997> but I haven't built the compiler yet
<marler8997> so I can't really test it
<marler8997> I'll see if I can build the compiler though, at some point I'll need to anyway
<marler8997> It's been a few years since I've built it, hopefully it's easier than it was :)
<scientes> well as long as the test suite tests it it should be good
<scientes> ... and I don't see any tests
<daurnimator> andrewrk: getting closer :) seems like you've had to work around a few things in e.g. bootstrap. Will you clean up with a rebase?
marler8997 has quit [Quit: Page closed]
marler8997 has joined #zig
<marler8997> hey, looks like I was actually able to build zig on windows...having the pre-built llvm really helped
Zaab1t has joined #zig
Zaab1t has quit [Client Quit]
marmotini_ has joined #zig
jjido has joined #zig
jjido has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
kristoff_it has joined #zig
very-mediocre has joined #zig
vexu has joined #zig
marijnfs has joined #zig
kristoff_it has quit [Ping timeout: 245 seconds]
neceve has joined #zig
_whitelogger has joined #zig
kristoff_it has joined #zig
st4ll1 has quit [Quit: WeeChat 2.5]
marler8997 has quit [Remote host closed the connection]
porky11 has joined #zig
utzig has joined #zig
lunamn_ has joined #zig
hio has joined #zig
cloudhop has joined #zig
lunamn has quit [Ping timeout: 245 seconds]
vexu has quit [Quit: WeeChat 2.5]
<SamTebbs33> I love Zig more and more every day.
<SamTebbs33> When debugging some of my kernel code written in Zig I get an index out of bounds error that I can easily trace and fix.
<SamTebbs33> With C that would have just happened silently in the background without me ever knowing.
marmotini_ has quit [Ping timeout: 268 seconds]
cameris has joined #zig
porky11 has quit [Quit: Leaving]
very-mediocre has quit [Ping timeout: 256 seconds]
vexu has joined #zig
cloudhop has quit [Ping timeout: 256 seconds]
<daurnimator> How can I cast an array?
<SamTebbs33> `var arr1 = [_]u8{x} ** N; var arr2 = [_]f32{0} ** N; for (arr1) |v, i| arr2[i] = @intToFloat(f32, v);`
<SamTebbs33> jk
lunamn_ has quit [Quit: leaving]
cameris has quit [Quit: leaving]
hio has quit [Quit: Connection closed for inactivity]
Akuli has joined #zig
<daurnimator> hrm.... how can I cast a slice to a pointer to an array?
<andrewrk> daurnimator, nah I don't rebase once a branch gets big like this, it's pointless
<daurnimator> error: expected type '*[8]u8', found '[]align(4) u8'
<daurnimator> andrewrk: ^ I know the slice is at least 8 bytes.... how can I cast the slice to an array pointer?
<BitPuffin> is the use keyword undocumented?
<marijnfs> daurnimator: .ptr()
<daurnimator> BitPuffin: was renamed to 'usingnamespace'
<BitPuffin> daurnimator: also undocumented? :)
<BitPuffin> but nice
<BitPuffin> I didn't know about it
<BitPuffin> saved me a lot of boilerplate
<daurnimator> BitPuffin: at least its in the grammar ;)
<BitPuffin> yeah haha
avoidr has joined #zig
neceve has quit [Ping timeout: 258 seconds]
neceve has joined #zig
porky11 has joined #zig
neceve has quit [Ping timeout: 268 seconds]
neceve_ has joined #zig
hio has joined #zig
<BitPuffin> andrewrk: are you still considering removing it or is it ok for me to use it
<andrewrk> BitPuffin, it's stable now
cameris has joined #zig
<BitPuffin> oh nice!
<BitPuffin> Can I add a comment about that on the issue? Just in case someone reading the issue is deterred from using it
<andrewrk> I do plan to put some deterrents in the documentation for it
<andrewrk> current best practice is to use 1 layer of namespacing, so you can see where something comes from in the file
<BitPuffin> I agree
<andrewrk> but `usingnamespace` has some important use cases
<BitPuffin> with that decision
<BitPuffin> I generally think the explicitness of assigning my imported symbols is probably a good thing even though it can be cumbersome
<andrewrk> for example in the standard library it's used to organize cross platform abstractions
<BitPuffin> but when it comes to importing a C header with hundreds of definitions it's kind of not nice, I did it with keyboard macros in the file I linked above
<andrewrk> so then usage will look like c.foo()
<BitPuffin> that's really nice
<BitPuffin> in my case there was some things that the cInclude couldn't handle
<BitPuffin> so I had to define them myself, and then somehow merge them into the same zig namespace
<andrewrk> ah yes, so there's the use case for it then
<cameris> Hi, given a `comptime T: type` and a function with parameter `a: T`. How would I cast `a` into a `[]const u8`?
<BitPuffin> in this case it was these ones if you're curious
<BitPuffin> that's the actual macro
<andrewrk> cameris, if I understand correctly, my recommendation would be to have T be the element type. this is what std.mem functions do: https://github.com/ziglang/zig/blob/9e8db5b7505a6c2e601c8a94b9d8a4282b5df184/std/mem.zig#L225
<andrewrk> BitPuffin, it's theoretically possible for zig to be able to translate those macros, but it would be pretty advanced
<BitPuffin> does the compiler output any information that could be useful for IDEs?
<BitPuffin> yeah
<BitPuffin> I know it only supports a subset of the pre-proc for simplicity
<andrewrk> I'm not aware of anything like that currently. IDEs are considered important but not a problem I've tackled yet. big plans for the self hosted compiler with IDEs
<andrewrk> self hosted compiler is designed from the ground up to be a long running process providing deep introspection into the compilation
<cameris> andrewrk, but std.mem.toBytes and asBytes do not work for slices.
<andrewrk> cameris, I'd be happy to make more suggestions if I could see some code from your use case
<andrewrk> right now though I'm going to close IRC and make some progress on my result location branch
<donpdonp> https://langserver.org/ might be worth keeping in mind, a standard for an ide and a compiler to pass info back and forth
<cameris> but that's the whole thing. All the occurrences of std.mem.toBytes would need to be replaced by a function that returns `[]const u8`
<tgschultz> it is deliberate that toBytes and asBytes don't work for slices
<cameris> tgschultz, i think so, because somewhere down the function AsBytesReturnType checks for `trait.isSingleItemPtr`
SamTebbs33 has quit [Quit: leaving]
<tgschultz> I don't understand your use case. Given `a` of known type T, you want a slice of the bytes of `a`, right? assuming T is a pointer then `std.mem.asBytes(a)` will give you a `*[@sizeOf(T)]u8` which can be sliced.
<cameris> yeah, but T could also be a primitive type.
<tgschultz> then you have to branch on that: `const my_slice = if (comptime std.meta.trait.isPointer(T)) std.mem.asBytes(a)[0..] else std.mem.asBytes(&a)[0..];` or something.
<tgschultz> if T is a slice, then @sliceToBytes(a) is what you want.
<tgschultz> well, probably
<tgschultz> the trait function may be `isPtr`, I don't remember.
<cameris> also the toBytes does return a copy of the given primitive type, not a pointer/slice
<tgschultz> yes, that's the difference between `asBytes` and `toBytes`
<cameris> that's why I thought I would cast primitives with `@ptrCast([*]const u8, &a)[0..@sizeOf(T)]`
<Sahnvour> andrewrk: saw this talk in my feed the other day, only watched the beginning for now but might be of interest https://www.youtube.com/watch?v=N6b44kMS6OM
<Sahnvour> the parallel with an ECS particularly resonates with me, and I think there might be a lot to draw from the data-oriented approach for stage2 ... just haven't thought this through yet (and I have very very little experience actually implementing compilers so...)
marijnfs_ has joined #zig
kristoff_it has quit [Ping timeout: 246 seconds]
heitzmann has quit [Quit: WeeChat 2.5]
heitzmann has joined #zig
wilsonk has quit [Ping timeout: 258 seconds]
wilsonk has joined #zig
vexu has quit [Ping timeout: 245 seconds]
vexu has joined #zig
neceve_ has quit [Read error: Connection reset by peer]
kristoff_it has joined #zig
neceve has joined #zig
kristoff_it has quit [Ping timeout: 272 seconds]
vexu has quit [Ping timeout: 248 seconds]
vexu has joined #zig
SeedOfOnan has joined #zig
very-mediocre has joined #zig
neceve has quit [Remote host closed the connection]
vexu has quit [Quit: WeeChat 2.5]
neceve has joined #zig
kristoff_it has joined #zig
jjido has joined #zig
jjido has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<SeedOfOnan> I'm trying to add support for cross target cpu features. I thought it just needed a small addition to codegen.cpp at the call to LLVMCreateTargetMachine. I tried hard-coding there "cortex-m4" and "+no-movt", but that didn't work. Can anyone give me some ideas?
<andrewrk> Sahnvour, nice, I actually already have this bookmarked to watch
<andrewrk> SeedOfOnan, as a last resort you can read the clang source code
<scientes> Sahnvour, interesting, that is what zig has already wanted to do
<scientes> the old model was because ram was scarce
<scientes> it was designed for punch cards
jjido has joined #zig
kristoff_it has quit [Ping timeout: 245 seconds]
urluck has quit [Ping timeout: 250 seconds]
urluck has joined #zig
<cameris> tgschultz, thx. This pointed me in the right direction.
<scientes> do GPUs have simd?
<scientes> oh yes they do
Akuli has quit [Quit: Leaving]
Ichorio has joined #zig
marijnfs__ has joined #zig
<marijnfs__> i got a Risc V board that has a c-like toolchain
<marijnfs__> is riscv targetable with zig? I think there are clang branches that have it
<scientes> marijnfs__, it is
<marijnfs__> hmm ill try to have a look, would be cool
<marijnfs__> i was thinking of a timsort that doesn't need an allocator
<marijnfs__> I think it's mostly possible
lunamn has joined #zig
<Sahnvour> scientes: they kind of are simd
<scientes> yeah but do they have simd instructions too
<scientes> how big are the registers
kristoff_it has joined #zig
<scientes> and how much memory do you have in the __local and __private sections?
<scientes> it is amazingly difficult just to get an overview of these things
<Sahnvour> that I do not know specifically but I believe there should be good resources, depending on the architectures
<scientes> probably AMDGPU as there is llvm support for that
<Sahnvour> amd GCN is pretty well documented and understood
<scientes> GCN?
<scientes> and what is even the target to pass to clang?
<scientes> I want to see something on godbolt.org
<Sahnvour> that's the architecture family of AMD GPUs until recently, for example used in the ps4/xone
kristoff_it has quit [Ping timeout: 246 seconds]
<Sahnvour> andrewrk: nice, by "always wanted to do" are you referring to the low latency compiling for use in IDEs and such, yeah?
<Sahnvour> oops sorry, wrong tag ... I meant scientes
<scientes> well its andrew that has been leading that
<scientes> its his idea
<scientes> I really want to get to cross-platform simd programming with zig
<scientes> because C is not delivering that
<scientes> IBM is even trying to support the x86 API on PowerPC
<scientes> (a subset of course)
marijnfs__ has quit [Ping timeout: 272 seconds]
very-mediocre has quit [Ping timeout: 256 seconds]
vexu has joined #zig
neceve has quit [Read error: Connection reset by peer]
porky11 has quit [Ping timeout: 250 seconds]
<andrewrk> woo, all the array tests passing in result location branch
<jjido> what is "result location" branch?
<andrewrk> jjido, it's https://github.com/ziglang/zig/pull/2602. I'll do a quick demo of it on the stream in 5 minutes
marijnfs__ has joined #zig
<andrewrk> which is starting now https://www.twitch.tv/andrewrok/
<jjido> not too interested in that sorry. Copy or no copy, same semantics (hopefully)
<marijnfs__> sweet
jjido has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
jjido has joined #zig
vexu has quit [Quit: WeeChat 2.5]
redj has quit [Read error: Connection reset by peer]
<mikdusan> not to detract from twitch stream so I'll put it here; the math seemed off for 64-bit address space exhaustion
<mikdusan> 96 GiB/5mins = 96 * 2^30 / (5 * 60) = 343597383 bytes/s (327 MiB/s)
<mikdusan> 343597383 * (365 * 24 * 60 * 60) = 10835687070288000 bytes/year (9.6 PiB/year)
<mikdusan> 2^64 / 10835687070288000 = 1702 years until 64-bit address space exhaustion
<cameris> was the purpose to reach the virtual memory limit?
<cameris> some shells, e.g. bash, provide the `ulimit -v` to set a limit for the shell and therefore its children.
<mikdusan> i wasn't following too close but it had to do with mmap and what if you don't go thru trouble of munmap
<mikdusan> (in reference to a (general purpose?) debug allocator
cameris has quit [Quit: leaving]
<squeek502> mikdusan, iirc only 48 bits of the 64 bits are actually usable
<mikdusan> right. also related if i'm not mistaken, new sunnycove microarch from intel is bumping to 57 (yes 57, not 56) bits.
kristoff_it has joined #zig
<mikdusan> squeek502: and good that you brought it up. if my math is true, 48-bits would be exhausted in 9.5 days?
kristoff_it has quit [Ping timeout: 248 seconds]
ltriant has joined #zig
Ichorio has quit [Ping timeout: 250 seconds]
<andrewrk> yeah I think we need a plan for address space exhaustion
<scientes> mmap on Linux won't give you addresses > 48 bits unless you pass a flag asking for them
<scientes> which is fine
<andrewrk> why would there be a limit on virtual memory? :( that's unreasonable
<andrewrk> my system says "unlimited". I wonder what the defaults are on other systems
<scientes> there is a big problem where lots of graphics cards only implement 48-bit DMA, so the memory has to be mapped flat, which isn't the case on PowerPC
<scientes> wut, ulimit on virtual memory, that makes no sense
<andrewrk> yeah that's what I'm saying
<scientes> maybe a limit on the number of mappings
<scientes> but not the amount of VM
<andrewrk> sure that's reasonable
<scientes> or on the number of dirty pages
<andrewrk> hmm I just realized something potentially problematic
<andrewrk> if you accidentally get mappings close together, you might be using less mappings than you expect, and so then when doing unmap later it might fail unexpectedly
<andrewrk> I think it was a mistake to design mmap such that unmapping can fail if you split a vm page. it might be actually impossible to design stable software with that interface
<scientes> munmap can handle that
<scientes> i thought...
<andrewrk> yeah the problem is that unmap can actually be resource *allocation* when it should be *deallocation*
<andrewrk> who in their right mind thought it would be a good idea to make a function simultaneously do (fallible) resource allocation and resource deallocation in the same call?
<scientes> if it guarantees SIGSEGV, sure
<scientes> yes wasm felt that mmap() was a bad interface
<andrewrk> I actually think windows gets it right
<andrewrk> with VirtualAlloc. it's basically mmap/munmap except VirtualFree doesn't let you split pages and it can't fail.
<scientes> oh I see, yeah splitting pages is stupid
<andrewrk> the debug allocator won't be safe for the wasm target
<scientes> however PowerPC has multiple page sizes, so it can still do that....
<andrewrk> this trick doesn't work for "sbrk" backing APIs
klltkr_ has joined #zig
<andrewrk> handling address space exhaustion will be an interesting problem. because currently it's "fire and forget" - address space is permanently leaked on free
klltkr_ has quit [Ping timeout: 258 seconds]
<mikdusan> it sounds like munmap return EINVAL is going to have to be ignored since mmap *may* merge
<andrewrk> I think that's right
<scientes> I don't get it
<andrewrk> but not EINVAL, it's ENOMEM
<scientes> I thought you could just munmap anything, as long as it is page aligned?
<andrewrk> "ENOMEM The process's maximum number of mappings would have been exceeded. This error can also occur for munmap(), when unmapping a region in the middle of an existing mapping, since this results in two smaller mappings on either side of the region being unmapped."
<scientes> oh ENOMEM yes i see
<scientes> yeah you just ignore it
<scientes> simple
<scientes> but there should be a flag so that the kernel doesn't even try to unmap it
<andrewrk> if it ever gets ignored it's a leak though
<scientes> what if you try to unmap unmapped memory?
<scientes> does that fail too?
<andrewrk> basically every posix system based on mmap/munmap can leak memory under rare conditions
<scientes> this could be fixed by allowing unmapping unmaped memory
<andrewrk> how?
<mikdusan> unmapping unmapped memory is NOT an error as indicated by manpage
<scientes> you just later unmap the whole mapping
<scientes> and that won't fail with ENOMEM
<scientes> you have to store a pointer and length for every mapping you ever make
<andrewrk> the problem is that when you unmap something, you're clearly going to get rid of that pointer/length for it, because it should be gone now
<scientes> (or you could even ask proc from that info)
<andrewrk> this is what I'm talking about - they've mixed together a function that is supposed to deallocate a resource, with a function that allocates a resource
<scientes> andrewrk, yeah but later you can unmap the whole thing, instead of that segment
<scientes> it should also work if you always unmap the same size that you allocate
<andrewrk> what whole thing? that's my point about if you "accidentally" mmap 2 independent things next to each other
<mikdusan> yeah i found some link that says glibc workaround is to use mmap for max 65535 mappings, then they pivot to using sbrk.
<andrewrk> if the kernel says oh look at this I'm merging these vm pages together now, then it incorrectly gives you an extra mapping that you shouldn't have, and then an unmap will fail and you can't do anything about it
<scientes> ewwwwwwwwwwwww
<scientes> you could also just space mappings out
<andrewrk> how?
<scientes> that would work on 64-bit systems
<scientes> oh you would have to pass addresses
<scientes> ugghhhh
<scientes> you have so much friggen address space it might actually work however
<andrewrk> it will be ok if the kernel devs were smart enough to not get too clever and keep vm mappings separate if they came from separate mmap calls
<mikdusan> scientes: but you'd have to know how much space guarantees no merging
<scientes> just start at 0x4200000000, and each mapping is 2^32 bytes apart
<andrewrk> in this case the caller can be guaranteed munmap will not fail if they always remove full mappings
<scientes> or other horrible hacks.....
<mikdusan> hey kernel, create me 1 file. then another. merge them but don't tell me about it. thanks!
<andrewrk> I mean that's basically what overcommit is
<scientes> ....that is a whole other can of worms
<andrewrk> the kernel tells you you have more memory than you have, and then yanks the rug out from under you when you try to use it
<scientes> but its really hard to undo that decision
<scientes> after the fact
<andrewrk> agreed
<scientes> VMs these days are also overcommited on memory
<scientes> and then they migrate when things get tight