ChanServ changed the topic of #zig to: zig programming language | https://ziglang.org | be excellent to each other | channel logs: https://irclog.whitequark.org/zig/
tiehuis has joined #zig
<tiehuis> ganderzz: you are returning a stack local variable 'buffer' from that clean function. I would suggest instead taking a slice of bytes as a parameter which the function can write to
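A minimal sketch of tiehuis's suggestion: the caller owns the buffer and passes a slice in, so nothing stack-local escapes the function. The function name `clean` comes from the conversation; its body here (stripping spaces) is a made-up placeholder.

```zig
const std = @import("std");

// Hypothetical example: write into a caller-provided slice and return
// the portion actually used, instead of returning a stack-local array.
fn clean(out: []u8, input: []const u8) []u8 {
    var len: usize = 0;
    for (input) |c| {
        if (c != ' ') {
            out[len] = c;
            len += 1;
        }
    }
    return out[0..len];
}

test "clean writes into the caller's buffer" {
    var buf: [32]u8 = undefined;
    const result = clean(buf[0..], "a b c");
    std.debug.assert(std.mem.eql(u8, result, "abc"));
}
```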
<schme245> how do I allocate a struct on the heap?
<andrewrk> schme245, you need an allocator
<schme245> and then `allocator.create`?
<andrewrk> the best place to get one depends on what kind of application you are making. if you're linking libc, you probably want std.heap.c_allocator. if you're making a command line application, I recommend the example set by std/special/build_runner.zig. if you're making a library, you should probably accept an allocator as an argument
<andrewrk> allocator.create/destroy is for a single-item pointer; allocator.alloc/free is for slices
<schme245> gotcha, I'm familiar with allocators but I've only been using them with slices so far, thanks
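A sketch of the single-item vs. slice distinction andrewrk describes, using a hypothetical `Point` struct (exact `Allocator` method signatures vary between Zig versions of this era):

```zig
const std = @import("std");

const Point = struct {
    x: i32,
    y: i32,
};

fn demo(allocator: *std.mem.Allocator) !void {
    // single-item pointer: create/destroy
    const p = try allocator.create(Point);
    defer allocator.destroy(p);
    p.* = Point{ .x = 1, .y = 2 };

    // slice: alloc/free
    const many = try allocator.alloc(Point, 10);
    defer allocator.free(many);
}
```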
schme245 has quit [Remote host closed the connection]
schme245 has joined #zig
schme245 has quit [Remote host closed the connection]
gamester has joined #zig
schme245 has joined #zig
schme245 has quit [Remote host closed the connection]
jevinskie has joined #zig
darithorn has quit [Remote host closed the connection]
<mikdusan> given struct ‘Foo’ that has field ‘code: [10]u8,’ how to get the size of ‘code’? i can hardcode a const def in the struct, but is there something convenient like @sizeOf(Foo.code)?
<andrewrk> code.len
<andrewrk> that's if you have an instance
<mikdusan> i need it while initializing the struct

<mikdusan> so: .code = []u8{0} ** __SIZE_OF(Foo field code)
<andrewrk> I recommend a def, but if you don't want to do that you can use @sizeOf(std.meta.fieldInfo(Foo, "code").field_type)
<mikdusan> 👍
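Both options from the exchange above, sketched together: the recommended const def inside the struct, and the `std.meta.fieldInfo` route that needs no instance (API names as given in the conversation; they may differ in other Zig versions):

```zig
const std = @import("std");

const Foo = struct {
    // recommended: a const def the initializer can reference
    pub const code_len = 10;
    code: [code_len]u8,
};

// without the def, via std.meta:
const code_size = @sizeOf(std.meta.fieldInfo(Foo, "code").field_type);

test "initialize using the def" {
    var foo = Foo{ .code = []u8{0} ** Foo.code_len };
    std.debug.assert(foo.code.len == 10);
}
```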
Ichorio has quit [Ping timeout: 246 seconds]
darithorn has joined #zig
ganderzz has joined #zig
<ganderzz> tiehuis: Shoot, I'll look into that. Thanks!
ganderzz has quit [Client Quit]
fsateler has quit [Quit: ZNC 1.7.2+deb1 - https://znc.in]
fsateler has joined #zig
darithorn has quit [Remote host closed the connection]
<andrewrk> alright that merge should fix the CI of master branch
<mikdusan> re: last commit comment; I honestly read “ziglang/caching” as “ziglang ka-ching!”
<andrewrk> zig build has full caching now
<andrewrk> well, at least for zig objects, libraries, and executables
schme245 has joined #zig
<emekankurumeh[m]> nice!
very-mediocre has joined #zig
schme245 has quit [Ping timeout: 255 seconds]
<emekankurumeh[m]> andrewrk: builds on mingw are broken, i'm getting linker errors about undefined references to symbols from z3.
jjido has joined #zig
tiehuis has quit [Quit: Bye]
<emekankurumeh[m]> nevermind, i fixed it.
<emekankurumeh[m]> andrewrk: does windows even have the concept of dynamic linkers?
very-mediocre has quit [Ping timeout: 256 seconds]
<dewf> do you mean something other than linking against or runtime loading a DLL?
jjido has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Ichorio has joined #zig
jjido has joined #zig
jjido has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
jjido has joined #zig
aiwakura has quit [Quit: Ping timeout (120 seconds)]
aiwakura has joined #zig
Zaab1t has joined #zig
Amsel has joined #zig
jjido has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Amsel has quit [Quit: Page closed]
dewf has quit [Quit: Leaving]
_whitelogger has joined #zig
Ichorio_ has joined #zig
Ichorio has quit [Ping timeout: 252 seconds]
very-mediocre has joined #zig
Dennis__ has joined #zig
very-mediocre has quit [Ping timeout: 256 seconds]
schme245 has joined #zig
<andrewrk> emekankurumeh[m], maybe in the context of mingw/cygwin? I'm not sure
develonepi3 has quit [Quit: Leaving]
schme245 has quit [Remote host closed the connection]
darithorn has joined #zig
<kristate_> andrewrk: did you see the man page for macos re: mtime?
<kristate_> andrewrk: Variants which make use of the newer structures have their symbols suffixed with $INODE64. These $INODE64 suffixes are automatically appended by the compiler tool-chain and should not be used directly.
<kristate_> I will take a look at the new caching implementation
<andrewrk> we're already doing the inode64 thing in os.cpp
<andrewrk> it happens automatically in C
<kristate_> yeah, this is stage1
<kristate_> did some tests, seems like HFS partitions are rounding to nearest second -- APFS is working
<mikdusan> andrewrk: re: recent commit, macos HFS+ has a mtime resolution of only 1 second. the newer APFS apparently has 1 ns resolution
<kristate_> andrewrk: shouldn't we check hashes if mtime is the same?
<andrewrk> kristate_, the whole point of mtime is to avoid the cost of hashing the file
<kristate_> andrewrk: right -- if mtime is the same, then we could revert to hashing
<andrewrk> that doesn't avoid the cost of hashing the file on cache hit
<kristate_> andrewrk: ah, to clarify, we could do it only if tv_nsec was 0
<andrewrk> that's an interesting idea
<kristate_> do you want to write the patch or should I? :-)
mikdusan has left #zig [#zig]
<andrewrk> I want to consider it more before a patch
<andrewrk> it wouldn't be strictly an improvement - there are pros and cons
<kristate_> andrewrk: I think it is an improvement where it comes to correctness.
<kristate_> sure, we have a speed penalty, but right now as it stands, the caching sub-system is incorrect on macos for HFS partitions
<andrewrk> yes exactly, and it is worse in terms of performance for the use case where source files will be modified by users
<andrewrk> even nanoseconds is technically incorrect
<andrewrk> "strictly an improvement" means it is categorically better in all use cases
<andrewrk> when something is strictly an improvement, a patch is always welcome. otherwise, sometimes, more deliberation is required
<kristate_> andrewrk: well, in that case we should remove mtime.
<kristate_> Also, I am wondering what your list of pros/cons are?
scientes has joined #zig
Akuli has joined #zig
return0e_ has quit [Remote host closed the connection]
return0e has joined #zig
<emekankurumeh[m]> andrewrk, I don't think so. I've never heard of a dynamic linker for mingw.
Dennis__ has quit [Ping timeout: 256 seconds]
schme245 has joined #zig
mikdusan has joined #zig
<Akuli> how do i loop over a HashMap so that i destroy each key-value pair in the loop? the hash function must not be called on destroyed keys
develonepi3 has joined #zig
<Akuli> i'm thinking of: var it = hm.iterator(); while (it.next()) |kv| { _ = try hm.remove(kv.key); destroy the key-value pair; }
<Akuli> but that seems to be quite inefficient, looking at the implementation of remove()
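One way to avoid a `remove()` per key, sketched under the assumption that the map's values own heap allocations and the whole map is being torn down (the HashMap API varies between Zig versions, so treat the names as approximate):

```zig
const std = @import("std");

// free every entry's owned value in one pass over the table, then
// deinit the map wholesale -- no per-key remove(), and the hash
// function is never called on an already-destroyed key
fn freeAll(allocator: *std.mem.Allocator, hm: *std.AutoHashMap(u32, []u8)) void {
    var it = hm.iterator();
    while (it.next()) |kv| {
        allocator.free(kv.value);
    }
    hm.deinit();
}
```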
darithorn has quit [Remote host closed the connection]
companion_cube has joined #zig
jevinskie has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Zaab1t has quit [Quit: bye bye friends]
darithorn has joined #zig
scientes has quit [Ping timeout: 244 seconds]
scientes has joined #zig
scientes has quit [Remote host closed the connection]
scientes has joined #zig
scientes has quit [Ping timeout: 240 seconds]
scientes has joined #zig
marmotini_ has joined #zig
nore has quit [Ping timeout: 252 seconds]
affinespaces has quit [Ping timeout: 252 seconds]
affinespaces has joined #zig
schme245 has quit [Remote host closed the connection]
nore has joined #zig
schme245 has joined #zig
schme245 has quit [Remote host closed the connection]
jjido has joined #zig
marmotini_ has quit [Ping timeout: 250 seconds]
shawn_ has joined #zig
scientes has quit [Ping timeout: 245 seconds]
develonepi3 has quit [Remote host closed the connection]
develonepi3 has joined #zig
shawn_ is now known as scientes
Akuli has quit [Remote host closed the connection]
dewf has joined #zig
schme245 has joined #zig
schme245 has quit [Remote host closed the connection]
schme245 has joined #zig
schme245 has quit [Remote host closed the connection]
darithorn has quit [Remote host closed the connection]
redj has joined #zig
<andrewrk> I think I can make the caching perfect, even when the file system only has resolution down to seconds, without giving up skipping the expensive hash in the common cases
<andrewrk> after fstating a file and collecting its mtime, also look at the clock time, rounded down to the same resolution as the mtime. if they match, this file is too "fresh" and the mtime cannot be relied on - a future write could have the same mtime. it will have to be hashed next time as well as this time.
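The freshness rule described above reduces to one comparison: truncate both the file's mtime and the current clock to the filesystem's timestamp resolution, and if they land in the same tick the mtime cannot be trusted (a later write could reuse it), so the cache falls back to hashing. A self-contained sketch with hypothetical names:

```zig
const std = @import("std");

// true if mtime and "now" fall in the same filesystem timestamp tick,
// i.e. the file is too fresh for its mtime to be relied upon
fn tooFresh(mtime_ns: i64, now_ns: i64, resolution_ns: i64) bool {
    return @divTrunc(mtime_ns, resolution_ns) == @divTrunc(now_ns, resolution_ns);
}

test "HFS+ style one-second resolution" {
    const sec: i64 = 1000000000;
    // written and checked within the same second: must hash
    std.debug.assert(tooFresh(5 * sec + 123, 5 * sec + 999, sec));
    // a full second has passed: mtime is reliable
    std.debug.assert(!tooFresh(4 * sec, 5 * sec, sec));
}
```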
wootehfoot has joined #zig
schme245 has joined #zig
Ichorio_ has quit [Ping timeout: 244 seconds]
shawn_ has joined #zig
scientes has quit [Ping timeout: 244 seconds]
moo has joined #zig
wootehfoot has quit [Ping timeout: 246 seconds]
<emekankurumeh[m]> i don't know if it's zig's fault or lld's fault but lld is segfaulting whenever zig tries to link executables.
schme245 has quit [Remote host closed the connection]