ChanServ changed the topic of #zig to: zig programming language | https://ziglang.org | be excellent to each other | channel logs: https://irclog.whitequark.org/zig/
_Vi has quit [Ping timeout: 256 seconds]
___ has joined #zig
___ has quit [Client Quit]
___ has joined #zig
___ has quit [Client Quit]
___ has joined #zig
metaleap has quit [Ping timeout: 265 seconds]
___ has quit [Client Quit]
___ has joined #zig
___ has quit [Client Quit]
metaleap has joined #zig
geemili6 has joined #zig
metaleap has quit [Quit: Leaving]
geemili6 has quit [Remote host closed the connection]
seoushi has quit [Quit: WeeChat 2.7]
<pixelherodev> Up to line 1267 (from ~850 last night), and rapidly counting :)
<pixelherodev> Sorry, did I say 1267? I meant *3336*!
<pixelherodev> Can now lex ~5% of its own IR :D
<pixelherodev> Okay, I'm going to just stop bragging here, because it's up to 14,750 and I think at this point I should just wait ~10 minutes and then brag that it's fully functional :)
_whitelogger has joined #zig
<jaredmm> It has been more than 10 minutes.
dddddd has quit [Ping timeout: 256 seconds]
waleee-cl has quit [Quit: Connection closed for inactivity]
<pixelherodev> And it's now up to 44,097
<pixelherodev> Out of ~60K
<andrewrk> what is this?
<pixelherodev> LLVM lexer :)
<andrewrk> are you self-hosting something?
<pixelherodev> Testing it on itself
<pixelherodev> Via `zig build-exe --emit llvm-ir`
<pixelherodev> Then running `zig build run ./test.ll`
<pixelherodev> For now, I only plan on targeting a toy architecture with it
<pixelherodev> But eventually, the goal is to add amd64/ARM/RISC-V/etc support
<andrewrk> that's quite a project!
<pixelherodev> Yeah
<pixelherodev> First time, I was able to get hello world working
<pixelherodev> But it was exceedingly fragile and unmaintainable
<pixelherodev> So I redesigned it, wrote the test program that uses the API and had all API functions generate errors, and then started implementing
<pixelherodev> Just about done with the lexer
<wilsonk> Hello all, just watching the stream and there was a question asked by veggiecode about the profit margin on the merch sold via the Zig teespring shop. There is approximately a 50% profit on each item sold (so $10 per $20 sold is profit for Andrew). Just thought I would let everybody know. If anyone knows veggiecode and can get them this answer, that would be great. Also, if anyone else has any questions you can certainly DM me here. Happy
<pixelherodev> Also, this version is tested on *itself* instead of a hello world program
<pixelherodev> Which *forces* proper lexing on a scale that a hello world doesn't
<pixelherodev> I also plan on testing it on at least five random Zig projects from GitHub before moving on to the backend
decentpenguin has joined #zig
decentpenguin has quit [Client Quit]
<pixelherodev> The *only* remaining "hack" is that it assumes any string matching `DW_` (after testing all other known token types) that contains only alphanumerics and underscores is a valid token for metadata
<pixelherodev> And long term it's tedious but simple to remove that and insert all the actual values
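The fallback described above amounts to a small predicate. A sketch in Python (the helper name and sample strings are illustrative, not from the project):

```python
import string

# Sketch of the hack: after every known token kind has been ruled out,
# accept any string starting with "DW_" that contains only alphanumerics
# and underscores as a valid debug-metadata token.
ALLOWED = set(string.ascii_letters + string.digits + "_")

def is_metadata_token(s):
    return s.startswith("DW_") and set(s) <= ALLOWED

assert is_metadata_token("DW_TAG_member")
assert not is_metadata_token("DW_OP+plus")   # '+' is not allowed
```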
<wilsonk> pixelherodev: is that on github?
<pixelherodev> Not yet
<pixelherodev> The original version is on SourceHut
<pixelherodev> This one will be once the lexer's done
<wilsonk> ah, ok cool
ur5us has joined #zig
<pixelherodev> Okay, *now* the lexer is fully working
<pixelherodev> Closer to sixty minutes than ten, but still, got the support up from 800 lines to >60K in a relatively short time
<wilsonk> nice
<pixelherodev> Of course, now I need another refactor to fix a major performance issue caused by allocating memory for *every single token*'s source string instead of just storing indices into the original file
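The fix being described, sketched in Python (the toy source line and helper name are invented): instead of copying each token's text into its own allocation, a token stores only (start, end) offsets into the original buffer, and the text is recovered by slicing on demand.

```python
# Offset-based tokens: no per-token allocation, just two indices each.
source = "define i32 @main()"

def lex_offsets(src):
    tokens, i = [], 0
    while i < len(src):
        if src[i].isspace():
            i += 1
            continue
        start = i
        while i < len(src) and not src[i].isspace():
            i += 1
        tokens.append((start, i))   # offsets only, no string copy
    return tokens

tokens = lex_offsets(source)
assert [source[a:b] for a, b in tokens] == ["define", "i32", "@main()"]
```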
<pixelherodev> After regenerating its IR after tonight's work, its IR has grown from ~60K lines to ~90K lines, but no changes were needed for lexing that to succeed :)
seoushi has joined #zig
<pixelherodev> Using a bufferedinstream improves performance by ~70%, from ~6s to ~2s, but I want to get that down to less than 1s before I publish this...
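Why the buffered stream helps can be shown with a counting wrapper (sizes here are illustrative): buffering turns many one-byte reads into a handful of large underlying reads.

```python
import io

class CountingRaw(io.RawIOBase):
    """Raw stream that counts how many underlying reads it services."""
    def __init__(self, data):
        self.data, self.pos, self.reads = data, 0, 0
    def readable(self):
        return True
    def readinto(self, b):
        self.reads += 1
        chunk = self.data[self.pos:self.pos + len(b)]
        b[:len(chunk)] = chunk
        self.pos += len(chunk)
        return len(chunk)

data = b"x" * 65536
raw = CountingRaw(data)
buffered = io.BufferedReader(raw, buffer_size=8192)
while buffered.read(1):        # lexer-style byte-at-a-time consumption
    pass
assert raw.reads <= 10         # ~9 underlying reads instead of 65536
```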
<andrewrk> total shot in the dark, but maybe you can take advantage of SIMD
<pixelherodev> Maybe, but there's some low hanging fruit for now
<pixelherodev> For instance, ~15% of program time is spent reallocating the token ArrayList
<pixelherodev> Starting with a high capacity (~10K) should offset that
<andrewrk> or using less memory in general
<pixelherodev> True, but that wouldn't help enough; it's already pretty low
<pixelherodev> Or, to put it differently, I can't reduce token quantity
<pixelherodev> Though I might be able to reduce token *size*
<pixelherodev> Can probably remove a usize (by replacing line_number + column with index), which will improve performance when given valid input and slow things down when invalid input is received
<pixelherodev> Given that there's 800K tokens, that's definitely worth it
<pixelherodev> That reduces RAM usage by ~3.2MB, but doesn't help with the *frequency* of reallocations
<pixelherodev> ArrayList growth simply isn't meant for this
<pixelherodev> Might submit a PR shortly allowing for comptime tweaking of ArrayList growth
<pixelherodev> e.g. ArrayList(T, .Linear) or ArrayList(T, .Exponential) would be neat
<pixelherodev> Linear would retain the current strategy, providing low RAM usage and good performance for small lists
<pixelherodev> Exponential would, as the name implies, double/triple in size whenever increasing capacity, offering better performance for large lists but with unneeded capacity more often
<seoushi> for my c data structures library all of my things that can grow take a function pointer like fn(currentSize, newMinSize) newSize
<pixelherodev> Yeah, improving the strategy - or more accurately, cheating and initializing with a capacity of 1M tokens - reduces time by around 0.1ms
<seoushi> another difference from my std to zig's is there were two ways to create structs. a standard create which will do a malloc then an init, or just an init. So you can pass in a pointer to a block of memory so you can lay out things how you want for cache improvements etc. Seems like that is missing from the zig std tho I might be overlooking something.
<pixelherodev> Simply rearranging the order in which tokens are tested to account for which tokens are most frequent will probably provide a nice boost also; a laaarge portion of time is wasted comparing against strings which don't match
<pixelherodev> std.mem.eql makes up 25% of program time :P
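The reordering idea from the last two lines, as a sketch (the token names and frequencies are made up): when candidates are tested against a fixed list one by one, putting the most frequent tokens first minimizes the expected number of string comparisons.

```python
# Invented frequency table for illustration only.
freq = {"i32": 500, "add": 300, "call": 150, "define": 40, "target": 10}

def expected_comparisons(order, freq):
    """Average number of equality tests per lookup for a given test order."""
    total = sum(freq.values())
    return sum(freq[tok] * (i + 1) for i, tok in enumerate(order)) / total

alphabetical = sorted(freq)
by_frequency = sorted(freq, key=freq.get, reverse=True)
assert expected_comparisons(by_frequency, freq) < expected_comparisons(alphabetical, freq)
```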
<seoushi> how are you profiling this btw?
<pixelherodev> valgrind
<pixelherodev> Well, callgrind
<seoushi> haven't messed with callgrind. will check it.
<pixelherodev> Then, `gprof2dot --format=callgrind --output=tmp.dot callgrind.out*;dot -T png tmp.dot -o callgrind.png;rm tmp.dot callgrind.out.* -f;$IMAGE_EDITOR callgrind.png;rm -f callgrind.png`
BaroqueLarouche has quit [Quit: Connection closed for inactivity]
<seoushi> cool.. added that to my notes.
<pixelherodev> Or, now that the image has grown a lot bigger, `kcachegrind callgrind.out.* ;rm callgrind.out.* -f`
<pixelherodev> KCacheGrind is meant for KDE, but it's also the only interactive program I know of for visualizing callgrind output
<seoushi> what about checking for memory leaks? I think I remember hearing something about a special allocator you can use or something?
<pixelherodev> Normally I use valgrind
<pixelherodev> But valgrind doesn't work with Zig *for me*
<pixelherodev> Might be out of date
<pixelherodev> Well, it's not
<pixelherodev> Might be missing a patch or something though
<seoushi> yeah I usually use valgrind too but it doesn't work for me.. I think because zig doesn't use malloc in the same way and valgrind just basically replaces malloc
<pixelherodev> Should work with std.heap.c_allocator though
<pixelherodev> ... oh, maybe I should switch to that and see if that helps perf
<pixelherodev> Yep!
<seoushi> nice
<pixelherodev> Brings down time from 2.5 to ~1.9s
<pixelherodev> Given that this is *without* the actual parsing + optimization + codegen phases, that's still far far too large
<seoushi> are you writing your own language then I take it?
<pixelherodev> Nah
<pixelherodev> Hoping to make a zig backend for LIMN and z80
<seoushi> oh ok. that's cool
<pixelherodev> (LIMN is a toy 32-bit CPU, z80 is an 8-bit CPU used in e.g. TI calculators)
<seoushi> I knew of the z80. not lumn tho
<seoushi> limn
<pixelherodev> Not surprised; AFAIK I'm one of two people to actually use it, and the other is the creator :P
<pixelherodev> It's a nice arch for playing around with kernel concepts because it doesn't have the hardware complexity of modern CPUs, but of the two implementations (the original in Lua, and my C interpreter) there's one that's fastish (mine) and one that actually matches the up-to-date spec (not mine) :P
<seoushi> maybe sometime in the future I will make something similar to 0x10c (a game where you program a space ship to do things for you). I wrote a partial compiler a long while ago
<pixelherodev> I think this is good enough for the first public release *speed* wise (though I do want to improve on it a lot), but I'm going to clean up the code a bit so I can willingly admit to having written this :)
<seoushi> yeah, it's very easy to get caught up with performance. I have to stop myself and just write backlog tickets otherwise no visible progress will be made hah
<pixelherodev> I mean, I already improved it by about 71% in ~30 minutes, so I'm happy
<pixelherodev> 6.6 -> 1.9 is not bad at all
<seoushi> yeah for sure
<andrewrk> pixelherodev, huh, I thought it was already exponential, giving an amortized O(1) time
<andrewrk> pixelherodev, yes it's already exponential
<andrewrk> 1.5 growth factor
<andrewrk> it *is* meant for that kind of growth
<pixelherodev> No it's not
<seoushi> better_capacity += better_capacity / 2 + 8;
<pixelherodev> It's `self.capacity() / 2 + 8`
<pixelherodev> ... oh wait, `+= `
* pixelherodev facepalms
<andrewrk> that's 1.5x + constant
<pixelherodev> Yeah okay, I see that now
<pixelherodev> That's just still too slow for this because it starts at the default and grows to nearly 1 million
<andrewrk> check that assumption - I'm pretty sure it gets to 1 million in surprisingly small reallocations
<shachaf> Do Zig allocators have an interface like malloc_usable_size that tells you the real allocated size when you request some number of bytes?
<andrewrk> s/small/few/
<seoushi> @align ?
<shachaf> I mean, when you malloc(n), malloc might give you n+k bytes instead because it prefers certain sizes of allocations.
<shachaf> And a thing like a dynamic array can just use that extra space if it's there.
<andrewrk> 1.5x + 8 => over 1 million in 28 steps
<pixelherodev> 28 steps - yeah
<andrewrk> 2x => over 1 million in 20 steps
<pixelherodev> Did a quick script to check
<pixelherodev> That's not bad at all
<andrewrk> oh, I did it manually in python repl :)
<pixelherodev> I did the exact same :P
<pixelherodev> I was inspired by your stream earlier
<pixelherodev> `a = 0;b = 0;while (a < 1000000):a = a*1.5 + 8;b = b + 1` then inspect b (but w/ proper formatting and separation ofc)
<andrewrk> shachaf, no but the allocator interface has both "realloc" and "shrink" that have important different properties
<shachaf> i,i length . takeWhile (<=1e6) . iterate (\x -> x*1.5+8) $ 1
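The arithmetic from this exchange, written out as a runnable version of the loop quoted above:

```python
# How many reallocations until capacity exceeds 1,000,000, for the two
# growth rules discussed: 1.5x + 8 (ArrayList's current rule) vs plain 2x.
def steps_to(target, grow, cap=0):
    steps = 0
    while cap <= target:
        cap = grow(cap)
        steps += 1
    return steps

assert steps_to(1_000_000, lambda c: c * 1.5 + 8) == 28   # 1.5x + 8
assert steps_to(1_000_000, lambda c: c * 2, cap=1) == 20  # doubling
```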
<andrewrk> when an array list attempts to grow larger, the allocator can take advantage of its extra bytes. so that's no problem. but for shrinking:
<andrewrk> it calls "realloc" rather than "shrink" which is allowed to fail *even when shrinking*. so ArrayList handles this failure by hanging onto the additional capacity
<pixelherodev> What's the default enum tag?
<andrewrk> I considered this carefully and believe it to fully address the use case you are hinting at, but open minded to suggestions
<pixelherodev> That is, what's the size of an untagged enum?
<andrewrk> 0, same as comptime_int
<shachaf> There are two things you might want to do that realloc doesn't let you do.
<pixelherodev> Wait what?
<pixelherodev> I mean, if I store that enum in a structure
<shachaf> One is ask realloc how much you're allowed to grow without moving the existing allocation.
<pixelherodev> How much space does it require?
<andrewrk> pixelherodev, at runtime, 0 bytes
<pixelherodev> That makes no sense whatsoever
<pixelherodev> Because this is running at runtime, and the value is completely correct...
<shachaf> The other is ask how much "wasted space" is at the end of your allocation which is accounted to it anyway, that you can use even without asking (this is malloc_usable_size in glibc or _msize on Windows).
<andrewrk> an anonymous enum literal must be comptime known. therefore 0 bytes at runtime are needed for it
<andrewrk> pixelherodev, it probably got casted to an actual enum which has a nonzero size
<pixelherodev> ... oookay
<pixelherodev> So if I have `const a = enum{ A, B, C}; const b = struct { field: a };`, what's the size of b?
<andrewrk> sorry I thought you were specifically asking about the anon enum literal type. e.g. `@sizeOf(@TypeOf(.foo))` (try it)
<pixelherodev> Or, rather, the size of field
<pixelherodev> Nah it's fine, I realized the confusion and thus rephrased the question :)
<pixelherodev> Optional bool I'm guessing is two bits (and thus one byte), so size of one token is now 25 + N (where N is the size of that field) bytes
<pixelherodev> My guess is either 26 (minimal possible) or 33 (default to usize)
<andrewrk> optional bool is currently 2 bytes
<pixelherodev> Ah
<pixelherodev> Then size will probably go down from 27MB to 19MB with the next optimization, which is still kind of insane for something this simple...
<andrewrk> one proposal that I haven't made explicit, would be to essentially make the "non_null_bit" field of an optional, align(0)
<andrewrk> that would allow optional bools to be 1 bit
<andrewrk> *1 byte
<pixelherodev> I'm thinking it might be best to *stream* the tokens...
<pixelherodev> Maybe something like making the parser `async`
<pixelherodev> And then consuming tokens on an as-available basis
<pixelherodev> That should reduce peak memory consumption, but it still needs to lex them all, so at the moment it'd still take the same amount of time...
<andrewrk> shachaf, it would be interesting, in the case of ArrayList, to know how much capacity it could take up without requiring a reallocation. but it would be a bit of a feedback loop. e.g. the allocator chooses an allocation strategy based on how much ArrayList wants. ArrayList chooses how much it wants based on how much Allocator has available... at somepoint, somebody has to make a decision
<andrewrk> pixelherodev, you know about @compileLog, @TypeOf, and @sizeOf, yeah?
<andrewrk> and comptime{} top level blocks
<shachaf> andrewrk: You can have a realloc variant that fails if it can't grow in-place.
<pixelherodev> Yeah... but I also know that the current sizes aren't necessarily going to remain (as in the case of `?bool`)
<pixelherodev> Was more interested in what the size is *intended* to be than what it is in the current compiler
<pixelherodev> Thinking a bit more long term here :)
ur5us has quit [Ping timeout: 256 seconds]
<shachaf> Then the ArrayList can request linear growth in-place and exponential growth if it requires moving.
<andrewrk> shachaf, how would ArrayList take advantage of that? in the append() function?
<andrewrk> (or any user of the allocator, nevermind ArrayList)
<andrewrk> ah
<shachaf> Yep, when you need to grow the memory block, you can just try to grow it by a fixed size in-place at first.
<andrewrk> that's a neat idea
<shachaf> You only need to worry about amortizing when you copy memory.
<andrewrk> shachaf, really, what ArrayList wants, is to ask for a "minimum" amount of memory, and possibly be given extra
<pixelherodev> C allocator would have to refuse to give information
<pixelherodev> So maybe have a comptime allocator function that says whether or not it can give you that info?
<pixelherodev> That way, an ArrayList can at comptime decide whether to try doing that
<fengb> Maybe the allocator interface can return the “block size”
<andrewrk> allocAllowExtra()
<fengb> And then we can expose a few more functions to use that extra space
<shachaf> I mean, how much extra is something that ArrayList knows and the allocator doesn't.
<fengb> Allocator details can toss in extra space just because
<andrewrk> shachaf, I mean that ArrayList could ask for, e.g. 100 bytes, and receive 200. ArrayList can handle this perfectly; set capacity to 200
<shachaf> Sure, but some people really only need 100 and have no use for the other 100.
<fengb> ZeeAlloc only allows perfectly aligned 2^n memory chunks so I’ve been lying by giving back exact amounts
<shachaf> I guess you can have an interface where you specify a minimum and a maximum, and you want the minimum in case of in-place growth and the maximum otherwise.
<andrewrk> this could be implemented simply by relaxing the restriction that an allocator implementation of realloc() must return a slice with the same length as requested
<andrewrk> it would mean allowing to return a larger slice
<shachaf> fengb: Right, that's the thing I meant by malloc_usable_size() (which is a different but related issue).
<andrewrk> shachaf, the whole point here, is that the allocator allocates the requested size, but might have "free" "extra bytes" right?
<andrewrk> the allocator interface can always pretend it got back fewer bytes than the implementation actually returned
<shachaf> There are two different cases: One where the allocator has extra bytes that it's not going to use anyway, and one where it might be able to use them for other allocations.
<pixelherodev> Comptime == <3
<andrewrk> shachaf, I argue that second case does not exist. remember that the API user has requested a specific size
<shachaf> The second case is the classic use case for realloc.
<shachaf> Where you just happen to have free address space after the allocation, for whatever reason (it's a classic sbrk-based allocator and it was your last allocation, or whatever).
<pixelherodev> Proposal: ArrayList growth should be `if (current_cap < 100K) existing else 2 * current_cap`?
<pixelherodev> That is, 1.5 * current + 8 until it hits some threshold at which point it increases to 2 * current?
<pixelherodev> That only reduces steps for my case to 26, but I'm also thinking more generally than my case
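Checking that proposal with the integer arithmetic ArrayList actually uses (`cap += cap / 2 + 8`) below the threshold, and doubling above it:

```python
def hybrid_steps(target, threshold=100_000):
    """Reallocations until capacity exceeds target under the hybrid rule."""
    cap = steps = 0
    while cap <= target:
        cap = cap + cap // 2 + 8 if cap < threshold else cap * 2
        steps += 1
    return steps

assert hybrid_steps(1_000_000) == 26   # vs. 28 for the unmodified rule
```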
<andrewrk> I suspect this is not the area to focus on for perf gains for your project
<pixelherodev> Definitely not
<pixelherodev> I think that might be generally useful though
<shachaf> If your array is going to grow a lot you can just start it off bigger.
<pixelherodev> I already do
<pixelherodev> Problem is, that's not a really good solution
<pixelherodev> A simple hello world can probably get by in <5K tokens
<pixelherodev> The lexer right now is ~800K
<shachaf> In some cases it is? Address space is pretty cheap.
<pixelherodev> A larger program would likely be much much larger
<pixelherodev> Problem is, there's no number that really avoids the problem
<pixelherodev> The growth rate is huge
<pixelherodev> If I go too small, large compilations suffer; too large, startup time becomes larger
<seoushi> can you base it off the size of the file(s)?
<pixelherodev> I think that going large is better though, because a single allocation of 1M tokens is trivial, but a dozen reallocations is costly
<pixelherodev> That's not a bad idea
<pixelherodev> For the sake of performance, I'm requiring the input stream to be a file anyways
<pixelherodev> As anything else is terrible perf-wise
<pixelherodev> (in terms of altered design requirements)
<pixelherodev> The IR for the lexer is 6MB, so doing `file size / 6` is probably okay
<pixelherodev> Might be better to get a bit more data (test average token size in a bunch of projects and use that as the base line)
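The file-size heuristic as a sketch (the function name is hypothetical): a 6 MB IR file needed capacity for ~800K tokens, so dividing the byte count by 6 overshoots slightly, which is fine for a capacity hint.

```python
def initial_capacity(file_size_bytes, bytes_per_token=6):
    """Guess the initial token-array capacity from the input file size."""
    return file_size_bytes // bytes_per_token

assert initial_capacity(6_000_000) == 1_000_000   # a safe overestimate of ~800K
```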
<shakesoda> for the case of `[_][3]foo { [_]foo{ 1, 2, 3 }, [_]foo{ 1, 2, 3 } }`, can zig infer the elements or must i write the type out so many times
<shakesoda> it's already written on the outside, so it seems kind of extra
<shakesoda> oh, i can do this with anonymous lists? i was looking at the multidimensional array section
<shakesoda> thanks
<pixelherodev> There a way to reduce slice length type from usize?
<pixelherodev> e.g. if I have a buffer of len 255, is it possible to have `buf.len` be of type u8 without casting it?
<pixelherodev> Nah, that'd be too easy :P
<seoushi> I was wondering that as well. I was going to test that out by using the range syntax.. items[0..10] and see what @size prints
<seoushi> oh nm I misread that
<pixelherodev> ...hah! The reason it's taking so long? This is in debug mode!
<pixelherodev> In release mode, it now takes ~0.4 seconds
<seoushi> lol
<shakesoda> :)
<pixelherodev> On the bright side, I probably wouldn't have release speeds this low if I hadn't been annoyed by perf in debug mode :)
<shakesoda> debug perf matters, regardless
<pixelherodev> True
<shakesoda> can't have your program driving you insane while making it
<pixelherodev> Also, the page_allocator is sufficiently fast now :)
<pixelherodev> Now that the capacity is defaulted to a large number
<pixelherodev> Instead of taking 28 reallocs
<shakesoda> andrewrk: zig tetris makes for great example code (fun, too)
knebulae has quit [Read error: Connection reset by peer]
<daurnimator> mq32: pacman groups are generally seen as a bad idea. we're moving to metapackages
<daurnimator> shakesoda: ooo, idea for deprecation: `std.meta.deprecated` that uses a build-time option to either be a @compileError or just allowed.
<pixelherodev> How does that work?
<pixelherodev> Example usage?
<daurnimator> pixelherodev: for a function you could wrap it? that would be easier with functions-as-expressions. or perhaps on the return value?
<daurnimator> Could also have only certain code paths in a function deprecated
seoushi has quit [Ping timeout: 240 seconds]
<andrewrk> shakesoda, one thing interesting about the tetris codebase is that it worked a long, long time ago, early into zig's development cycle, and has basically remained unchanged
<pixelherodev> daurnimator, ah, you mean that you just call `std.meta.deprecated();` to mark a path as, well, deprecated?
<pixelherodev> I like that
<daurnimator> pixelherodev: yep
<daurnimator> possibly passing a reason/comment
<andrewrk> daurnimator, neat idea!
<daurnimator> I was thinking it would be: `pub fn deprecated(val: var, comptime reason: []const u8) @typeOf(val) { if (!build_options.allow_deprecated) @compileError("deprecated: " ++ reason); return val; }`
<daurnimator> or actually just @compileLog
<pixelherodev> What would be neat is if you could make it non-fatal
<pixelherodev> So it prints the deprecation message even if it compiles
<daurnimator> pixelherodev: zig doesn't allow that
<pixelherodev> I know
<pixelherodev> That's what I was saying
<pixelherodev> Does the hashmap type work at comptime? Had an idea
<daurnimator> no
<daurnimator> allocators currently can't work at comptime
<pixelherodev> Right, that's what I thought
<pixelherodev> Whelp, I won't be able to automate this refactor...
<andrewrk> non-fatal messages are inherently problematic, especially when you consider caching
<pixelherodev> I mean, you could cache the messages?
<pixelherodev> But yeah I see your point
<daurnimator> I like the idea that people would have to explicitly ask to use deprecated functions
<pixelherodev> What would be nicer was if it was on a per-function / per-module basis...
<daurnimator> Yeah you could do that
<pixelherodev> Huh, yeah; could require a comptime block that accepts a specific deprecation string
<pixelherodev> e.g. `comptime { std.meta.allow_deprecated("std"); } `, and in zag code, it'd do `std.meta.deprecated("std", "deprecation message");`
<daurnimator> var allowed = false; inline for (mem.separate(u8, build_options.allow_deprecated, ",")) |option| if (option == @typeName(val)) allowed = true; } if (!allowed) @compileError("deprecated: " ++ reason ++ ", pass -Dallow_deprecated=" ++ @typeName(val) ++ " to ignore."); return val; }
<pixelherodev> Doesn't mem.separate use an allocat - oh wait no, it uses an iterator
<pixelherodev> Are `.{}` comptime known?
<daurnimator> yeah that was a rough approximate. you'd need to use mem.eql too
<pixelherodev> I figured
<pixelherodev> `comptime magics = .{ "target datalayout", "target triple", "source_filename" };` gives "undefined value causes undefined behavior" :(
<pixelherodev> Going to try updating Zig and hoping this is a bug that's been fixed alread
<pixelherodev> y
<pixelherodev> Whoops, it's the actual *usage* that gives that...
<pixelherodev> but it happens on trunk
<pixelherodev> I can get it working if I change all instances to slices manually, which is a pain but probably necessary to clean this up
<pixelherodev> There a better way to get comptime usable `[]const []const u8` than `([_][]const u8{"","",""})[0..]`?
<daurnimator> &
<pixelherodev> `.{"",""}` freaks out
<pixelherodev> I tried that, going to try it again
<pixelherodev> `Okay, yeah, I put it in the wrong spot the first time
<pixelherodev> `&.{}` still fails
<pixelherodev> But `&[_][]const u8` works fine
<pixelherodev> Oh wait, I can regex this!
<daurnimator> andrewrk: just caught up on stream now
<daurnimator> andrewrk: is https://github.com/ziglang/zig/pull/4304 mergable? do we need to figure out that CI failure?
blueberrypie has quit [Quit: Ping timeout (120 seconds)]
blueberrypie has joined #zig
<andrewrk> does that test pass for you locally?
metaleap has joined #zig
<daurnimator> andrewrk: yes
mahmudov has joined #zig
_Vi has joined #zig
<daurnimator> According to that article AMD Zen supports the PEXT instruction; but it's super slow.
<daurnimator> This would be a reason to check the model name and dispatch to the relevant function; rather than dispatching on features alone
<mikdusan> "up to 289 cycles"
<pixelherodev> Wowzers
<pixelherodev> Why even bother?
<pixelherodev> Why not just, not microcode it?
<pixelherodev> Leave it unsupported?
<mikdusan> botched hardware?
<pixelherodev> The hardware's fine
<pixelherodev> They basically just programmed the CPU to decode PEXT into a bunch of other, better supported instructions
<pixelherodev> On a different note, while embracing comptime might speed this up, compilation speed has gone down the toilet :(
<mikdusan> what if they hardwired it and botched that. then forced to do microcode impl
<pixelherodev> Maybe...
knebulae has joined #zig
dddddd has joined #zig
<betawaffle> is it possible/feasible to link with objective-c and/or swift libraries? how about without some sort of intermediary C code?
<mq32> betawaffle, i don't think it's reasonable to provide links to other languages that don't use C ABI
<betawaffle> yeah, that was my assumption. i don't get why they don't have a C api
<mq32> because it's hard to map language features to C abi
<daurnimator> betawaffle: I've been saying we need good zig<>objective C bindings
<daurnimator> it should actually be a fun project
<betawaffle> what does objective-c's abi look like?
<betawaffle> isn't it just weird naming of symbols or something?
<daurnimator> you need to use the objective C runtime
<betawaffle> uhg
<daurnimator> which apple deleted the docs of a couple of years ago
<daurnimator> so you have to use the wayback machine to look at the docs....
<betawaffle> so stupid
<betawaffle> i wish all OSes had stable syscall APIs
<betawaffle> daurnimator: ok, so just confirming, i can't realistically play with Metal from zig, right?
<daurnimator> windows is an interesting case.... they provide a preloaded dll (sort of like a vdso) that contains stubs that call the actual syscall ABI where the syscall numbers change with every minor release
<daurnimator> betawaffle: I've never looked at Metal before
<daurnimator> that preloaded dll (ntdll) is pretty stable; only really changing at quite major releases.
<daurnimator> and even then the changes are minor
<daurnimator> betawaffle: but if its a normal objective C api... I don't see why you couldn't
<betawaffle> how do you link against objc?
<betawaffle> it's got classes and those weird colon argument things
dddddd has quit [Ping timeout: 260 seconds]
<daurnimator> betawaffle: you start with e.g. objc_getClass
<betawaffle> oh interesting
<daurnimator> betawaffle: you would start by @cImport-ing objc/runtime.h
<betawaffle> ok, this seems doable. thanks!
<betawaffle> does obj-c code actually compile down to invocations of these functions?
mahmudov has quit [Ping timeout: 272 seconds]
<daurnimator> betawaffle: IIRC it goes via a function `objc_msgSend` which is actually inline assembly?
<daurnimator> you might find http://sealiesoftware.com/msg/index.html interesting
<daurnimator> oh you are meant to call it in the dynamic library
<betawaffle> and what do i need in my build.zig to link with the objc runtime library?
<betawaffle> "Objective-C runtime library support functions are implemented in the shared library found at /usr/lib/libobjc.A.dylib."
<daurnimator> betawaffle: I think you just need to link against /usr/lib/libobjc.dylib
<betawaffle> right, i'm not super familiar with build.zig's stuff yet
<daurnimator> I'm not familiar with OSX :P
<daurnimator> betawaffle: I think you might use `linkSystemLibraryName("libobjc")` ?
<daurnimator> uh, without the "lib"
<betawaffle> heh, [*c].cimport:1:20.struct_objc_class
<betawaffle> is there some way i can see what zig code my cImport actually generates?
<daurnimator> betawaffle: sure, just run `zig translate-c yourfile.h`
<daurnimator> betawaffle: alternatively you can poke around in zig-cache...
<betawaffle> found it
<betawaffle> is there a proper way to convert a [*c]something into a ?*something?
<daurnimator> betawaffle: it should coerce with @as
<betawaffle> maybe i need a wrapper to do the type inference...
<daurnimator> betawaffle: hmm? you'll need to share what you're doing/getting
<betawaffle> ok, so `pub const Class = [*c]struct_objc_class;`
<daurnimator> betawaffle: maybe just pastebin the entire output?
<betawaffle> sure, sec
<daurnimator> betawaffle: FWIW I'm bored and killing time. happy to pair with you for 30mins to an hour if you want
<betawaffle> so, i could do ?*struct_objc_class, but that name is kind of an implementation detail
<betawaffle> (and it's generated, so i'm a bit wary to "rely" on it)
<betawaffle> daurnimator: that's basically what i'm doing right now :P
<daurnimator> why does this matter?
<daurnimator> So have you been able to e.g. run `const NSString: objc.class = objc.objc_getClass("NSString");`?
<betawaffle> yeah
<betawaffle> do you think it might be a better idea to take this generated zig code, and just strip it down and fix types by hand?
<daurnimator> no. you shouldn't need to
<betawaffle> or should i just try to work with the [*c] types
<daurnimator> You should be fine just working with the [*c] types.
<daurnimator> you should essentially be none-the-wiser
<betawaffle> can i use a [*c] as if it's an optional pointer? in if () and such?
<daurnimator> I..... think so? though I'm not 100% sure
mahmudov has joined #zig
<metaleap> so you have a StringHashMap(T). when i put(k,v) i get "the old KV". same with remove(k).
<metaleap> now for `remove` i'd clearly `alloc.free` the key and value just removed, that's trivial
<metaleap> same for put BUT not sure about the freeing `KV.key` !
<metaleap> in some cases "old" and "new" key will point to the same addr, in others not.
<metaleap> so i checked and the newkey gets stored and the oldkey returned, for put. i guess caller must compare addrs of newkey/oldkey before freeing the oldkey (that might also be the newkey). luckily not in my current use case :P
<daurnimator> metaleap: yeah. in http/headers.zig I do a .get() and if nothing returned, do a .put()
<metaleap> ack. http/headers.zig looks sufficient to run a zig proggie behind CGI or today's equivalent. sth to play with one of these days
<daurnimator> metaleap: it's not and shouldn't be...
<metaleap> aw
<daurnimator> metaleap: http/headers.zig is just a tiny piece of what should become a proper http client/server
<daurnimator> it just wasn't blocked on any external factors; so I sent the PR for it early
<daurnimator> betawaffle: how are you going?
<betawaffle> good, trying to convert c strings to slices
<daurnimator> mem.toSliceConst ?
<betawaffle> looks like it worked, thanks
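`mem.toSliceConst` was the helper at the time; in later Zig the same conversion is `std.mem.span` (for sentinel-terminated pointers) or `std.mem.sliceTo` (for pointers whose type doesn't carry the sentinel, like `[*c]`). A quick sketch:

```zig
const std = @import("std");

pub fn main() void {
    const c_str: [*:0]const u8 = "hello";
    // Walks to the 0 terminator to compute the length.
    const slice: []const u8 = std.mem.span(c_str);
    std.debug.print("{s} is {} bytes\n", .{ slice, slice.len });
}
```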
mahmudov has quit [Ping timeout: 260 seconds]
mahmudov has joined #zig
qazo has joined #zig
<betawaffle> ooo, it's snowing here
marijnfs_ has joined #zig
marijnfs has joined #zig
marijnfs_ has quit [Ping timeout: 272 seconds]
<daurnimator> betawaffle: I'm thinking you could end up with something like: http://sprunge.us/etuDPy
<betawaffle> neat
<daurnimator> that's all written off the top of my head
<daurnimator> so there's going to be errors :p
<daurnimator> maybe even uncover a few zig bugs/missing features
drp has joined #zig
marijnfs_ has joined #zig
marijnfs has quit [Ping timeout: 256 seconds]
dingenskirchen has quit [Quit: dingenskirchen]
dingenskirchen1 has joined #zig
dingenskirchen1 has quit [Client Quit]
dingenskirchen has joined #zig
meraymond2 has joined #zig
marijnfs has joined #zig
marijnfs_ has quit [Ping timeout: 256 seconds]
marijnfs_ has joined #zig
marijnfs has quit [Ping timeout: 240 seconds]
meraymond2 has quit [Remote host closed the connection]
marijnfs has joined #zig
marijnfs_ has quit [Ping timeout: 260 seconds]
marijnfs_ has joined #zig
marijnfs has quit [Ping timeout: 265 seconds]
marijnfs has joined #zig
marijnfs_ has quit [Ping timeout: 265 seconds]
benjif_ has quit [Quit: Leaving]
marijnfs_ has joined #zig
marijnfs has quit [Ping timeout: 272 seconds]
BaroqueLarouche has joined #zig
dddddd has joined #zig
lunamn has quit [Ping timeout: 256 seconds]
lunamn has joined #zig
marijnfs has joined #zig
marijnfs_ has quit [Ping timeout: 268 seconds]
BaroqueLarouche has quit []
BaroqueLarouche has joined #zig
<BaroqueLarouche> (small test just ignore)
<BaroqueLarouche> good Matrix bridge works again
marijnfs has quit [Ping timeout: 256 seconds]
<pixelherodev> Gah, comptime *really* slows down compilation to an intolerable degree (at least when abused like this)
<pixelherodev> I have a function that takes a comptime string and returns the corresponding token type
<pixelherodev> Has a bunch of if (std.mem.eql)/else branches
<BaroqueLarouche> you should retry it when the comptime will be properly coded and optimized
<pixelherodev> I know, but for now I'm going to just keep pushing the backwards branches quota higher until it compiles again :P
waleee-cl has joined #zig
marijnfs has joined #zig
mahmudov has quit [Ping timeout: 240 seconds]
<metaleap> pixelherodev: so.. a "comptime string" is a comptime-known string. and your std.mem.eqls also happen at comptime? so you're comparing 2 comptime-known string literals per eql, logically?
marijnfs_ has joined #zig
<metaleap> cant you uh enumify these fully-well-known-comptime-strings
mahmudov has joined #zig
marijnfs has quit [Ping timeout: 260 seconds]
<metaleap> at least for passing around and the cmp-ing, then stringify at the last necessary moment or sth+
<pixelherodev> I fo
<pixelherodev> do*
<pixelherodev> Well
<pixelherodev> Okay, better explanation: I have a function LexingIterator.substr_match(comptime str_list: [][]const u8, criteria: AcceptableMatch) which checks if any of the strings in the list match the given criteria
<pixelherodev> If one does, it's passed via a comptime call to a function which returns the token type
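The comptime string-to-token pattern being described looks roughly like this. `TokenType`, `tokenFor`, and the keyword strings are made up for illustration, not pixelherodev's real lexer; the point is that each distinct comptime string instantiates the whole `eql` chain at compile time, which is where the eval-branch-quota pressure comes from:

```zig
const std = @import("std");

const TokenType = enum { kw_define, kw_ret, ident };

fn tokenFor(comptime str: []const u8) TokenType {
    // Each branch is evaluated at compile time for every distinct `str`.
    if (comptime std.mem.eql(u8, str, "define")) return .kw_define;
    if (comptime std.mem.eql(u8, str, "ret")) return .kw_ret;
    return .ident;
}

comptime {
    std.debug.assert(tokenFor("define") == .kw_define);
    std.debug.assert(tokenFor("foo") == .ident);
}
```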
<metaleap> sounds pretty meta °^°
marijnfs has joined #zig
<pixelherodev> I mean, I'm pretty sure there's a performance boost at runtime (but it's hard to tell because it's erroring out after the first 70k lines now because the refactor isn't complete), and the code is definitely cleaner and easier to read, so... worth it?
marijnfs_ has quit [Ping timeout: 272 seconds]
<metaleap> in that case i'd say yup
marijnfs has quit [Ping timeout: 256 seconds]
Akuli has joined #zig
return0e has quit [Ping timeout: 240 seconds]
plumm has joined #zig
<plumm> andrewrk: for the dump analysis feature, do you think its stable/feature full enough to use in a language server? I'm writing one right now and it's really hard to navigate / tell what things are (under the decls object)
marijnfs has joined #zig
<andrewrk> plumm, it's the best we're going to get in stage1
<andrewrk> I recommend to look at lib/std/special/docs/main.js to understand how the data layout works
<andrewrk> this is the code that renders the auto generated documentation. it uses that very same json dump
marijnfs_ has joined #zig
marijnfs has quit [Ping timeout: 260 seconds]
<jaredmm> Is that the third or fourth language server being developed? Who's going to win!
<andrewrk> plumm, you can also use firefox to view large json files with a tree gui
marijnfs has joined #zig
marijnfs_ has quit [Ping timeout: 272 seconds]
<plumm> andrewrk: thanks for the tips
qazo has quit [Remote host closed the connection]
marijnfs_ has joined #zig
marijnfs has quit [Ping timeout: 260 seconds]
return0e has joined #zig
qazo has joined #zig
marijnfs has joined #zig
marijnfs_ has quit [Ping timeout: 260 seconds]
<jaredmm> If I wanted to say an array of u8 was an array of u16, would that be @bitCast([]u16, type)?
marijnfs_ has joined #zig
<andrewrk> @bytesToSlice
marijnfs has quit [Ping timeout: 272 seconds]
<andrewrk> this is probably deprecated in favor of some userland std.mem function
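That move to userland did happen: the reinterpretation now lives in `std.mem.bytesAsSlice`. A hedged sketch of the `[]u8` → `[]u16` case jaredmm asked about:

```zig
const std = @import("std");

pub fn main() void {
    const bytes = [_]u8{ 0x01, 0x02, 0x03, 0x04 };
    // Reinterprets the byte slice in place; the length must be a
    // multiple of @sizeOf(u16), and the result keeps the source's
    // alignment (here []align(1) const u16).
    const halves = std.mem.bytesAsSlice(u16, bytes[0..]);
    std.debug.print("{} u16 values, first = 0x{x}\n", .{ halves.len, halves[0] });
}
```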
knebulae has quit [Read error: Connection reset by peer]
knebulae has joined #zig
marijnfs has joined #zig
marijnfs_ has quit [Ping timeout: 268 seconds]
marijnfs_ has joined #zig
marijnfs has quit [Ping timeout: 268 seconds]
<andrewrk> oh shit!! guess who just ran behavior tests, all passing with --test-evented-io ?
<betawaffle> you?
<andrewrk> good guess
<betawaffle> CI?
<BaroqueLarouche> man 0.6 release is gonna kick ass for those not tracking master
<andrewrk> let's see if I can get std lib tests passing in evented I/O mode
ur5us has joined #zig
<andrewrk> error: '@Frame(atomic.queue.S.dumpRecursive)' depends on itself
<andrewrk> yep. it sure does
marijnfs has joined #zig
marijnfs_ has quit [Ping timeout: 268 seconds]
marijnfs_ has joined #zig
marijnfs has quit [Ping timeout: 265 seconds]
<andrewrk> std lib tests with --test-evented-io is finding all the recursion...
<plumm> mikdusan: do you have recorded steps for compiling zig on mac os? I can't get static zig and even without it i run into some odd errors. I've uninstalled everything and tried both ways in the readme and the ci file, but no dice
<andrewrk> why do you want static zig?
<plumm> andrewrk: i dont, but i was trying every recorded method of building
<andrewrk> I recommend to follow the readme instructions
<plumm> andrewrk: followed those but i get a linker error when linking zig0
marijnfs_ has quit [Ping timeout: 240 seconds]
marijnfs has joined #zig
<plumm> theres gotta be something im doing wrong because brew install zig --HEAD compiles just fine, so I wonder whats wrong with my environment
<andrewrk> here is a flow chart for solving the problem: 1. follow the recommended path 2. when asking for help, provide enough details so people can troubleshoot
<andrewrk> if I would have started talking about static zig on macos without questioning why you wanted it, we'd be off on a red herring
Akuli has quit [Quit: Leaving]
marijnfs_ has joined #zig
marijnfs has quit [Ping timeout: 272 seconds]
<plumm> andrewrk: i started from scratch and followed those steps, this is what happened: https://gist.github.com/ea49b11928be6fd14f46afe3fb221b73
<andrewrk> plumm, can you verify that the same compiler is used to build the homebrew package and zig?
<andrewrk> this looks like C++ ABI mismatch between what was used to build the homebrew packages and zig
<plumm> andrewrk: from looking at the homebrew formula it seems to just be calling cmake and make, nothing special i dont think, ill try and see
<andrewrk> you can always build llvm/clang from source if homebrew packages don't work for you
<plumm> homebrew packages DO work, I just tried making some local modifications
rjtobin has joined #zig
rjtobin has quit [Remote host closed the connection]
rjtobin has joined #zig
marijnfs has joined #zig
marijnfs_ has quit [Ping timeout: 272 seconds]
<mikdusan> I think andrew's right -- "verify that the same compiler is used to build the homebrew package and zig?" yes
<mikdusan> so at end link error "___gxx_personality_v0" ref'd by (some LLVM symbols)... looks like brew's LLVM was built with gcc and/or libstdc++
<plumm> 3
marijnfs_ has joined #zig
marijnfs has quit [Ping timeout: 260 seconds]
metaleap has quit [Quit: Leaving]
metaleap has joined #zig
plumm has quit [Ping timeout: 265 seconds]
Snetry has quit [Quit: left Freenode]
Snetry has joined #zig
<pixelherodev> Yep! Performance *did* increase, by a noticeable amount!
ur5us has quit [Ping timeout: 240 seconds]
<pixelherodev> In debug mode, takes ~1.65 seconds (down from ~1.9)
<fengb> Spill harder XD
<pixelherodev> In release mode, perf increase is smaller, but still present (now takes ~0.43 seconds, down from ~0.45-~0.5)
reductum has joined #zig
marijnfs_ has quit [Ping timeout: 240 seconds]
marijnfs has joined #zig
plumm has joined #zig
marijnfs has quit [Ping timeout: 240 seconds]
marijnfs_ has joined #zig
<pixelherodev> Make that 0.41-0.42 :)
marijnfs has joined #zig
marijnfs_ has quit [Ping timeout: 268 seconds]
reductum has quit [Quit: WeeChat 2.7]
seoushi has joined #zig
<pixelherodev> And, as if to prove the code is simpler, the IR shrunk by a good chunk also :)
marijnfs_ has joined #zig
<pixelherodev> (though I suppose that really just reflects the increased focus on comptime which isn't fundamentally simpler so...)
marijnfs has quit [Ping timeout: 265 seconds]
<seoushi> just got done watching the last live stream.. wish I would have watched it live, could have helped quite a bit on what sdl functions to use. I do disagree with the frame based approach tho. All the games I have worked on are frame independent. Basically you use delta time to apply things. So you can render your game at 60 fps, 120 fps, or whatever the rate of the monitor is, and the speed of the animations is the same.
<pixelherodev> Delta time is nice :)
<seoushi> yeap
<pixelherodev> It's how I made my emulator pretend to be fast when running as WASM
<BaroqueLarouche> deltaTime FTW
<pixelherodev> Which reminds me, I should take that down
<pixelherodev> It was a fun experiment, but I'm *not* going to encourage usage of WASM even just a little bit
<BaroqueLarouche> if you use variable delta time, it will work even if you use vsync or not
<seoushi> yeah
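The variable-delta-time loop being advocated can be sketched in Zig like this (illustrative only; the speed constant and frame count are made up):

```zig
const std = @import("std");

pub fn main() !void {
    var timer = try std.time.Timer.start();
    var x: f32 = 0.0;
    const speed: f32 = 100.0; // units per second, independent of frame rate

    var frame: usize = 0;
    while (frame < 3) : (frame += 1) {
        // timer.lap() returns nanoseconds elapsed since the previous lap.
        const dt = @as(f32, @floatFromInt(timer.lap())) / std.time.ns_per_s;
        // Scaling by dt means the same displacement per wall-clock second
        // at 60 fps, 120 fps, or with vsync off.
        x += speed * dt;
    }
    std.debug.print("x = {d}\n", .{x});
}
```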
<plumm> pixelherodev: why?
<seoushi> I did learn about embedding files into the binary tho. that is cool. I do wonder if it makes compile times larger if you have a large data set or not tho. Seems like a nice way to package everything together tho
<plumm> seoushi: try `@embedFile("/dev/urandom")` :)
<seoushi> lol
marijnfs has joined #zig
marijnfs_ has quit [Ping timeout: 256 seconds]
<BaroqueLarouche> @embedFile is so useful for platforms without a filesystem like the GBA
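`@embedFile` bakes a file's bytes into the binary at compile time, which is why it suits filesystem-less targets like the GBA. A minimal sketch (the path `assets/logo.bin` is made up; it must exist relative to the source file for this to compile):

```zig
const std = @import("std");

// The file contents become a compile-time constant byte array.
const logo = @embedFile("assets/logo.bin");

pub fn main() void {
    std.debug.print("embedded {} bytes\n", .{logo.len});
}
```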
<pixelherodev> plumm, because the modern web is a freaking mess and WASM is like trying to put a band aid on a missing limb?
<pixelherodev> Or, rather, on extra limbs that just grew out of nowhere?
<fengb> Sigh wasm isn’t the web and has almost nothing to do with it
<pixelherodev> I mean, *I* don't run JS/WASM almost *ever*
<pixelherodev> Rephrase then: I'm not going to encourage usage of Emscripten / "webapps"
<seoushi> BaroqueLarouche, when I did 3ds programming we basically had a program that would take an image and create a c file for resource embedding.
<pixelherodev> You mean `xxd -i`? :P
<pixelherodev> Or `convert`?
<BaroqueLarouche> seoushi: homebrew or commercial ^
<BaroqueLarouche> ?
<seoushi> BaroqueLarouche, commercial
<BaroqueLarouche> nice!
<BaroqueLarouche> commerially I only worked on PS3, Xbox 360, PS4, Xbox One, Switch and PS Vita
<BaroqueLarouche> *commercial
<seoushi> I only have done 3ds and mobile (ios, android and brew)
<seoushi> sorry I mean ds.. the 3ds wasn't around back then hah
marijnfs_ has joined #zig
marijnfs has quit [Ping timeout: 256 seconds]