ChanServ changed the topic of #zig to: zig programming language | https://ziglang.org | be excellent to each other | channel logs: https://irclog.whitequark.org/zig/
<marler8997> 14 open issues
<pixelherodev> Yep :)
mokafolio has quit [Quit: Bye Bye!]
mokafolio has joined #zig
joaoh82 has joined #zig
joaoh82 has quit [Ping timeout: 240 seconds]
CommunistWolf has quit [Ping timeout: 260 seconds]
CommunistWolf has joined #zig
greisean has quit [Ping timeout: 264 seconds]
a92 has joined #zig
cole-h has quit [Ping timeout: 240 seconds]
lunamn has joined #zig
osa1_ has joined #zig
osa1 has quit [Ping timeout: 240 seconds]
a92 has quit [Quit: My presence will now cease]
earnestly has quit [Ping timeout: 264 seconds]
xackus has joined #zig
xackus_ has quit [Ping timeout: 240 seconds]
joaoh82 has joined #zig
spiderstew has quit [Ping timeout: 244 seconds]
waleee-cl has quit [Quit: Connection closed for inactivity]
xackus has quit [Ping timeout: 272 seconds]
marnix has joined #zig
marnix has quit [Read error: Connection reset by peer]
marnix has joined #zig
xackus has joined #zig
Techcable has quit [Ping timeout: 256 seconds]
nullheroes has quit [Ping timeout: 256 seconds]
nullheroes has joined #zig
Techcable has joined #zig
isolier2 has joined #zig
isolier has quit [Read error: Connection reset by peer]
isolier2 is now known as isolier
nullheroes has quit [Ping timeout: 240 seconds]
Techcable has quit [Ping timeout: 256 seconds]
marnix has quit [Ping timeout: 272 seconds]
marnix has joined #zig
Techcable has joined #zig
pfg_ has quit [Quit: Leaving]
nullheroes has joined #zig
lucid_0x80 has joined #zig
marnix has quit [Read error: Connection reset by peer]
marnix has joined #zig
dumenci has joined #zig
lucid_0x80 has quit [Ping timeout: 260 seconds]
<pjz> I have a const []u8 that I got from @embedFile... presuming the sizes match, how do I cast it to a [x][u]u8?
decentpenguin has quit [Read error: Connection reset by peer]
decentpenguin has joined #zig
osa1_ is now known as osa1
joaoh82 has quit [Ping timeout: 272 seconds]
mgxm has quit [Ping timeout: 256 seconds]
joaoh82 has joined #zig
gpanders has quit [Ping timeout: 256 seconds]
gpanders has joined #zig
sord937 has joined #zig
osa1 has quit [Quit: osa1]
cole-h has joined #zig
osa1 has joined #zig
mgxm has joined #zig
Kuraitou has quit [Quit: No Ping reply in 180 seconds.]
RoguePointer has quit [Quit: <>]
RoguePointer has joined #zig
gazler_ is now known as gazler
joaoh82 has quit [Ping timeout: 246 seconds]
<ifreund> pjz: casting const to mutable and modifying the memory would be UB
<ifreund> if you're ok with a *const [x][u]u8 then a @ptrCast() should work fine
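A minimal sketch of the cast ifreund describes, assuming a hypothetical file name and dimensions, and using the two-argument @ptrCast form current at the time of this log:

```zig
// data has type *const [len:0]u8, where len is the embedded file's size.
const data = @embedFile("sprites.bin"); // hypothetical file

const rows = 16;
const cols = 32; // assumed: rows * cols == data.len

// Reinterpret the bytes as a pointer to a 2D array. The pointee stays
// const; casting away const and writing through it would be UB.
const grid = @ptrCast(*const [rows][cols]u8, data);
```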
hspak has quit [Quit: Ping timeout (120 seconds)]
hspak has joined #zig
hnOsmium0001 has quit [Quit: Connection closed for inactivity]
cole-h has quit [Ping timeout: 256 seconds]
earnestly has joined #zig
marnix has quit [Remote host closed the connection]
marnix has joined #zig
hlolli_ has joined #zig
wilsonk has quit [Read error: Connection reset by peer]
veltas has joined #zig
fntlnz has joined #zig
marnix has quit [Ping timeout: 256 seconds]
marnix has joined #zig
koakuma has joined #zig
<ikskuh> heya
<koakuma> Hey
<koakuma> Anyone knows what is happening here? I've never done dev on Windows and I don't understand anything about this error
<ikskuh> is there a reason symbols might be unresolved on windows, but not on linux? they are defined in the correct def file
<koakuma> Hmmm, dunno. The PR that causes the error only changed some of the endianness macros, it shouldn't have any effect on lld, right?
marnix has quit [Read error: Connection reset by peer]
marnix has joined #zig
osa1_ has joined #zig
osa1 has quit [Ping timeout: 256 seconds]
osa1_ is now known as osa1
koakuma has left #zig [#zig]
<ikskuh> nevermind my previous question
fntlnz has quit [Remote host closed the connection]
midgard has quit [Ping timeout: 260 seconds]
midgard has joined #zig
notzmv has quit [Ping timeout: 265 seconds]
fntlnz has joined #zig
fntlnz has quit [Ping timeout: 256 seconds]
midgard has quit [Quit: midgard disappears from view]
midgard has joined #zig
donniewest has joined #zig
waleee-cl has joined #zig
donniewest has quit [Client Quit]
donniewest has joined #zig
notzmv has joined #zig
Biolunar has quit [Ping timeout: 264 seconds]
<marler8997> andrewrk, been thinking about what you said about concrete types vs generics, it's making me think about what Zig would look like without generics, actually, I think it could be done. Basically any time you would need generics, you just use indirection instead (this is what Python does under the hood). Something to ponder about
Biolunar has joined #zig
<ifreund> marler8997: I'm curious about what you're saying but I don't follow. You mean zig without comptime? "generics" are kind of just a side effect of that feature
<ikskuh> marler8997: can you elaborate that?
<ikskuh> every variable in python is just of zig type "anytype" :D
<marler8997> comptime and generics are orthogonal
<marler8997> by generics the only thing I mean is "anytype"
<marler8997> so, replace every anytype with AnyType, which contains a pointer to a value and type metadata
<companion_cube> Then it has runtime costs
<marler8997> This may seem like it incurs runtime overhead, but the idea is that Zig's optimizer should be able to remove any overhead incurred
<companion_cube> How do you allocate such values?
<marler8997> companion_cube, well first, you wouldn't need to allocate anything if the optimizer determines you don't need to
<g-w1> would this allow types to be used at runtime?
<ikskuh> i think it's a horrible idea and adding more complexity than there is right now
<marler8997> otherwise, if you do need the overhead, the compiler has multiple methods available such as 1) instantiating multiple functions (basically what generics do today) 2) instantiating one function that supports variant types
<dominikh> I shouldn't have to write my code differently depending on how good the optimizer was, especially because optimizer decisions change over time
<companion_cube> marler8997: allocations are explicit, the optimizer won't remove them AFAIK
<ikskuh> marler8997: relying on the optimizer is a horrible idea and imho leads to bad design decisions
<marler8997> companion_cube: you wouldn't allocate anytype parameters explicitly
<dominikh> if the compiler has to fall back to generating multiple functions, anyway, what's the benefit of having dynamic dispatch sometimes?
<ikskuh> AnyType is not anytype though
<dominikh> actually, what's the benefit at all here?
<marler8997> I would hold off making any judgements on the topic without fully understanding it
<ikskuh> your proposed AnyType doesn't seem to have compile time invariants
<g-w1> would this be dynamic dispatch for anytype?
<marler8997> g-w1, yes
<ikskuh> also why a pointer?
<ikskuh> and what type has that metadata?
<ikskuh> std.builtin.TypeInfo?
<ikskuh> is it runtime accessible?
<dominikh> ikskuh: pointer because your AnyType needs to have a known, fixed size now
<marler8997> is it runtime accessible, it could be or it could not be
<marler8997> why a pointer? because Zig's model requires parameters to have a fixed size
<marler8997> so, you would code all these things like they are available at runtime
<ikskuh> so you cannot ever store a value of anytype
<marler8997> but then the optimizer could remove everything
<companion_cube> `&dyn Any`, in rust parlance..
<ikskuh> *AnyType
<companion_cube> (which is not optimized away)
<g-w1> this actually seems like a way to make zig simpler imo
<ikskuh> because i can not store a value in AnyType
<ikskuh> you can also do everything with todays semantics
<marler8997> AnyType wouldn't be a pointer
<dominikh> there's nothing unique here, it's just a pointer to type information or a vtable or however you want to implement it, and a pointer to the data. Rust's dyn, objects in many dynamic languages, interface values in Go, ...
<ikskuh> but it conains a pointer
<marler8997> I mean, it wouldn't be *AnyType
<ikskuh> no, it would be "*i32"
<marler8997> yes it contains a pointer
<ikskuh> which means i either need an allocator
<ikskuh> to free the value
<ikskuh> or i do not own the value
<marler8997> well, not necessarily, the pointer could be optimized away
<companion_cube> Sounds like a fat pointer with type info, or a vtable
<ikskuh> which means i don't get to decide to store it for later use
<ikskuh> marler8997: const T = struct { foo: AnyType };
<ikskuh> var t: T = undefined;
<marler8997> ikskuh yes, you're no longer the one who decides these things, the optimizer does, that is a key difference here
<ikskuh> { var i: i32 = 10; t = .{ .foo = i }; }
<ikskuh> log.info("{}", .{ t });
<ikskuh> now i have accessed invalid memory
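ikskuh's fragment, assembled into one piece (AnyType here is the hypothetical fat-pointer type under discussion, not real Zig):

```zig
const T = struct { foo: AnyType };
var t: T = undefined;
{
    var i: i32 = 10;
    t = .{ .foo = i }; // fat-pointer semantics: captures &i implicitly
} // i goes out of scope here, so t.foo now dangles
log.info("{}", .{t}); // reads invalid memory
```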
<marler8997> fn foo(x: AnyType) void { }
<marler8997> foo(100)
<ikskuh> point is:
<ikskuh> i can store values that are passed as anytype
<ikskuh> but cannot store AnyType
<marler8997> you can store anytype
<marler8997> I mean AnyType :)
<marler8997> but keep in mind, the optimizer can remove any need to store AnyType at runtime
<ikskuh> only if i ever allow AnyType to be used as arguments
<ikskuh> and forbid it to be stored anywhere except the stack
<marler8997> no
<ikskuh> otherwise it will be a footgun par excellence
<marler8997> even if you use AnyType as an argument, it doesn't mean you'll need to support it at runtime
<dominikh> marler is proposing to use dynamic dispatch when the optimizer can optimize the pointer indirection away, and to fall back to monomorphization if it can't. I still fail to see the benefit of that, but hey
<ikskuh> as it's a pointer, but you don't need to pass a pointer
<ikskuh> and i can't see how that wouldn't be the biggest footgun ever
<marler8997> dominikh I don't know what monomorphization means but it *sounds* correct...lol :)
<ikskuh> i don't even talk about dispatch or whatsoever
<marler8997> ikskuh you wouldn't pass a pointer
<dominikh> marler8997: generate multiple copies of a function, one per concrete type of the generic argument
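For reference, a small illustration of the monomorphization dominikh describes, using today's anytype:

```zig
// With current Zig semantics, one copy of `max` is compiled per
// distinct argument type (monomorphization): no runtime type info,
// no indirection, each instantiation is a plain concrete function.
fn max(a: anytype, b: @TypeOf(a)) @TypeOf(a) {
    return if (a > b) a else b;
}

test "monomorphization" {
    _ = max(@as(u32, 1), 2); // instantiates max for u32
    _ = max(@as(f64, 1.5), 2.5); // separate instantiation for f64
}
```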
<companion_cube> ikskuh: you mean, only if it's a ref :)
hnOsmium0001 has joined #zig
<companion_cube> As in C++ or rust
<ikskuh> companion_cube: marler8997 said it's always a ref
<marler8997> dominikh, the optimizer would HAVE to support that as well
<marler8997> it's always a "ref" in terms of source code, but it could be a value once the optimizer gets through
<dominikh> but your degenerate case is just the same as the status quo, so what's the benefit of dynamic dispatch?
<ikskuh> marler8997: how can i do this then? fn(foo: anytype) @TypeOf(foo){ return foo; }
<companion_cube> No I mean the always as arg, on stack, etc. behavior
<companion_cube> That's a ref :)
<ikskuh> marler8997: the optimizer isn't in place here
<ikskuh> please think broader
<marler8997> dominikh, the benefit is the language no longer needs to support generics with anytype
<ikskuh> and accept that we can store *any* type in structs
<ikskuh> it will still need it. two examples are above
<ikskuh> "the optimizer" is a implementation detail
<ikskuh> my examples don't require code generation to fail
<companion_cube> marler8997: so simplifying a bit the language but making it dependent on an optimiser? Meh
<marler8997> ikskuh, you're acting as if I'm a proponent of this feature and you're trying to attack it saying you don't want it
<marler8997> I'm not saying I want this feature
<marler8997> I'm just wanting to discuss the implications of it
<ikskuh> it's not that i don't want it
<marler8997> what would the language look like
<ikskuh> i'm just discussing the implications ;)
<ikskuh> like "everything explodes when people use it a tad wrong"
<marler8997> "the optimizer isn't in place"...I'm not sure why you said that then
<marler8997> the discussion assumes the optimizer could do this
<ikskuh> because it breaks at semantic levels already
<ikskuh> before the optimizer has even run
<ikskuh> or any code was generated
<marler8997> ikskuh your example wasn't the proposed semantics
<marler8997> fn (foo: AnyType) void {} foo(100)
<companion_cube> If you take AnyType to be GObject I think you get what marler8997 is talking about
<marler8997> from the callers perspective, there are no pointers
<ikskuh> i got to go now
<ikskuh> i hope i can discuss it in roughly an hour
<dominikh> step 1: pretend the optimizer doesn't exist. the optimizer is there to _optimize_ your code, not change its semantics.
<ikskuh> i want to understand why my examples will not break the language then
<dominikh> any mention of the compiler is irrelevant for discussing the _semantics_
<marler8997> ikskuh yeah me too, if you find an issue it would be good to know
<dominikh> *of the optimizer
<marler8997> dominikh yes
<companion_cube> +1 for semantics not relying on an unspecified optimiser
<marler8997> but the first thing people say is "runtime overhead"
<dominikh> ignore runtime overhead
<marler8997> ok, good idea
<marler8997> discuss the merits of removing generics from the language for dynamic dispatch?
<companion_cube> First talk about how it works, then about how to make it fast
<dominikh> I still don't see the benefit on a semantic level. "Zig wouldn't need to support anytype", no, but now it needs to support AnyType, which is virtually the same thing in terms of desired semantics. people will want access to the type information, for example
<dominikh> and access the fields of specific, concrete types
<companion_cube> dominikh: people would access type info at runtime, not comptime, correct ?
<companion_cube> That's a big difference
<dominikh> is it a big difference in terms of complexity?
<companion_cube> marler8997: there would still be basic generics with comptime, anyway.
<companion_cube> No, but in terms of perf it can be
<marler8997> companion_cube, semantically yes (but remember, could be optimized)
<marler8997> dominikh, it means removing a big feature of the language
<companion_cube> Except, not
<dominikh> also, it really can't work without runtime allocations, unless you forbid passing AnyTypes around. and forbidding that would be a big deficit
<companion_cube> How could it be optimized if it's runtime?
<companion_cube> You get into JIT land there
<companion_cube> It's a whole different world
<marler8997> companion_cube, it could be optimized into exactly what generics do today
<marler8997> dominikh, runtime allocations optimized into no allocations
<dominikh> can I do `var myGlobal: AnyType = undefined; fn foo(x: AnyType) void { myGlobal = x; }`? if yes, you need allocations. if no, AnyType is a bad idea.
<companion_cube> So you'd still monomorphize for each different type?
<companion_cube> What even is the point?
<marler8997> dominikh, yes there will be examples where runtime allocations may be necessary
<dominikh> if runtime allocations are necessary, there needs to be a way for the user to do said allocation.
<marler8997> companion_cube, the point is to remove a big feature of the language to make it simpler
<dominikh> and since the optimizer is _optional_ (and fallible), the user needs to do that allocation all the time
<dominikh> not just when the optimizer can't figure things out
<dominikh> a code change in one function can change what the optimizer can do a thousand lines away
<marler8997> dominikh the user needs to do allocation all the time because the optimizer is fallible?
<marler8997> huh?
<marler8997> dominikh, yes, changing code in one place can change the entire program!
<dominikh> without the optimizer, `foo(100)` needs to allocate storage for the pointer in AnyType
<dominikh> and since allocations are explicit in Zig, the user has to make that allocation
<marler8997> that could definitely be one of the weaknesses, the programmer has "less" control over how the final output is manifested
<marler8997> dominikh yes, then no
<dominikh> the optimizer changes between versions of Zig. will my code stop compiling if a new version can no longer optimize the pointer away?
<marler8997> Zig allocates an lvalue on the stack and passes a pointer to it...Zig already supports this behavior
<dominikh> you can't use a stack pointer if the pointer outlives the stack; as it does in my example
<marler8997> dominikh, optimizer has nothing to do with whether something compiles
<dominikh> where the AnyType gets stored in a global
<dominikh> marler8997: it does when you depend on the optimizer to remove the need for allocations…
<marler8997> dominikh, we aren't depending on the optimizer to change the semantics of the source code
<dominikh> I will repeat my example: `var myGlobal: AnyType = undefined; fn foo(x: AnyType) void { myGlobal = x; }` – how will the user call foo, with the argument 100, making sure the value is allocated on the heap (because it has to be on the heap.)
<marler8997> your example is incomplete
<marler8997> how is myGlobal being used?
<marler8997> if it's not being used, then nothing gets generated
<dominikh> fine, export it then
<dominikh> now it's always used
<marler8997> what do you mean by "export" specifically?
<dominikh> export as in the 'export' keyword.
<marler8997> so you want to make AnyType available to C code?
<dominikh> ... just assume that the variable gets used in an arbitrary number of places. it doesn't matter how exactly. it'll get used, its usage will be dynamic
<marler8997> not necessarily
<dominikh> the value passed to the function will outlive the stack frame
<dominikh> that's the point
<marler8997> if foo gets called once with a comptime_int of 100
<dominikh> _assume for the sake of argument_. I don't care to write a 100 line program to demonstrate to you that a value passed to a function can get stored in a global variable...
<marler8997> then myGlobal will be of type comptime_int
<dominikh> don't tell me all the special cases that can work. explain how the general case works
<marler8997> it won't even have memory at runtime
<companion_cube> I know zig is not rust
<companion_cube> But what rust does for this is interesting
<companion_cube> You need to specify in types when you want dynamic dispatch
<marler8997> dominikh, I followed your constraint and myGlobal being of type comptime_int adheres to it
<marler8997> the value passed to the function is stored in a global
<marler8997> the next constraint could be, assume foo is called with a runtime value, yes?
<dominikh> sorry, but I've lost interest in this conversation
<marler8997> sorry, I'm a math guy so I'm very explicit, and it can slow things down
<marler8997> I like where you're going with this, I'm just trying to be very precise
<dominikh> you're not being explicit at all. your description of AnyType is woefully underspecified
<marler8997> you're correct
<marler8997> you didn't ask what it was :)
<marler8997> but everyone here seems to have strong opinions on it
<g-w1> maybe a github proposal just to formalize it would be better than an argument on irc
<marler8997> it's not a proposal though
<marler8997> I just want a discussion on how it would affect the language
<dominikh> the point of the matter is that there exists a program where the concrete type is not known at compile time and the value outlives its stack frame, needing to live on the heap.
<marler8997> what dominikh is doing is what I'm looking for
<marler8997> dominikh I agree but we need to nail down a specific example to address it....foo(argv[0]) for example
<marler8997> foo(argv[0]) and foo(parseInt(argv[1]))
<marler8997> so foo is called with a runtime slice and a runtime u32 lets' say
<marler8997> this would force myGlobal to handle both slices and u32
<marler8997> the ideal optimizer would turn myGlobal into a tagged union union { Slice: []const u8, Int: u32 }
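Written out, the transform marler8997 is imagining (entirely hypothetical; no compiler performs it) would rewrite the erased global into a closed sum over the types actually passed:

```zig
// foo is only ever called with a []const u8 and a u32, so a
// whole-program pass could, in principle, specialize the erased
// AnyType global into a tagged union over those two cases:
const MyGlobalRepr = union(enum) {
    Slice: []const u8,
    Int: u32,
};
var myGlobal: MyGlobalRepr = undefined;
```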
<dominikh> please stop mentioning the optimizer :)
<marler8997> and the function foo, could support this tagged union, or could be generated multiple times
<dominikh> but give me a second and I'll give you a straightforward example that doesn't even need multiple types
<marler8997> dominikh, you're wanting to discuss the runtime implications of this
<companion_cube> Oh that's straight up whole program optimization, too
<marler8997> you can't do that without talking about the optimizer
<marler8997> companion_cube, yes
<companion_cube> Compile times go brrrr
<dominikh> if you can't avoid the optimizer, then the idea is not worth discussing IMO.
<marler8997> dominikh, well the whole idea depends on the optimizer to maintain Zero Cost Abstraction
<marler8997> if we can't maintain Zero Cost Abstraction and remove generics, then the whole idea is DOA
<dominikh> we already established that performance is irrelevant when discussing the semantics
<dominikh> we care about inputs and outputs, not how fast it goes
<dominikh> well then it's DOA
<marler8997> dominikh, you're saying we can't support Zero Cost Abstraction with this?
<companion_cube> Imho there's perf without an optimizer
<companion_cube> And perf with it
<companion_cube> And both are important
<dominikh> the optimizer can't optimize _all_ cases, which means either a) explicit allocations b) falling back to monomorphization, which doesn't make the compiler any simpler than it currently is
<companion_cube> This idea fails in the no optimizer case
<marler8997> the point isn't making the compiler simpler, it's making the language simpler by removing this feature. I don't know if it makes the compiler simpler or not
<marler8997> companion_cube, what fails?
<marler8997> I'm sure there are issues but I'm looking for an example to demonstrate them
<companion_cube> It would be terrible without a very smart optimizer
<companion_cube> Also, not explicit
<marler8997> dominikh I want to address your issues but I need an example
<companion_cube> Zig is explicit
<marler8997> what isn't explicit?
<companion_cube> The behavior of your idea
<companion_cube> If it allocates
<marler8997> Zig already does this
<companion_cube> Is it static dispatch
<dominikh> not on the heap it does not.
<companion_cube> Does it? Where?
<marler8997> neither does this allocate on the heap
<marler8997> the stack
<companion_cube> Well you said it might be a pointer, and maybe bit
<companion_cube> Not *
<marler8997> to be able to support AnyType, it must expose it as a pointer
<marler8997> probably look something like this: const AnyType = struct { fn typeInfo() TypeInfo; fn ptr() *opaque{} }
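Formatted as a declaration, that sketch amounts to roughly the following (hypothetical type, not real Zig; `c_void` used as the era's erased pointer type):

```zig
// Hypothetical fat pointer: an erased data pointer paired with
// runtime type metadata. This is the shape under discussion,
// not an existing language or std feature.
const AnyType = struct {
    ptr: *const c_void,          // points at the captured value
    info: std.builtin.TypeInfo,  // runtime-queryable type metadata
};
```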
Akuli has joined #zig
TheLemonMan has joined #zig
<g-w1> should I report a bug for this https://share.olind.xyz/f.php?h=3eQZAncR&p=1 ?
dumenci has quit [Remote host closed the connection]
dumenci has joined #zig
<TheLemonMan> g-w1, EFAULT means the buffer is bogus
<TheLemonMan> use gdb and print the slice fields
<ikskuh> okay, i'm back
<ikskuh> marler8997, is the AnyType still a topic?
<g-w1> ah, thanks
<ikskuh> the "pointer + type info" will only work well in case 1), every other case will be more or less borked, assuming AnyType is a regular type and not a keyword
a_chou has joined #zig
a_chou has quit [Remote host closed the connection]
<marler8997> ikskuh yeah definitely
<marler8997> I'm playing around with code
<ikskuh> i don't think it's a good idea. it won't make the language simpler and will require a lot more boilerplate
<ikskuh> and functions like std.math.max will be harder to use
<marler8997> ikskuh maybe, not necessarily just wondering if it's a good idea though
<marler8997> I'm wondering what all the implications are
<ikskuh> yeah
<ikskuh> i hope i made some of them clear in my snippet :)
<marler8997> reading...
<marler8997> I think case 2 would have to return AnyType
<marler8997> I don't see the issue with case 2 though
<marler8997> for case 3, T is AnyType
<marler8997> I don't see what the issue is though
<ikskuh> var foo = makeFoo(@as(u32, 10));
<ikskuh> foo now contains a dangling pointer to a temporary
<ikskuh> with current anytype semantics it doesn't
<marler8997> thinking...
spiderstew has joined #zig
<marler8997> yeah, that simple example does pose an issue
<marler8997> we require using pointers/references because without generics we need a runtime representation of all types, but now how do we copy values...
<marler8997> because all AnyType values are references....
<ikskuh> yep
<ikskuh> that's what i meant with footgun
<marler8997> right that's a no go
<ikskuh> it's a hidden/implicit pointer
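Spelling out the footgun from ikskuh's makeFoo example under the hypothetical capture-by-reference AnyType semantics:

```zig
const Foo = struct { value: AnyType };

fn makeFoo(x: AnyType) Foo {
    return Foo{ .value = x }; // stores the fat pointer, not a copy
}

// The @as(u32, 10) temporary exists only for the duration of the
// call; foo.value now holds a dangling pointer to it.
var foo = makeFoo(@as(u32, 10));
```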
<marler8997> how do you solve that?
<ikskuh> in that case? i think you don't
<ikskuh> you need to init the AnyType from a pointer
<ikskuh> otherwise it will break
<marler8997> ikskuh, well that's just my idea, the question is, how do you solve this problem in a language without generics
<marler8997> throw any of my suggestions out the window if needed
<marler8997> obviously the optimizer could turn an AnyType into values
<ikskuh> Go/Python solve it via garbage collection
<marler8997> but maybe there needs to be some way for AnyTypes to become value types semantically
<ikskuh> this is only true for anything that is in a function
<ikskuh> you are not even allowed to turn it into values for arguments ;)
<ikskuh> because that would violate ABI
<marler8997> that's the other solution, maybe you assume GC and let the optimizer remove it...
<ikskuh> (even an implicit ABI)
<marler8997> sorry we're saying too many things and I'm not matching up our responses
<dominikh> No GC in Zig, please and thank you
<ikskuh> the optimizer can't do shit here. if you put an AnyType into a struct, it will always be a pointer
<ikskuh> there's no way to "optimize" that away
<ikskuh> it would be a heavy change of semantics
<ikskuh> breaking all assumptions
<marler8997> dominikh, again, not talking about proposing features, just discussing how features affect the language
<dominikh> well, GC affects the language negatively :P
<ikskuh> it's not possible to have an "any" type like in dynamic languages without allocations
<ikskuh> either you also pass an allocator to AnyType
<marler8997> ikskuh, you could be right but I'm not sure, for example, Andrew suggested to me that the Allocator interface could get completely optimized away
<ikskuh> only for inlining
<ikskuh> when the compiler recognizes that the function is better off being inlined
<marler8997> dominikh, yes I agree, again, just discussing implications
<ikskuh> if you store the function in a function pointer
<ikskuh> all of that is gone
<ikskuh> the optimizer is not all-mighty
<ikskuh> going to give you a neat demo :)
<ikskuh> and small
<dominikh> can the optimizer solve the halting problem for me?
<marler8997> ikskuh yes that's what I thought yesterday, but thinking about it, I think you could actually optimize dynamic dispatch away
<marler8997> with whole program optimization
<ikskuh> no, even then, you can't
<companion_cube> whole program optimization is the surest recipe to insane compilation times
<dominikh> even whole program optimization cannot determine all runtime behavior.
<ikskuh> that would be solving the halting problem
<companion_cube> look at MLton for an idea.
<marler8997> dominikh, I don't think your comments are being productive here, I'm here to discuss and learn
<dominikh> Saying that something is objectively impossible is not productive?
<companion_cube> dominikh: you can certainly do some cool things in whole program optimizations, like turning a dynamic type into a sum type
<companion_cube> also, remove higher order functions (closures) in the same way
<companion_cube> it's just very costly
<marler8997> goodness, people, I'm not saying these things are good ideas, I'm trying to talk about how these features affect the language, good or bad
<dominikh> companion_cube: at least if you don't allow runtime type creation, yes
<dominikh> You keep saying that you're not proposing an idea, but any time someone points out the bad implications, you feel personally attacked...
<ikskuh> error: unrecognized optimization mode: 'ReleaseFat'
<dominikh> The idea has flaws, we're pointing them out, and you wave your hands
<ikskuh> whoops :D
<justin_smith> marler8997: I think they are plainly describing how the change you describe would break the language semantics
<marler8997> I'm talking about how these features would affect the language and then people are coming back with, "yeah but I don't like that idea because X"
<fengb> The opposite of ReleaseSmall eh?
<marler8997> the purpose isn't to know whether they are good ideas, it's how would these features affect the language...
<dominikh> you're the only person using the word "like". everybody else is objectively pointing out how it breaks the language and its design goals
<companion_cube> dominikh: ah, yes. runtime type creation makes 0 sense to me so I ignore it…
<companion_cube> ikskuh: ReleaseSlim
<dominikh> companion_cube: yeah, I've never really had a need for it, either.
<marler8997> dominikh, yes, I'm not saying these ideas don't break the language
<companion_cube> to me it doesn't really make sense at al
<companion_cube> all*
<companion_cube> (unless you have the whole compiler at runtime?!)
<justin_smith> marler8997: right, they affect the language by breaking it (or creating some very weird special cases, arguably equivalent)
<ikskuh> oh wait :D
<marler8997> justin_smith, again that's not the point whether these ideas break the language
<marler8997> I'm not proposing features
<marler8997> I'm trying to have a discussion
<dominikh> what is the point in discussing a change that is not viable to begin with?
<ikskuh> this is the correct one
<marler8997> dominikh: to learn
<ikskuh> dominikh: learning language semantics and how features affect other things
<marler8997> to understand what makes features good or bad
<ikskuh> i think marler8997 didn't see the "dangling pointer" problem
<marler8997> to gain mutual understanding
<dominikh> I'll let you two have fun, then.
<justin_smith> marler8997: it's not the idea that breaks the language, it's the feature as you describe it, and other people are explaining why it would break the language, I don't see the problem
<marler8997> justin_smith, I think you're right
<marler8997> but I want to understand specifically how it breaks the language
<marler8997> that's what the discussion is for
<justin_smith> marler8997: so you want an explanation in abstract terms, instead of specific examples of things that would not work?
<companion_cube> it breaks some properties such as: fast compilation, explicit allocations, semantics not relying on an optimizer
<marler8997> justin_smith both are good
<ikskuh> Rust has the "semantics rely on the optimizer" a lot
<dominikh> it does?
<ikskuh> yeah
<marler8997> but some of the things people bring up don't necessarily apply
<dominikh> got an example?
<dominikh> (out of curiosity; I'm not well versed in Rust)
<ikskuh> a lot of Rusts zero-cost abstraction cost a shitton of instructions when not having optimized builds
<companion_cube> ikskuh: why does it?
<dominikh> okay, but that's not really a semantic issue, is it?
<companion_cube> I mean, it needs inlining, but what language doesn't now?
<marler8997> ikskuh what is your example trying to show?
<justin_smith> companion_cube: maybe something like TCO counts here? just speculating
<companion_cube> inlining is definitely not in the same category as "trying to see through an AnyType"
<ikskuh> marler8997: optimizer cannot optimize indirection with side effects
<companion_cube> TCO is not guaranteed in rust or zig…
<dominikh> to be fair, there's probably at least some devirtualization, too
<marler8997> ikskuh, yes I agree, I don't need to see an example to agree with that
<companion_cube> in languages that rely on it it's not an optimizer thing
<ikskuh> companion_cube: TCO?
<companion_cube> tail call optimization
<companion_cube> a very bad name
<ikskuh> ah
<companion_cube> it's not an optimization if it's guaranteed by the language (ML, scheme, etc.)
<marler8997> sorry I'm a bit lost on our thread, what was the point you were making?
<ikskuh> yeah, luckily zig has it as an opt-in feature :)
<justin_smith> ikskuh: rewriting the return location on the last call, and reusing the same stack frame
<ikskuh> marler8997: was just talking about that the optimizer cannot optimize certain things
<marler8997> ikskuh, yeah I agree
<marler8997> was that fact up for debate somehow?
<justin_smith> sorry, bringing up TCO was a red herring, just a random example of an optimization that would break things if you relied on it and it was turned off
<marler8997> the question I thought we were on was is there a way to allow AnyType to decompose to values without using generics?
<ikskuh> marler8997: you assumed that the compiler might optimize away all AnyType
<ikskuh> which isn't possible
<ikskuh> that was my point
<marler8997> I assumed the compiler might optimize away all AnyType?
<marler8997> in some cases yes, in some cases no
<ikskuh> ah
<ikskuh> that wasn't clear to me
<marler8997> gotcha
<novaskell> seems like a lot more work for the compiler
<marler8997> I bring the optimizer up because I think ZeroCost abstraction is very important and necessary
<companion_cube> justin_smith: TCO is a bit my pet peeve, exactly because when you rely on it it's not an optimization :s
<ikskuh> zero cost abstractions are not an optimizer thing though
<ikskuh> they are orthogonal
<marler8997> Removing generics means we won't have Zero Cost abstraction unless..there is a path for the optimizer
<marler8997> ikskuh...uh what?
<marler8997> zero cost abstractions is orthogonal to the optimizer...?
<ikskuh> the abstraction is not zero-cost if it is required to be optimized away
<ikskuh> yeah
<ikskuh> one good example is C++ classes
<marler8997> so...what does C++ mean by zero cost abstractions?
<ikskuh> inheritance is a zero-cost abstraction
<companion_cube> templates, too
<ikskuh> yeah
<ikskuh> it takes exactly the same number of instructions to use the zero-cost abstraction as coding it by hand
<marler8997> C++ prides itself on zero cost abstractions
<ikskuh> if you require the optimizer, it's not zero-cost
<marler8997> ikskuh
<marler8997> what?
<marler8997> so you're saying c++ doesn't have zero cost abstractions?
<ikskuh> it has
<ikskuh> coding a class by hand
<ikskuh> is the same code
<marler8997> but it requires the optimizer for them to be zero cost
<ikskuh> as using just "class Foo {}"
<ikskuh> no
<ikskuh> it does not
<ikskuh> same for templates
<marler8997> huh?
<ikskuh> i can write one function per instantiation
<dominikh> marler is probably referring to things like iterators, which are proclaimed to be zero cost
<ikskuh> or i can use a template
<ikskuh> same cost in the end
<marler8997> you don't need to get that complicated
<marler8997> very simple example is a function call
<ikskuh> that's always non-zero cost
<marler8997> you can abstract logic by putting it into a function, but that has a runtime cost
<ikskuh> the optimizer auto-inlining that call is an improvement, not an abstraction
<marler8997> it only has zero cost when the optimizer can optimize it away
<ikskuh> yes, but that's not an abstraction
<ikskuh> you can also just use macros if you want to skip the overhead for a function call
<marler8997> a function is an abstraction of embedding code directly
<marler8997> it's an abstraction in that it abstracts the context of the caller away from what the function is doing
<dominikh> it's not zero cost, however.
<companion_cube> I agree that rust's iterators are not really 0-cost since they need inlining to be fast
<ikskuh> <dominikh> it's not zero cost, however.
<ikskuh> templates and classes are zero-cost abstractions
<marler8997> it is a "zero cost abstraction" which means that if it doesn't semantically need to be a function at runtime, the compiler can choose to inline it
<ikskuh> that's an optimization
<marler8997> yes
<ikskuh> not an abstraction
<marler8997> but you said that zero cost abstractions and the optimizer are orthogonal
<ikskuh> as said: abstractions are orthogonal to optimizations
<dominikh> the compiler can optimize away addition of two numbers, too. doesn't mean it's zero cost
<marler8997> and I'm giving you counter examples to that
<ikskuh> inlining a function is not an abstraction for the programmer
<marler8997> zero cost is referring to runtime
<dominikh> they are not counter examples. they are examples of abstractions that are not zero cost, that the optimizer can sometimes optimize away
<dominikh> a zero cost abstraction does not need an optimizer to be zero cost
<marler8997> dominikh, you just proved my point
<ikskuh> <dominikh> a zero cost abstraction does not need an optimizer to be zero cost
<dominikh> no, I did not.
<ikskuh> this is the point
<marler8997> my point is that zero cost abstractions and the optimizer are not orthogonal
<ikskuh> inlining requires an optimizer
<marler8997> and you said, it's only zero cost if the optimizer inlines it
<marler8997> thus, you proved my point
<dominikh> yes they are orthogonal. a zero cost abstraction is always zero cost, whether there is an optimizer or not
<ikskuh> but then it's not an abstraction, but a optimization
<dominikh> something being zero cost does not imply it's a zero cost abstraction...
<ikskuh> ^=
<marler8997> c++ relies on the optimizer to claim it supports zero cost abstraction
<dominikh> and we would disagree with C++
<marler8997> but you're saying it doesn't?
<marler8997> "we would disagree with C++"? what?
<companion_cube> some C++ features are 0-cost even without optimizer
<companion_cube> templates are expanded no matter what
<marler8997> companion_cube sure
<marler8997> but ikskuh claims they are orthogonal
<marler8997> not "orthogonal sometimes"
<companion_cube> templates are expanded always
<companion_cube> not sometimes
<companion_cube> I think that's ikskuh's point
<marler8997> yes templates are an example of a zero cost abstraction that doesn't use the optimizer
<marler8997> but he is claiming that all zero cost abstractions are orthogonal to the optimizer
<dominikh> ikskuh posits that an abstraction is not a zero cost abstraction if it requires the use of the optimizer. it's a costly abstraction that sometimes gets optimized away.
<marler8997> dominikh, you just proved my point again
<companion_cube> I kind of agree…
<marler8997> if it got optimized away, then it became a zero cost abstraction
<justin_smith> even in the c++ community not everyone agrees about what "zero cost abstraction" means https://isocpp.org/blog/2020/07/cppcon-2019-there-are-no-zero-cost-abstractions-chandler-carruth
<dominikh> no, it became a zero cost implementation
<dominikh> there's nothing abstract about it anymore
<marler8997> zero cost means you abstracted details away, but didn't pay a runtime penalty
<dominikh> and if you keep claiming that I proved your point when I haven't, at least spell my name correctly :/
<companion_cube> but if it's only on the whims of an optimizer, you can't really rely on it
<marler8997> companion_cube yes of course
<companion_cube> marler8997: domi<tab> will help
<marler8997> but that's not what we're discussing
<companion_cube> is it not?
<marler8997> ikskuh made a very bold claim that zero cost abstractions have NOTHING to do with optimization
marnix has quit [Ping timeout: 260 seconds]
<marler8997> that they are orthogonal
<ikskuh> which i still hold
<marler8997> and I gave a very simple counter example
<marler8997> the function
<ikskuh> if it requires an optimizer to make an abstraction zero-cost, it's not a zero-cost abstraction
<marler8997> function inlining is a counter example to that claim
<marler8997> ikskuh what do you mean by that?
<marler8997> what is your definition of "zero cost abstraction"?
<ikskuh> the point is: you can't toggle the zero-constness of an abstraction
<novaskell> a function is not zero cost though marler8997 unless you implement all "functions" as plain macro expansions
marnix has joined #zig
<marler8997> novaskell, with the optimizer a function call can generate runtime code equivalent to a macro
<marler8997> that's the whole point
<companion_cube> a feature that requires inlining to be 0-cost, isn't 0-cost
<ikskuh> "zero-cost abstraction is an abstraction that i can use which helps me with things which i could otherwise do by hand and not giving any additional runtime cost, even in the absence of an optimizer"
<companion_cube> that's the idea
<marler8997> again
<novaskell> yes, it 'can' but that's not a guarantee
<ikskuh> marler8997: you assume an optimizer
<marler8997> zero cost doesn't mean 0 cost on everything
<marler8997> it means "no runtime penalty"
<ikskuh> yes
<companion_cube> I think it'd be ok if there was a "inline" keyword that refuses to compile if it's not inlined
<companion_cube> not just a heuristic
<ikskuh> no runtime penalty in the case of NO optimizer
<marler8997> I assume an optimizer?
<marler8997> what?
<novaskell> it means it's equivalent to writing the specialization by hand
<dominikh> companion_cube: coincidentally a Zig proposal ;)
<ikskuh> if you assume an optimizer is there, most stuff in most languages get zero-cost
<novaskell> and is a 1:1 mapping of that
<ikskuh> but that doesn't make them a "zero cost abstraction" as the abstraction itself still has a cost, it just gets optimized away later
<marler8997> ikskuh, I'm not sure what that has to do with discussing your claim
<dominikh> calculating 10 digits of pi is "zero cost" if LLVM gets crazy enough
<companion_cube> ikskuh: not sure about "most" :p
<marler8997> your claim was, Zero Cost Abstraction has nothing to do with optimization
<ikskuh> yes
<novaskell> optimization depends on a larger context thus isn't a 1:1 mapping of what you could have written
<ikskuh> and you say: "with an optimizer …"
<ikskuh> remove the optimizer
<marler8997> so maybe our definitions are just different
<marler8997> what do you mean by "Zero Cost Abstraction"?
<ikskuh> abstractions that have no additional cost when having NO optimizer
<ikskuh> like constructors
<marler8997> lol
<ikskuh> Object o;
<ikskuh> Object o = Object_new();
dumenci has quit [Ping timeout: 256 seconds]
<ikskuh> same difference in the end
<marler8997> your definition implies your claim :)
<marler8997> so now I see where we were differing
<marler8997> however
<ikskuh> otherwise the abstraction is not zero-cost
<marler8997> that's not what C++ means by Zero Cost Abstraction
<ikskuh> or are you telling me that addition is a zero-cost abstraction?
<marler8997> you're using a custom definition I'm not familiar with
<ikskuh> c++ has another definition nobody has talked about yet :D
<ikskuh> "you don't pay for it when you don't use it"
<marler8997> The definition C++ uses, is it is an Abstraction that has no runtime cost
<ikskuh> yes, that's the same definition for me
<marler8997> the runtime cost could be taken care of by the optimizer, or templates, whatever
<ikskuh> because C++ standard doesn't define optimization
<ikskuh> optimizers are an implementation detail
<marler8997> the definition doesn't specify what component removes the runtime cost
<marler8997> but your definition did
<ikskuh> or is there a part of the c++ spec where optimizations are mandatory?
<marler8997> your definition specifically excluded the optimizer for some reason...
<ikskuh> ye
<ikskuh> because i'm talking about language level
<marler8997> I don't know where you got your definition from though
<ifreund> he's saying that in debug mode c++ compilers aren't compliant technically
<ikskuh> and i'm not considering implementation details like optimizers
<ifreund> if I understand correctly
<marler8997> but if that's your definition, then your claim follows
<ikskuh> if an abstraction has cost in unoptimized mode, it's obviously not zero-cost
<marler8997> ikskuh, well that follows from your definition of zero cost
<marler8997> but again, that's not the definition C++ uses
<ikskuh> yeah, which is "no runtime cost"
<ikskuh> c++ spec doesn't require an optimizer
<marler8997> I didn't say it requires
<ikskuh> yes
<marler8997> I said, its definition of Zero Cost Abstraction doesn't exclude the optimizer
<ikskuh> → if the abstraction is zero-cost in the spec, it is zero-cost without optimizer
<companion_cube> C++ doesn't have an official definition of "zero cost abstraction"
<companion_cube> it's more of a marketing term
<marler8997> ikskuh oh?
<companion_cube> or a design goal
<marler8997> Maybe we need to look this up and see what C++ is claiming
<ikskuh> probably nothing :D
<justin_smith> ikskuh: I like your definition of "zero cost" - it actually makes sense in terms of the meanings of those words and what I'd want from a good compiler
<companion_cube> from a good language*
<companion_cube> it shouldn't depend on the compiler :p
<marler8997> yeah that would be a cool feature
<justin_smith> but as far as I can tell, "zero cost abstraction" was invented as a marketing term to sell c++, and the way they use the term relies on optimization
<marler8997> but ikskuh says that's not what C++ is saying?
<companion_cube> justin_smith: for some features, not others
<marler8997> companion_cube of course
cole-h has joined #zig
<marler8997> but ikskuh is claiming Zero Cost Abstraction is orthogonal to optimization
<companion_cube> in his definition yes
<marler8997> no, in C++'s definition
<ikskuh> c++ doesn't define optimizations :D
<ikskuh> so it must be orthogonal ^^
<companion_cube> well the definition "no additional runtime cost" is kind of inconsistent with what C++ claims to be 0-cost…
<companion_cube> if it depends on a Sufficiently Smart Compiler™
<marler8997> When a C++ programmer says "Zero Cost Abstraction"
<marler8997> you're saying that an optimizer optimizing away function calls, classes, indirection, vtables, none of that is included in Zero Cost Abstraction
<marler8997> is that right?
<ikskuh> classes are actually zero-cost abstractions
<ikskuh> they don't impose additional runtime cost
<ikskuh> compared to coding it yourself
<marler8997> If you still hold that, I'm pretty sure I can find some videos from Stroustrup where he talks about Zero Cost Abstraction in terms of function inlining and vtables
<ikskuh> vtables are zero-cost abstraction, you can do that by-hand as well :)
<companion_cube> vtables are 0 cost
<marler8997> ikskuh, you didn't answer the question
<marler8997> I specifically asked about the "optimizer"
<ikskuh> and i hold to that
<companion_cube> vtables don't need an optimizer, nor do templates, once again
<marler8997> oh my goodness
<marler8997> I'm not making a claim that all zero cost abstractions require the optimizer
<marler8997> I'm countering the claim that C++'s Zero Cost Abstractions exclude the optimizer entirely, that they are orthogonal
<ikskuh> > What you don’t use, you don’t pay for. And further: What you do use, you couldn’t hand code any better.
<ikskuh> that's stroustrups definition
<marler8997> yes!
<marler8997> how can that not include the optimizer!
<ikskuh> because it is still true for "no optimizer"
<dominikh> I can hand-code things better than an optimizer that failed to optimize something away.
<companion_cube> it must be true of any compilers
<companion_cube> compiler*
<ikskuh> ^=
<marler8997> because it is still true for "no optimizer"?????
<ikskuh> yes
<ikskuh> when i use the same language and code it by hand
<ikskuh> i can get the same result
<marler8997> I don't even know what to say to that
<marler8997> I can't
<ikskuh> if i use c++ and implement vtables by hand
<ikskuh> i get the same result as using classes and inheritance
<ikskuh> sure
<ikskuh> if you switch language of implementation
<ikskuh> (meaning: you switch over to assembly)
<ikskuh> we can get better perf in some cases
<companion_cube> marler8997: it means the optimizer can't fuck it up
<marler8997> you guys are arguing against a claim I am not making
<companion_cube> it seems pretty desirable? you can _rely_ on such features
<marler8997> I am not saying that Zero Cost Abstractions require the optimizer
<ikskuh> you say that some of them might
<marler8997> I'm not saying it should or should not rely on the optimizer
<marler8997> ALL I AM SAYING, is that Zero Cost Abstractions does not exclude the optimizer
<dominikh> we are saying it should not rely on the optimizer.
<marler8997> dominikh, THAT IS FINE
<companion_cube> that was the whole point
<marler8997> I never said it shouldn't rely on the optimizer
<vesim> Hi, is there way to build slice from pointer([*c]T) and size? I need to somehow rebuild that type to free the memory.
<dominikh> saying "does not exclude the optimizer" directly opposes "must not use the optimizer"
<dominikh> we're well aware of what's being argued; are you?
<marler8997> whether or not zero cost abstractions should rely on the optimizer is actually a very interesting topic
<ikskuh> vesim: ptr[0..x]
<marler8997> However, ikskuh claimed that Zero Cost Abstractions and the optimizer are orthogonal, in C++
<ikskuh> and i say: they are not zero-cost if an abstraction assumes a implementation detail
<marler8997> ikskuh yes I can agree with that
<ikskuh> for an abstraction to be zero-cost, it must hold true for *all* compilers
<marler8997> "zero-cost"
<ikskuh> can we agree on that?
<marler8997> but that's not what Zero Cost Abstraction means
<vesim> ikskuh: ouch, that was easy, thanks
<ikskuh> if i cannot rely on an abstraction to be zero-cost, it is not zero-cost
<ikskuh> which means it must be true for all compilers that using this abstraction does not impose additional runtime cost
<ikskuh> can you agree on that?
<marler8997> it depends on what you mean by zero-cost...all you're doing is trying to define what you mean by "zero-cost"
<ikskuh> "zero cost" → "not imposing any additional runtime cost"
<marler8997> you're saying that your definition of 0-cost means it must be guaranteed
<companion_cube> yes
<marler8997> which I think is GREAT!!!!
<companion_cube> that's quite reasonable, isn't it?
<ikskuh> it isn't zero cost if it is only zero-cost by accident
<marler8997> and that's actually one of the biggest weaknesses I see with removing generics
<marler8997> ikskuh, again, you're just defining what "zero-cost" means to you, but be careful because that's not what it means to everyone
<companion_cube> it's hard to tell what people mean by that
<marler8997> if that's the definition you use, then C++ is definitely NOT zero cost
<ikskuh> a lot of c++ features are zero-cost actually :)
<marler8997> yes I agree
<marler8997> but C++ claims a lot of abstractions are zero cost that could only be so with optimization
<companion_cube> where is this claimed?
<marler8997> I recall this video: https://www.youtube.com/watch?v=zBkNBP00wJE
<companion_cube> I mean, what feature of C++ is claimed to be 0-cost but would only be with optims?
<companion_cube> (I think rust definitely fails on this one because of iterators)
<marler8997> I think c++ iterators are one feature that requires optimization
<marler8997> function inlining is another
<companion_cube> function inlining is not a "0 cost abstraction"
<companion_cube> it's clearly just an optimizer hint
<dominikh> you guys are going in circles now
<companion_cube> functions cannot be 0-cost
<marler8997> not the inline keyword
<dominikh> the inline keyword is a hint
<companion_cube> sure, it's not an abstraction though
<marler8997> the act of the optimizer inlining a function can make abstracting code into a function a Zero Cost Abstraction
<marler8997> a function is an abstraction
<companion_cube> but it's clear that all functions cannot be 0 cost
<marler8997> companion_cube of course
<companion_cube> contrast that with classes, which are 0-cost
<marler8997> No one ever said they were
<companion_cube> (compared to doing them by hand like in C)
<companion_cube> so functions are not a 0-cost abstraction, end of story
<marler8997> companion_cube, I don't know what you mean by all classes are 0 cost abstractions
<companion_cube> well, they're no more costly than struct + functions
<companion_cube> hence 0 cost
<marler8997> classes are structs
<marler8997> they are the same thing
<marler8997> classes are not an abstraction of structs
<companion_cube> hence - cost
<companion_cube> 0 cost
<dominikh> in what world are functions an abstraction but classes aren't
<marler8997> goodness
<marler8997> functions are an abstraction of hand-written inline code
<marler8997> classes are not an abstraction of structs, classes are structs
<dominikh> classes aren't structs
<dominikh> what struct has methods
<dominikh> or inheritance
<marler8997> they are literally the same thing in C++
<companion_cube> "C with classes" definitely implies that it's higher level than plain C
<marler8997> the only difference, is the default visibility
<companion_cube> ie the class/struct with methods is more abstract than C's plain structs
<marler8997> structs have a default visibility of public and classes default visibility is private
<companion_cube> keywords notwithstanding
<fengb> Zig structs has methods 🙃
<companion_cube> yeah we know.
<companion_cube> (@ marler8997)
<companion_cube> what classes are an abstraction of, is a struct with function pointers inside (for virtual methods)
<ikskuh> <dominikh> classes aren't structs
<ikskuh> in C++ they are :D :D
<marler8997> that's not what "classes" are, that's a virtual method
<marler8997> structs can also have virtual methods in C++
<companion_cube> sure, let's ignore the whole object-oriented terminology
<companion_cube> jeez
<marler8997> because structs and classes are the same in C++
<companion_cube> yeah it's a *terrible* choice of name
<dominikh> okay, cool. structs are a zero cost abstraction, then
<earnestly> With only visibility reversed
<marler8997> ok, abstraction is a relative term
<companion_cube> oppose C++ classes to C struct, ok?
<companion_cube> that's where the difference is
<marler8997> when you use it, you need a specific implementation in mind from which the abstraction is derived
<marler8997> companion_cube
<marler8997> classes are not an abstraction of POD structs
<marler8997> that's not what abstraction means
<companion_cube> they are an abstraction of OO classes, compared to doing it with C struct + functiom pointers
<marler8997> abstraction doesn't mean, thing X does more things than thing Y, so X is an abstraction of Y
<companion_cube> which is the only way to do OO in C
<companion_cube> if you want "object oriented" classes in C you need struct + function pointers, which is quite manual
<companion_cube> C++ gives you a 0 additional cost way: classes with vtables
<companion_cube> that's where the 0 cost is
<companion_cube> it's as cheap as doing it by hand in C
<marler8997> again
<marler8997> abstraction is a relative term
<marler8997> when you say something is a "0 cost abstraction"
<companion_cube> it's an abstraction over struct+ fun pointer; with 0 additional cost
<marler8997> you are saying that coding X will yield no more runtime cost than coding Y
<companion_cube> yes
<marler8997> so what specifically are you saying?
<companion_cube> exactly that
<marler8997> C structs and C++ Classes are two different things
<companion_cube> C++ classes are a 0 cost abstraction over C struct + function pointers
<marler8997> C++ classes aren't an abstraction of C structs
<companion_cube> yes they are
<companion_cube> the name is there for backward compat
<marler8997> you need to be more specific
<marler8997> what specific code are you saying is being abstracted
<earnestly> How is 'struct + function pointers' not specific?
<marler8997> for example
<companion_cube> marler8997: the whole vtable is abstracted!!!!
<marler8997> I can give you an inline function example
<companion_cube> you don't have to write a vtable
<companion_cube> C++ does it for you
<companion_cube> that's the abstraction
<marler8997> the code a+b
<marler8997> could be abstracted with int foo(int a, int b) { return a + b; }
wootehfoot has joined #zig
<marler8997> foo(a, b)
<marler8997> so using foo(a, b) is an abstraction of a+b....but if the optimizer inlines it, then now it is a zero cost abstraction
<marler8997> ok, so just provide me with the specific code and the abstracted code of what you're trying to say
<companion_cube> it's optimized away, it's not 0 cost
<dominikh> round and round we go…
<companion_cube> https://en.cppreference.com/w/cpp/language/virtual there you go, I don't have the patience to write an example
<marler8997> again, that's what C++ means by Zero Cost Abstraction
<marler8997> it doesn't conform to ikskuh's definition of Zero Cost
<companion_cube> no that's what it means by optimizing
<marler8997> but it's not the same as the C++ definition
<companion_cube> inlining a function is an optimization
<companion_cube> it may or may not happen
<marler8997> yes
<companion_cube> yeah dominikh you're right
<companion_cube> I give up
<marler8997> irrelevant to whether it's a zero cost abstraction
<marler8997> according to the C++ definition
<justin_smith> marler8997: one function call's cost can be optimized away, but "the abstraction" here is "function call", which cannot in the general case be zero cost, even in c++ definition, you can't call an abstraction itself 0 cost if only certain highly constrained usages can be optimized
<marler8997> can you provide me with the example?
<marler8997> justin_smith, it's only a zero cost abstraction if the optimizer can inline it
<novaskell> it's zero cost iff it's a guarantee that in all cases foo(a, b) is equivalent to a+b by the language specification
<marler8997> so yes, you're right
<marler8997> novaskell, I agree, but that's not what is meant by "Zero Cost Abstraction"
<marler8997> zero cost != Zero Cost Abstraction
<justin_smith> marler8997: are you attempting to describe an abstraction, or a specific usage of that abstraction, when you apply the adjective "zero cost"?
<earnestly> Nothing is zero cost
<marler8997> at the moment I'm trying to get companion_cube to clarify what he means by saying that classes are zero cost abstractions
<marler8997> zero cost abstractions of what?
<marler8997> he says C struct, but I don't know what he means by that
<earnestly> marler8997: He said, vtables
<fengb> Should we distinguish between zero cost and Zero Cost™? 🤔
<ikskuh> marler8997: classes in C++ are a zero-cost abstraction over coding the object orientation by hand
<ikskuh> we also need the cost for Zero-Plushies!
<ikskuh> Nypsie[m]? :D
<marler8997> of vtables...I suppose you could say it that way
<ikskuh> not only that
<marler8997> sure I'll agree with that
<ikskuh> vtables, constructor, destructor, assignment operators, …
<ikskuh> operator overloading is also a zero-cost abstraction
<marler8997> ikskuh, which zero cost are you using?
<marler8997> your definition?
<marler8997> cause if so, then operator overloading is only zero cost if the optimizer inlines it correct?
<ikskuh> no, that would be the function call thing again
<companion_cube> https://godbolt.org/z/anvT3T @ marler8997
<earnestly> Nothing is zero cost
<companion_cube> compare the C-style stuff with the lower stuff
<ikskuh> "a + b" is a simplification/abstraction of "add(a, b)"
<companion_cube> (pardon my atrocious C++)
<marler8997> earnestly, depends what you mean by zero cost
<earnestly> marler8997: CPU cycles
<earnestly> That's the definition you keep bouncing around with
<earnestly> You think zero cost is about optimisation
<marler8997> does that include compiler CPU cycles, or just the generated program?
<earnestly> lol
<earnestly> marler8997: Give yourself a few minutes
<companion_cube> https://godbolt.org/z/PxYjha actually
<companion_cube> god C++ is terrible
<marler8997> earnestly I'm confused
<companion_cube> 0 cost means runtime
<earnestly> 'nothing' is zero cost, nothing is zero cost. Abstractions that are 'zero cost' are defined to be abstractions that consolidate to the same code you would have otherwise written without the abstraction
<companion_cube> not the time needed to compile
<earnestly> Where the abstraction is there to abstract concerns about the machinary away from you
<earnestly> (There's also the mathematical definition which is far deeper)
<marler8997> earnestly, yeah that sounds right
<earnestly> We don't write with literal 1s and 0s like Von Neumann would have wanted
<dominikh> 1s and 0s are just an abstraction for pushing electrons around
<earnestly> It's turtles all the way down
<marler8997> earnestly, I agree with what you're saying, but am not sure what point you were trying to make?
<companion_cube> (ahahah I don't know how to use `new` this way 😢)
<earnestly> marler8997: You keep confusing abstractions with runtime performance and optimisation
<marler8997> I don't see how what you said excludes optimizations from being used to implement zero cost abstractions
<earnestly> It doesn't exclude them, but they're not related
<marler8997> ok...can you explain that one more?
<marler8997> maybe I'm missing something?
<marler8997> so optimizations *can* be used to implement zero cost abstractions, but zero cost abstractions aren't related to optimizations?
<marler8997> that one confuzzles me
<companion_cube> optimizations can help your program be fast
<companion_cube> but you can't rely on them
<justin_smith> marler8997: in order to say *the abstraction* is zero cost, it needs to be generally zero cost, not merely in special cases as an optimizer would find
<marler8997> companion_cube, reliability is irrelevant here
<companion_cube> no it's not
<marler8997> what's under question is if Zero Cost Abstraction is related to optimization
<justin_smith> so we can't say "function call is zero cost", we have well known use cases for functions that can't ever be inlined
<marler8997> earnestly claims they are unrelated
<dominikh> which is why reliability is not irrelevant...
<dominikh> companion_cube: why'd you come back
<companion_cube> sorry :p
<companion_cube> I wrote some terrible C++ on godbolt, felt the world needed to cry a bit more
<dominikh> I cry plenty as is tyvm :P
<marler8997> again, please be careful with 'zero cost'....in this case zero cost means that the generated code is the same as it would have been without the abstraction
<earnestly> marler8997: I never said unrelated. A venn diagram would include both. The point isn't about optimisation, although it's often one of the goals
<marler8997> earnestly, you said " It doesn't exclude them, but they're not related"
<marler8997> you said that like 15 lines up
<earnestly> marler8997: Sorry, I thought you'd understand how they can cross over
<marler8997> how what can cross over? huh?
<earnestly> marler8997: Do you know what a venn diagram is?
<marler8997> yes
<earnestly> Okay
<marler8997> I have a bachelor's in general mathematics
<marler8997> Here are my definitions. You have code A and code B. They do the same thing but A is an "abstraction" of B. For example, B could be "a+b" and A could be "sum(a,b)". With these definitions, A is said to be a "Zero Cost Abstraction" if it generates the same binary code as B.
<earnestly> That'll do
<marler8997> Note that this depends on the optimizer, the compiler, a lot of things
<companion_cube> marler8997: just add "for all compilers" and we agree
<marler8997> companion_cube, I agree that's a good goal, but I'm just saying that's not what C++ people mean. C++ people are ok to call an abstraction 0 cost if it only is so with the help of the optimizer
<TheLemonMan> so... you've been bickering for the last three hours or so over the definition of zero-cost?
<companion_cube> yep
<marler8997> Andrew himself has said that's also not what he means by zero cost
<marler8997> He claims the allocator interface can be zero cost with the help of the optimizer
<fengb> zero™ cost™ abstraction™
<fengb> One hour per term
<earnestly> marler8997: Sounds like confusing representation and optimisation again :P
<TheLemonMan> is it zero™ cost™ abstraction™ or 0™-cost™ abstraction™
<marler8997> earnestly, representation?
<earnestly> marler8997: You literally provided one in your definition, a+b and sum(a,b) in terms of the code it generates
<TheLemonMan> do you feel represented or not? should the compiler be more inclusive?
<TheLemonMan> I, for one, would like more marler8997 in the compiler
<dominikh> I think it's been plenty inclusive and understanding
<marler8997> omg
<marler8997> you're too much TheLemonMan
<marler8997> earnestly I don't see what you're saying... :(
<earnestly> Too much inclusion and it melts away to nothing. The ultimate zero cost
<marler8997> For anyone wanting to know what C++ means by Zero Cost Abstraction, watch "There are no Zero-cost Abstractions" here: https://www.youtube.com/watch?v=rHIkrotSwcc
<earnestly> This is more about "all abstractions are leaky"
<companion_cube> like, one talk in 2019?
<companion_cube> surely the term was used before :p
<dominikh> 2019 was the year of the zero cost abstraction
<dominikh> not the year of linux on the desktop, unfortunately
<TheLemonMan> 2021 is the year on Hurd on...something, that's what I heard
<dominikh> Hurd on life support?
<fengb> Hurd has life?
<TheLemonMan> there are ~20 days of 2020 still, there's time to gain momentum
<TheLemonMan> and then *boom* we'll al be using gnu slash Hurd
<dominikh> can't wait
<dominikh> would be the perfect end to 2020
<fengb> What if I use Busybox/Hurd?
<novaskell> Guix seems to be pushing work into it lately
<earnestly> Maybe hurd will switch to l4 architecture
<TheLemonMan> I bet its development could pick up some speed if only they introduced some kind of blockchain into it
<TheLemonMan> mine bitcoin while your system is booting, that's the future I want for my kids
<dominikh> you hate children, too?
<TheLemonMan> you gotta suffer to know the real happiness, no?
<novaskell> why not mine bitcoin before you boot? Have it at the minix layer
<g-w1> If I have a comptime u31 and it is at 30 and I do +%= 3 it should be at 2, right?
<andrewrk> marler8997, fwiw I was purposefully avoiding trying to define "zero cost abstractions" in our discussion and the only claim I made was that specifically for the Allocator interface, I thought there could exist an additional LLVM pass that would make it be able to devirtualize the function pointers
<g-w1> never mind, my logic is off again :)
<TheLemonMan> the apple M1 includes a separate core for all your bitcoin mining needs
<TheLemonMan> now sketchy websites can do their thing without interfering with your browsing
<marler8997> andrewrk, right, but that contradicts ikskuh's claim that zero cost abstraction has nothing to do with optimization
<TheLemonMan> g-w1, you don't really like base-2 heh
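g-w1's expectation can be checked directly: `+%=` wraps at the integer type's full range (2^31 for a `u31`), not at the bit-width number 31, so 30 `+%=` 3 just lands on 33. A quick sketch:

```zig
const std = @import("std");

test "+%= on u31 wraps at 2^31, not at 31" {
    var x: u31 = 30;
    x +%= 3;
    // no wrap yet: 33 is well inside u31's range
    try std.testing.expectEqual(@as(u31, 33), x);

    var y: u31 = std.math.maxInt(u31) - 1; // 2^31 - 2
    y +%= 3;
    // (2^31 - 2) + 3 mod 2^31 == 1
    try std.testing.expectEqual(@as(u31, 1), y);
}
```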
<ikskuh> marler8997: zig vtables have actual cost
<marler8997> currently yes
<marler8997> andrewrk is saying they may not in some cases in the future
<andrewrk> requiring an optimization pass to improve the performance of something is a cost
<earnestly> marler8997: (The code that your compiler generates from either sum(a,b) or a+b does not need to be optimal, reliable, constant-time, etc. They only have to be the same)
<marler8997> right, I was talking about runtime cost
<andrewrk> that also wouldn't run in debug builds
<marler8997> I'm always talking about runtime cost when saying "zero cost abstraction", because well, the abstraction itself will by definition have a different comptime cost
<earnestly> Next, negative cost abstractions
<dominikh> how about quantum abstractions?
<ikskuh> comptime is a negative runtime cost abstraction :D
<earnestly> yes lol
<ikskuh> we write code that will not be executed at runtime :D
<dominikh> ikskuh: larger executables have a runtime cost :P
<dominikh> y'all comptime so much code into my binaries
<marler8997> ikskuh, are you still holding to your claim that zero cost abstraction is orthogonal to optimization?
<ikskuh> marler8997: yes
<earnestly> (Obviously no, but I like absurd examples to show it's not about opimisation)
<companion_cube> earnestly: you laugh, but sometimes a `for x in arr { … }` might be faster than the C-like loop in some languages (like rust)
<companion_cube> because it elides bound checks better
<earnestly> companion_cube: Oh yeah
<companion_cube> I know it's tongue in cheek :)
<companion_cube> but it still kind of makes sense
<earnestly> companion_cube: It was an absurd example to demonstrate how it's logically not about optimisation
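companion_cube's point about bounds checks, transposed to Zig (a sketch: the actual codegen depends on build mode and backend, but in safe builds the indexed access carries a per-element safety check that slice iteration doesn't need):

```zig
const std = @import("std");

// Manual indexing: each `arr[i]` is a safety-checked access in
// Debug/ReleaseSafe builds.
fn sumIndexed(arr: []const u32) u32 {
    var total: u32 = 0;
    var i: usize = 0;
    while (i < arr.len) : (i += 1) total +%= arr[i];
    return total;
}

// Slice iteration: the bounds are known up front, so no per-element
// check is required.
fn sumFor(arr: []const u32) u32 {
    var total: u32 = 0;
    for (arr) |x| total +%= x;
    return total;
}

test "both loops compute the same sum" {
    const data = [_]u32{ 1, 2, 3, 4 };
    try std.testing.expectEqual(sumIndexed(&data), sumFor(&data));
}
```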
<ikskuh> andrewrk: do you want to include function deduplication in the language spec eventually?
<ikskuh> would be nice to be able to rely on that
<earnestly> I also mentioned reliability and constant-time execution to hopefully shift them away from optimisations too
<marler8997> andrewrk, do you agree with ikskuh on that claim?
<earnestly> E.g. Progress Sensitive Security for SPARK: https://0x0.st/zJBJ.pdf
<andrewrk> I don't think you two even disagree with each other about reality, you just are using different definitions of words
<earnestly> Yes
<marler8997> yeah that's true I suppose
<fengb> I disagree with reality
<marler8997> but he was saying C++ "Zero Cost Abstraction" excludes optimization
<marler8997> TM
<marler8997> I can't make the TradeMark symbol like fengb :(
<andrewrk> I'm pretty sure when C++ people say "Zero Cost Abstraction" they are talking about runtime behavior after -O2
<marler8997> THANK YOU!!
<marler8997> ikskuh, do you agree so we can move on from this topic?
<ikskuh> haha
<dominikh> you could've dropped the topic hours ago
<jmiven> :-D
<ikskuh> we can also move on without agreeing :D
<fengb> T™ M™
<fengb> You could copy/paste my text >_>
<ikskuh> fengb © T™M™
<marler8997> some of it was productive
<marler8997> definitely not all of it
<ikskuh> we can agree on that for sure :D
<marler8997> Zero Cost Abstraction™
<earnestly> This is fundamentally why the prescriptivist position makes no sense as its products are always vague and meaningless, lol
* andrewrk scrolls up a bit
<ikskuh> also: get yourself a compose key!
<ikskuh> andrewrk: DONT!
<andrewrk> my goodness how long have you two been going at it? xD
<ikskuh> there's only chaos and madness
<marler8997> ikskuh, well what's your stance on C++ here then?
<marler8997> like 3 hours apparently!!!
<earnestly> andrewrk: Something something about pigs and mud
<andrewrk> hey be nice
<ikskuh> i still stand by my word and disagree with the widespread definition of -O2
<earnestly> andrewrk: You don't know the expression?
<ikskuh> <andrewrk> hey be nice
<ikskuh> he has a point, though :D
<ikskuh> i love discussing such stuff, even if it is totally unnecessary
<ikskuh> 🙈
<earnestly> That's exactly what the phrase is about
<jaredmm> Let's debate for three hours about the intended meaning of the phrase.
<earnestly> Bike sheds aren't going to paint themselves
<ikskuh> kek
<ikskuh> hey, i have to do something 3.5h in the train, okaaaay?
kristoff_it has joined #zig
<ikskuh> heya kristoff_it
<ikskuh> you missed the fun part!
<marler8997> ikskuh, I'm picturing you barging into CppCon and when the speaker says their function is a zero cost abstraction you saying "That's not what you mean by Zero Cost Abstraction"....
<ikskuh> :D
<ikskuh> \o/
<ikskuh> chaos!
<marler8997> someone is wrong on the internet and I can't abide!!!
<TheLemonMan> I'd just like to interject for a moment. What you’re referring to as Zero Cost Abstraction, is in fact, bullshit
<ikskuh> no. please don't start that again!
<TheLemonMan> let's save this for the next train ride
<marler8997> another opponent steps into the ring
<marler8997> ok TheLemonMan, I wanna hear your take on it
<ikskuh> TheLemonMan: monday morning, 10:30 :D
<andrewrk> take it to /r/programmingcirclejerk/
<TheLemonMan> marler8997, I was mocking Stallman's famous phrase :(
<TheLemonMan> you're old enough to get the reference!
<marler8997> jesus Stallman
<marler8997> don't bring that madman into this
<ikskuh> andrewrk, some real zig talk for a moment: how do you think about allowing .decls for @Type(.Struct|.Union|…) ? *grin*
<marler8997> but seriously no I don't know the reference :( Are you sure 32 is old enough to know it? How old are you?
<TheLemonMan> 14/F/Cali
<TheLemonMan> wanna sext?
<marler8997> omg lol!!!!
<marler8997> TheLemonMan = TheTrollMan
<andrewrk> ikskuh, not sure what you're proposing
<andrewrk> the decls are exposed with @typeInfo
<TheLemonMan> marler8997, I take my shitposting seriously
<marler8997> ikskuh, you're saying not passing TypeInfo but the sub union types of TypeInfo?
<ikskuh> andrewrk: when reifying a TypeInfo into a type, allow .decls to be populated as well
<ikskuh> right now it's forced to be empty
<andrewrk> ahh right, the opposite of @typeInfo
<andrewrk> isn't there an open issue for that?
<ikskuh> yes, there is
<ikskuh> i was just being curious as i think we haven't had an official statement on that
<earnestly> ikskuh: (Thank you for using 'reifying' instead of 'concretising' *cringe*)
<ikskuh> earnestly: that was the original proposal for @Type
<ikskuh> issue is this one
<ikskuh> i think the "go style struct embedding" would also be possible to be implemented by allowing .decls to be populated *thinking*
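The status quo ikskuh is describing can be sketched like this (note: builtin field names here follow current Zig; the 0.7-era spellings were capitalized, e.g. `std.builtin.TypeInfo` and `.Enum`):

```zig
const std = @import("std");

// Reifying a type from a TypeInfo value with @Type. The .decls field
// must be an empty slice today -- allowing it to be populated is
// exactly the proposal under discussion.
const Direction = @Type(.{ .@"enum" = .{
    .tag_type = u1,
    .fields = &[_]std.builtin.Type.EnumField{
        .{ .name = "left", .value = 0 },
        .{ .name = "right", .value = 1 },
    },
    .decls = &.{}, // compile error if non-empty (status quo)
    .is_exhaustive = true,
} });

test "reified enum behaves like a hand-written one" {
    try std.testing.expect(@intFromEnum(Direction.right) == 1);
}
```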
<marler8997> ikskuh, have you asked yourself this question? "Can the language do without this feature"?
<ikskuh> yes, i did
<ikskuh> and i found out that we can close a lot of other feature requests with it :D
xackus has quit [Quit: Leaving]
<dominikh> but do we need _those_ features
<marler8997> oh? will read
<ikskuh> primary goal for me would be closing the interface topic
<ikskuh> as it would possible to conveniently implement any interface abstraction with it
<ikskuh> or inheritance or whatever
<ikskuh> for me, it's the "i can live without it, but it would enable a lot of requested features to be created as a library instead of a language feature" thingy
xackus has joined #zig
<ikskuh> but i got to go now, train is arriving in a minute
<ikskuh> have to search the logs tomorrow for an answer
<ikskuh> oh, train is 10 minutes late :D
<ikskuh> another thing that would be possible is to create RPC/API bindings based on some comptime definition
<marler8997> so this feature allows you to implement this Interface function that takes a struct and returns another struct?
<ikskuh> yep
<ikskuh> and the other struct is a reference to the first one
<ikskuh> exposing a interface
<ikskuh> like Reader for example
<ikskuh> or Allocator
<ikskuh> interface.zig by alex nask is already doing a similar thing, but uses a wrapper call based on a function name
wootehfoot has quit [Quit: Leaving]
<ikskuh> X.call("foo", .{}) instead of X.foo()
<novaskell> derive scripts!
<dominikh> I do like the Allocator example
nycex has quit [Remote host closed the connection]
<marler8997> with status quo, could you implement this by creating a function that implements Allocator.get?
nycex has joined #zig
<ikskuh> yes, but not with interface functions
<marler8997> oh wait, self
<marler8997> fieldParentPtr
<ikskuh> so you could do
<ikskuh> x = Allocator.get()
<ikskuh> but not
joaoh82 has joined #zig
<ikskuh> x.alloc(u8, 10)
<marler8997> I'm not seeing what makes this difficult to implement today (albeit with some slight modifications); I can try
<ikskuh> go on :D
<ikskuh> you will end up with either x.call("alloc", .{u8,10}) or something completely different
<ikskuh> but you cannot create that exact API
<marler8997> well my idea is different
<ikskuh> it's not possible (to my knowledge)
<marler8997> so, Allocator remains the same
<marler8997> you just need a way to create these wrappers that also handle fieldParentPtr
<ikskuh> that's not the problem
<ikskuh> the problem is exporting the functions
<ikskuh> as callable symbols
<ikskuh> but i got to go now, train is arriving finally \o/
<ikskuh> but please send me your results as a PM here or in discord
<marler8997> ok
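The status-quo pattern marler8997 and ikskuh are circling around is the vtable-plus-@fieldParentPtr idiom (the shape std's Allocator historically used). A minimal sketch, with made-up names and the current builtin spelling (pre-0.12 Zig wrote `@fieldParentPtr(Counter, "iface", iface)`):

```zig
const std = @import("std");

// The interface: a struct of function pointers plus convenience
// wrappers, so callers can write x.next() rather than x.call("next").
const Iface = struct {
    nextFn: *const fn (*Iface) usize,

    pub fn next(self: *Iface) usize {
        return self.nextFn(self);
    }
};

// A concrete implementation embeds the interface as a field and
// recovers its own pointer with @fieldParentPtr.
const Counter = struct {
    count: usize = 0,
    iface: Iface = .{ .nextFn = next },

    fn next(iface: *Iface) usize {
        const self: *Counter = @fieldParentPtr("iface", iface);
        self.count += 1;
        return self.count;
    }
};

test "interface dispatches to the implementation" {
    var c = Counter{};
    try std.testing.expectEqual(@as(usize, 1), c.iface.next());
    try std.testing.expectEqual(@as(usize, 2), c.iface.next());
}
```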
<fengb> Send him more zero cost abstractions
xackus has quit [Quit: Leaving]
xackus has joined #zig
<earnestly> zero context abstractions
<vesim> Hi, is there a way to iterate over struct fields at runtime?
<vesim> https://gist.github.com/vesim987/e91f4945ab989b3aeeb32fd30e57eb04 i have something like that but it is fialing to compile
<vesim> failing*
<marler8997> ikskuh, here's what I was saying: https://gist.github.com/marler8997/e61ec001df07b2bb38db44ffe91cecc2
<vesim> because it is trying to execute that std.debug.print at compile-time
<novaskell> `inline for`?
joaoh82 has quit [Remote host closed the connection]
<vesim> novaskell: doesn't help :(
<novaskell> ah, you're returning type from create which requires comptime
WilhelmVonWeiner has quit [Read error: Connection reset by peer]
bgiannan has quit [Read error: Connection reset by peer]
<novaskell> in the return signature, change from type to Self then inline for(std.meta.fields(T)) |field| std.log.debug("{}", .{field.name}); will work
ifreund has quit [Ping timeout: 264 seconds]
TheLemonMan has quit [Quit: "It's now safe to turn off your computer."]
<vesim> novaskell: thanks
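The fix novaskell describes, as a self-contained sketch: `inline for` unrolls the loop over the comptime-known field list, while each iteration's body runs at runtime with runtime values.

```zig
const std = @import("std");

// Print every field of any struct value. The field list is comptime
// data, so the loop must be `inline for`; the values are runtime data.
fn dumpFields(value: anytype) void {
    inline for (std.meta.fields(@TypeOf(value))) |field| {
        std.debug.print("{s} = {any}\n", .{ field.name, @field(value, field.name) });
    }
}

test "iterate struct fields at runtime" {
    dumpFields(.{ .x = @as(u32, 1), .y = true });
}
```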
<vesim> so, another question, is there any way to generate functions at compile-time? I want to generate getters and setters for fields in the underlying struct.
<marler8997> vesim, code generation
<novaskell> you can use comptime tags to select implementations
<novaskell> vesim: see register, find, remove in https://0x0.st/i7lC.zig
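As marler8997 says, comptime cannot mint new named functions, but a generic getter/setter over `@field` covers much of what generated accessors would do. A sketch (all names here are made up for illustration):

```zig
const std = @import("std");

// Wrap any struct type with generic get/set accessors. The field name
// is a comptime string, so @field resolves it statically.
fn Accessors(comptime T: type) type {
    return struct {
        inner: T,

        const Self = @This();

        // The @TypeOf(@field(...)) trick recovers the field's type
        // without evaluating anything at runtime.
        pub fn get(self: Self, comptime name: []const u8) @TypeOf(@field(@as(T, undefined), name)) {
            return @field(self.inner, name);
        }

        pub fn set(self: *Self, comptime name: []const u8, value: anytype) void {
            @field(self.inner, name) = value;
        }
    };
}

test "generic getters and setters" {
    const P = struct { x: i32 = 0, y: i32 = 0 };
    var a = Accessors(P){ .inner = .{} };
    a.set("x", @as(i32, 7));
    try std.testing.expectEqual(@as(i32, 7), a.get("x"));
}
```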
gpanders has quit [Quit: ZNC - https://znc.in]
ifreund has joined #zig
tdeo has quit [Read error: Connection reset by peer]
tdeo has joined #zig
layneson has joined #zig
<vesim> novaskell: damn
marnix has quit [Ping timeout: 272 seconds]
ericonr has left #zig ["WeeChat 3.0"]
marnix has joined #zig
Techcable has quit [Quit: ZNC - http://znc.in]
Techcable has joined #zig
marnix has quit [Ping timeout: 240 seconds]
layneson has quit [Ping timeout: 240 seconds]
hlolli__ has joined #zig
hlolli_ has quit [Ping timeout: 272 seconds]
sord937 has quit [Quit: sord937]
wilsonk has joined #zig
Akuli has quit [Quit: Leaving]
<dch> what happens to the zig master tarballs over time? like the nice https://ziglang.org/builds/zig-0.7.0+d4c167f3c.tar.xz one that dropped today?
<dch> do they stay around? because its an S3 bucket, I can't browse the older versions
<dch> the index.json version of the site leads me to suspect the older ones aren't kept
<andrewrk> dch, I manually delete them after they get about 2 months old
<andrewrk> to save s3 costs
<dch> aah cool, so I will stash mine, and update regularly :-)
* dch sleeps with the fishes
donniewest has quit [Quit: WeeChat 3.0]
mokafolio has quit [Quit: Bye Bye!]
mokafolio has joined #zig
xackus has quit [Ping timeout: 240 seconds]
PC980177 has quit [Ping timeout: 272 seconds]
<marler8997> dch checkout zigup if you want a tool to manage them