<ikskuh>
this isn't really … viable for the target platform :D
<pixelherodev>
Huh
<pixelherodev>
Maybe implement a GC / runtime that runs the generated native code?
<ikskuh>
but i can do a semantic reduction, remove all types and only allow a single type: "u16"
<pixelherodev>
I joked about writing a kernel-level full-memory GC a while back
<pixelherodev>
For SPU II, that might actually be feasible
<ikskuh>
where 0[x] is memory access :D
<ikskuh>
and x[y] is pointer offsetting
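A sketch of how those two forms could lower onto a flat 16-bit address space (the `mem` array and the helper names here are invented for illustration, not actual LoLa codegen):

```zig
// Hypothetical lowering for a u16-only language on a 64K address space.
var mem: [65536]u16 = undefined;

// 0[x] is absolute memory access: index the address space at x.
fn deref(x: u16) u16 {
    return mem[x];
}

// x[y] is pointer offsetting: a[b] just means mem[a + b],
// with wrap-around addition on the 16-bit address.
fn index(x: u16, y: u16) u16 {
    return mem[x +% y];
}
```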
<pixelherodev>
There a channel for this?
<ikskuh>
for what? i have created #lola-lang here :D
<ikskuh>
if you mean this
<pixelherodev>
Either there or #spu-mark-ii is probably better for this
<ikskuh>
yeah
<ikskuh>
if someone wants to follow: join us!
<ifreund>
what's the overhead of using async with the std event loop in single-threaded mode?
<ifreund>
is all the synchronization code disabled/optimized away?
<cren>
hey all!
<cren>
If I have a struct that represents a connection to a server...
<cren>
via a socket
<cren>
what should be in the struct?
<cren>
Do I even need the struct?
<cren>
I suppose I could put the hostname of the server and things
<ifreund>
you should only put what you need
<ifreund>
which may be nothing more than the fd of the socket
<cren>
ifreund: yeah that seems right, thanks
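Something like this is about as minimal as it gets; a sketch assuming a recent std.net where `tcpConnectToHost` returns a `Stream` (the `Connection` type and its methods are invented for illustration):

```zig
const std = @import("std");

// A connection type that stores nothing beyond the socket itself.
const Connection = struct {
    stream: std.net.Stream,

    pub fn open(allocator: std.mem.Allocator, host: []const u8, port: u16) !Connection {
        return .{ .stream = try std.net.tcpConnectToHost(allocator, host, port) };
    }

    pub fn close(self: Connection) void {
        self.stream.close();
    }
};
```

The hostname only needs to live in the struct if something later wants to reconnect or log it.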
<pixelherodev>
ifreund: yes
<pixelherodev>
with --single-threaded, there should be zero overhead to async IIRC
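The mechanism, roughly: std can branch on the comptime `single_threaded` flag, so the locking path disappears from single-threaded builds. A hand-wavy sketch of the pattern, not the actual std implementation:

```zig
const std = @import("std");
const builtin = @import("builtin");

// Pick a no-op lock at comptime when the build is single-threaded,
// so acquiring and releasing it compiles to nothing.
const Lock = if (builtin.single_threaded)
    struct {
        pub fn lock(self: *@This()) void {
            _ = self;
        }
        pub fn unlock(self: *@This()) void {
            _ = self;
        }
    }
else
    std.Thread.Mutex;
```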
<ifreund>
nice
<KoljaKube>
If I have the following in C: `typedef struct foo { union { int a; int b; }; } foo;`, does that basically mean that instances of foo have members a and b sharing the same space?
<KoljaKube>
Haven't encountered unnamed unions in this fashion before
<fengb>
Yep
<fengb>
We have to name every field, so this style doesn't work in Zig
<KoljaKube>
That would have been my next question, thanks :D
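For reference, the nearest Zig shape puts the union behind a named field, so the access becomes `foo.data.a` instead of C's `foo.a` (names invented for illustration):

```zig
// Rough Zig equivalent of the C struct-with-anonymous-union.
const Foo = extern struct {
    data: extern union {
        a: c_int,
        b: c_int,
    },
};
```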
<Barabas>
Hello
<Barabas>
Can someone help me with a networking question I have?
<Barabas>
I basically did on socket a: bind, listen. Then on socket b: connect. And then on socket a: accept. (all in one simple test app) But it hangs forever on the accept. Even if I connect from a console to the port, it never returns from the accept call. (I'm on windows.)
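For comparison, that sequence as a minimal loopback test; a sketch against `std.net.StreamServer` as it looks in recent Zig, so the details may differ by version and platform:

```zig
const std = @import("std");

test "loopback accept" {
    const addr = try std.net.Address.parseIp("127.0.0.1", 4567);

    // Socket a: bind + listen.
    var server = std.net.StreamServer.init(.{ .reuse_address = true });
    defer server.deinit();
    try server.listen(addr);

    // Socket b: connect. The handshake completes in the kernel's backlog,
    // so the accept below should return immediately.
    const client = try std.net.tcpConnectToAddress(addr);
    defer client.close();

    // Socket a: accept.
    const conn = try server.accept();
    defer conn.stream.close();
}
```

If even this hangs on accept on Windows, that points at the std wrappers rather than the calling code.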
<DarkUranium>
alexnask[m], this might be of interest to you for Zig.
<DarkUranium>
I just talked to the guy who implemented the lexer & parser for TypeScript and Roslyn (the compilers in that video I mentioned!).
<DarkUranium>
We're not done yet, but I got a *LOT* of insight into LSP-related stuff.
<companion_cube>
when it comes to error-recovering parsers?
<DarkUranium>
Haven't gotten around to that yet.
<DarkUranium>
The conversation was about partial parsing, for situations where you have a language server, and want fast parses.
<DarkUranium>
E.g. if user inserts "foo" at position 500.
<companion_cube>
anyway, if your parser and typechecker are slow, you need complicated incremental techniques; if they're really fast, maybe you can just reparse the whole buffer
<DarkUranium>
For LSP stuff? I think reparsing gets too expensive *very* fast.
<DarkUranium>
It's not that uncommon to see 50k LOC for bigger projects either (also amalgamations, but those might be relatively worthless or even impossible depending on language)
<fengb>
It's 3.5x ir.cpp D:
<DarkUranium>
That's less than 135k, but still.
<DarkUranium>
My point is: this needs to handle a random insertion at position `3` gracefully, without a full re-lex or reparse (or re-semantic-analysis)
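For the lexing half, the standard trick is cheap: only tokens overlapping the edit can change, so relexing can start at the first affected token and stop once the new stream realigns with the old one. A sketch with an invented token layout:

```zig
// Hypothetical incremental relex: after an edit of `delta` bytes at byte
// `pos`, tokens before the edit are untouched and tokens after it usually
// just shift by `delta`.
const Token = struct {
    start: u32,
    len: u32,
};

fn firstAffected(tokens: []const Token, pos: u32) usize {
    // A binary search would do; a linear scan keeps the sketch short.
    for (tokens, 0..) |t, i| {
        if (t.start + t.len >= pos) return i;
    }
    return tokens.len;
}

// Relex from tokens[firstAffected(tokens, pos)].start, emitting fresh
// tokens until one starts exactly where an old token (shifted by delta)
// starts; from there on, the old token stream can be reused as-is.
```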
<companion_cube>
question is, do you expect to open these in an editor and code on them :D
<companion_cube>
a full re-lex should still be super, super fast
<companion_cube>
checking is more expensive for sure
<companion_cube>
(lexing is linear and normally very machine friendly)
<DarkUranium>
companion_cube, that file was written manually.
<DarkUranium>
I asked.
<DarkUranium>
Which means that: yes, people have been doing exactly that.
<companion_cube>
:DDDD over time, I guess
<companion_cube>
that's still kind of terrible, but well
<DarkUranium>
I've seen 50k LOC C files that were not amalgamations (sorry, I forget what software it was)
<companion_cube>
that's frightening, but indeed you need incrementality there
<Barabas>
fengb windows networking doesn't work in zig std basically. I was trying to fix it along the way, but eh... apparently I'm doing something wrong. I managed to rewrite my test with more plain api calls and it works, so I guess I messed something up somewhere.
<DarkUranium>
companion_cube, anyways, you also need incrementality in order to inform semantic analysis.
<DarkUranium>
You can't just do all of it on an entire project (possibly millions of LOC)
<DarkUranium>
But you can't do that incrementally if you don't know what syntax tree nodes are unchanged!
<Barabas>
Just emit a warning: "File too large to parse, please refactor your code" xD
<companion_cube>
DarkUranium: you could have a non incremental parser and an incremental checker
<companion_cube>
it's not incompatible
<DarkUranium>
companion_cube, but how will you know what parsed nodes are unchanged?
<DarkUranium>
(so that you know what you *don't* need to recheck)
<DarkUranium>
But to some extent, sure.
<companion_cube>
I'd say some form of hashconsing
<companion_cube>
(in any case, you want to reuse information from the previous version of the buffer; you can do that by either reparsing only part of the input, or by mapping what you parsed and didn't change to the previous version)
<companion_cube>
(just like, say, git does)
<DarkUranium>
They're the same thing, no?
<DarkUranium>
I mean, if you reparse only a part, then obviously you have to somehow map *the rest*.
<companion_cube>
one involves complicated parsing techniques, the other involves a hashmap :D
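The hashmap version, roughly: key each subtree by a structural hash, so after a plain non-incremental reparse, any node whose hash was seen before can reuse the previous analysis results. A sketch with an invented node layout:

```zig
const std = @import("std");

const Node = struct {
    tag: u8,
    children: []const *const Node,
};

// Structural hash: identical subtrees hash identically regardless of
// where they sit in the file, which is what makes the reuse map work.
fn subtreeHash(node: *const Node) u64 {
    var h = std.hash.Wyhash.init(0);
    h.update(std.mem.asBytes(&node.tag));
    for (node.children) |child| {
        const child_hash = subtreeHash(child);
        h.update(std.mem.asBytes(&child_hash));
    }
    return h.final();
}
```

A map from that hash to the old node's semantic results then answers "did this subtree change?" without any incremental parsing machinery.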
<pixelherodev>
I think that this discussion is missing the point to a large degree
<companion_cube>
possibly :D
<pixelherodev>
It doesn't really matter if incremental lexing / parsing is *necessary*
<pixelherodev>
The real question is, is there any reason *not* to support it?
<pixelherodev>
Even on small files, a minor speedup is still very nice
<Barabas>
It's complicated?
<pixelherodev>
I was about to say that, yeah :P
<DarkUranium>
^ that's a reason. Not reason enough IMO, but *a* reason.
<pixelherodev>
The *only* tradeoff is complexity
<DarkUranium>
The core issue is that you have 100ms to do lexing, parsing (of the whole file, because you can't really avoid that there), *and* semantic analysis.
<pixelherodev>
Complexity *can* be a good reason, but in this case I don't think it is
<DarkUranium>
*and* display the result to the user, in GUI.
<Barabas>
Could be a very good reason. The more complicated it is, the more bugs it will have and such.
<pixelherodev>
The complexity of incremental lexing isn't that bad
<pixelherodev>
Barabas: of course!
<pixelherodev>
But this *isn't* that complex
<fengb>
Some of us have been using zig-network on Windows. I don't have a Windows box so I don't know for sure
<pixelherodev>
It's more complex than *not* doing it
<pixelherodev>
But it's still pretty simple
<Barabas>
fengb yeah I could take a look at that. For now I'm just playing around :)
<companion_cube>
DarkUranium: don't forget the LSP response in the 100ms :D
<DarkUranium>
companion_cube, ah, true.
<companion_cube>
including some jsonrpc, because… ugh
<DarkUranium>
Yeah :|
<companion_cube>
(why the fuck did they pick jsonrpc)
<DarkUranium>
That's why Coyote won't have LSP support ... but will have an API that does the same.
<Barabas>
I think I actually hit a compiler bug... but not 100% sure yet.
<companion_cube>
(to transport *buffers*)
<DarkUranium>
LSP support will exist, but as a separate/external project.
<DarkUranium>
(official, but external)
<DarkUranium>
And that'll be a wrapper. Basically a jsonrpc host that forwards calls, etc.
<DarkUranium>
(and deals with LSP-related handshake stuff, like its weird way of exiting)
<DarkUranium>
Barabas, the reason why I'm doing this this early is two-fold:
<pixelherodev>
Hey, *I'm* the coypiler guy! ;)
<DarkUranium>
lol
<DarkUranium>
1) It's *much* less of a problem if something as important as this is too fast, rather than too slow.
<pixelherodev>
That's why it's crashing now! ;)
<DarkUranium>
2) If it *is* too slow, you're fucked architecturally.