<simpson>
In the context of Cap'n Proto, what is an "embargo"? A Monte community member mentioned it but doesn't have docs for it, and I can't find anything about it.
<simpson>
The context was wanting to synchronize some promise-pipelined actions beyond what E-order requires, but without round-tripping. I wasn't sure if it was a thing, and they mentioned that Cap'n Proto does "embargoing".
<simpson>
Oh, okay. And, as usual, CapTP makes a bit more sense now. Excellent.
<simpson>
dwrensha: Thanks.
<kentonv>
simpson, FWIW, I don't think CapTP had embargoes. It used a more conservative model where it blocked new requests at the sender until it could prove pipelined requests had reached their destination.
<kentonv>
but I found that unsatisfying. :)
<simpson>
Dean apparently has a new challenge to E-order, where a roundtrip might be eliminated if only there were a slightly better delivery notification. I still don't understand the details and it's not clear whether it's worth fixing; both Kevin and I apparently feel that better-factored code wouldn't have the problem.
<kentonv>
embargoes are designed to avoid the round trip. I feel like you can't do much better.
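To make the embargo idea concrete, here is a rough sketch of the scenario it addresses, based on the exchange above. All names (`remote`, `getBar`, `pipelined`, `frob`) are hypothetical and do not come from a real Cap'n Proto binding; the comments walk through the ordering hazard and the two approaches kentonv contrasts.

```ts
// Hypothetical names throughout -- this is not a real Cap'n Proto binding API,
// just the shape of the ordering problem that embargoes solve.
declare const remote: { getBar(): { pipelined(): { frob(): void } } };

// Vat A holds a reference to an object that lives on vat B.
const barPromise = remote.getBar();   // call 1: travels A -> B
barPromise.pipelined().frob();        // call 2: pipelined on the eventual result, also sent A -> B

// Suppose B answers that getBar() actually resolved to a capability hosted back on A.
// If A then started invoking that object directly, a later call
//
//   barPromise.pipelined().frob();   // call 3
//
// could arrive before call 2 finished its A -> B -> A trip, violating E-order.
//
// The conservative CapTP-style approach kentonv describes: hold new requests at the
// sender until it can prove the earlier pipelined call reached B.
//
// Cap'n Proto's approach: keep routing call 3 through B for the moment, but put the
// newly revealed local capability under an embargo; a Disembargo message is bounced
// off B behind the pipelined calls, and only once it comes back does A begin calling
// the local object directly. E-order is preserved without an extra
// application-visible round trip.
```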
<hannes[m]>
Printing an Etherpad only prints text on the first page; can someone replicate this?
<TimMc>
hannes[m]: I can reproduce. 4 pages, only first page had text; all pages had the author-colors strip running down the left side.
<hannes[m]>
same
<TimMc>
I wasn't able to find a non-Sandstorm etherpad with recent software to compare against, though.
<Zarutian_PI>
is it okay for comm systems to forward those messages but not deliver/dispatch them, even if they're embargoed?
<ocdtr_web>
kentonv: Holiday for you? :P
<kentonv>
yeah
<ocdtr_web>
Not for me. :( At the office. Practically alone, since I'm at a government building, and everyone else is off.
<kentonv>
why not for you?
<ocdtr_web>
I work at a private company; they have like six paid holidays a year. Why burn vacation time when I can work here in the quiet? :P
<kentonv>
I'm so pleased with proxy.js being gone but I'm probably the only one who really cares...
<ocdtr_web>
I got that general impression from the existence of a RIP post for it.
<dwrensha>
does this get us closer to being able to use stock node.js?
<kentonv>
dwrensha, you mean without the V8 patch? No -- in fact, Cloudflare Workers needs that same V8 patch. I honestly don't understand how anyone lives without it.
<kentonv>
well, ok, I take that back. CF Workers uses threads and Meteor uses fibers, both of which trigger the bad behavior.
<kentonv>
most node users don't use either
<kentonv>
but Meteor still monkey-patches Promise, which means all our capnp promise-based code is spawning fibers all over the place
<kentonv>
so I'd rather keep the patch even though it's not needed as intensely as it used to be
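As a rough illustration of the Promise monkey-patching kentonv describes (not Meteor's actual implementation), the sketch below wraps the global Promise's .then so that every continuation runs inside a fiber; the only real API assumed is Fiber(fn).run() from the `fibers` package.

```ts
// Illustration of the pattern only -- not Meteor's actual implementation. The one
// real API assumed here is Fiber(fn).run() from the `fibers` package.
const Fiber = require('fibers');

const origThen = Promise.prototype.then;

(Promise.prototype as any).then = function (
  this: Promise<any>,
  onFulfilled?: any,
  onRejected?: any
) {
  const inFiber = (fn: any) =>
    fn &&
    function (arg: any) {
      let result: any;
      // Every continuation is run inside a freshly spawned fiber; run() executes
      // synchronously to completion here because the callback doesn't yield.
      Fiber(() => { result = fn(arg); }).run();
      return result;
    };
  return origThen.call(this, inFiber(onFulfilled), inFiber(onRejected));
};

// With something like this installed globally, every promise chain in the process --
// including Cap'n Proto's promise-based RPC glue -- ends up spawning fibers
// "all over the place", which is why the V8 patch is still worth keeping.
```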
<kentonv>
...... uh wow. I just noticed that our database storing stats from self-hosters who have opted in hasn't received an update since June. And has seemingly been using 100% CPU on Alpha ever since.