kentonv changed the topic of #sandstorm to: Welcome to #sandstorm: home of all things sandstorm.io. Say hi! | Have a question but no one is here? Try asking in the discussion group: https://groups.google.com/group/sandstorm-dev
nicoo has quit [Remote host closed the connection]
nicoo has joined #sandstorm
<isd> kentonv: Thinking about the flow control stuff that was added recently. Would it not work to just have the call "return" when the call message is written to the socket, rather than querying the buffer size explicitly or trying to compute bbr ourselves? It seems like this would fairly straightforwardly piggy-back on top of the control flow the OS is already doing, what am I missing?
<isd> s/control flow/flow control/
<kentonv> isd, the problem with that is that if the remote application can't actually process the input at the connection's maximum bandwidth, it'll start buffering calls there.
<kentonv> the receiver could stop processing Cap'n Proto messages altogether in order to exert backpressure, but then other RPCs over the same connection are stalled
<kentonv> TBH I frequently ask myself the same question and have to re-remember why that doesn't work. It seems so simple... :)
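For context: the recently added flow control being discussed is Cap'n Proto's streaming support, where declaring a method with `-> stream` in a schema lets the RPC implementation, rather than the application, decide how many calls to keep in flight. A schema fragment as an illustration (interface and method names are made up; a real schema file would also need a file ID):

```capnp
# Illustrative streaming interface; "-> stream" is the real Cap'n Proto
# syntax that turns flow control over to the RPC system.
interface ByteSink {
  write @0 (chunk :Data) -> stream;
  # The RPC system decides how many write()s may be outstanding at once,
  # exerting backpressure on the sender instead of buffering without bound.
  done @1 ();
  # Called after the last write to observe any final error.
}
```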
<isd> I guess you would have to keep buffering on the receiver bounded, but tbh we probably want that anyway to avoid (intentional) DoS.
<isd> It seems like there's probably no way for the receiver to enforce fairness across objects, since it has to pull out call messages in-order, before it knows who they're for.
<kentonv> the RPC system does actually support setting a limit on the receiving side on "total call bytes in flight", and the Sandstorm supervisor uses it. But it's a blunt tool.
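The receiving-side limit mentioned here is RpcSystem::setFlowLimit() in the C++ implementation. A minimal sketch of wiring it up on a single connection, assuming the standard two-party setup; the function name and limit value are illustrative only:

```c++
#include <capnp/rpc-twoparty.h>
#include <kj/async-io.h>

// Serve one connection with a cap on buffered incoming calls (sketch).
void serveConnection(kj::AsyncIoStream& conn,
                     capnp::Capability::Client bootstrap,
                     kj::WaitScope& waitScope) {
  capnp::TwoPartyVatNetwork network(conn, capnp::rpc::twoparty::Side::SERVER);
  auto rpcSystem = capnp::makeRpcServer(network, kj::mv(bootstrap));

  // Bound the total size of call messages this vat will hold in flight
  // before it stops reading from the connection (see the setFlowLimit()
  // docs in capnp/rpc.h for exact units/semantics). This is the "blunt
  // tool": once the limit is hit, every RPC on the connection stalls,
  // not just calls to the slow object.
  rpcSystem.setFlowLimit(1024 * 1024);  // example value only

  network.onDisconnect().wait(waitScope);
}
```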
<isd> Perhaps you could implement prioritized queuing of messages on the sender side, with the scheduler favoring messages being sent to objects which have fewer pending calls (or fewer pending bytes worth of calls).
<isd> The pony runtime does something like this.
<kentonv> that might be worth doing separately, but we still can't rely entirely on the connection's own flow control
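Cap'n Proto doesn't do this today; as a rough sketch of the sender-side idea isd is floating, here is a toy scheduler that dequeues the next call for whichever target object has the fewest bytes of calls still queued locally (all names hypothetical):

```c++
#include <cstddef>
#include <cstdint>
#include <deque>
#include <map>
#include <vector>

// One queued outgoing call, tagged with the remote object it targets.
struct PendingCall {
  uint64_t targetId;          // export/import ID of the callee
  std::vector<char> message;  // serialized call message
};

class FairCallScheduler {
public:
  void enqueue(PendingCall call) {
    queues[call.targetId].push_back(std::move(call));
  }

  // Pick the next call from the target with the fewest queued bytes,
  // so one flooded or slow object can't starve the others.
  bool popNext(PendingCall& out) {
    auto best = queues.end();
    size_t bestBytes = SIZE_MAX;
    for (auto it = queues.begin(); it != queues.end(); ++it) {
      if (it->second.empty()) continue;
      size_t bytes = 0;
      for (auto& c : it->second) bytes += c.message.size();
      if (bytes < bestBytes) { bestBytes = bytes; best = it; }
    }
    if (best == queues.end()) return false;
    out = std::move(best->second.front());
    best->second.pop_front();
    return true;
  }

private:
  std::map<uint64_t, std::deque<PendingCall>> queues;
};
```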
<isd> I guess I'm not really seeing how getting bbr (or the buffer size) actually helps you. Bounding the buffering on the receiver's end seems like it's necessary for security, and if this results in an under-utilized link because the application can't process quickly enough... I'm not actually sure what problem you're trying to solve in that case?
<abliss> it sounds like the issue is multiplexing two streams, say one cpu-limited and one bandwidth-limited, on the same TCP connection.
<abliss> (presumably because either the number of receiving TCP ports is scarce (but then why not scale out the server?) or to avoid the start-up overhead for a new connection (but is flow control really a big deal for short-lived connections?))
<isd> I mean, we're talking about capnp, so having to share a single stream is a given.
<abliss> oh.... how come?
<kentonv> it's not necessarily a given, but in order to make good decisions about when _not_ to reuse the existing connection you'd probably need hints from the app
<kentonv> and connection management would add a ton of complexity of its own... especially in, like, the sandbox use case, where the only connection is a single unix socket to the supervisor, and everything proxies through there
<kentonv> sure you could add a mechanism by which the sandbox can request additional sockets or whatever
<kentonv> equality would become a big pain too. How can you tell if two capabilities you received on separate connections point to the same object?
<kentonv> and path-shortening
<kentonv> yeah generally there's a lot of aspects of Cap'n Proto's design and common usage that become a big pain if you try to introduce the ability to move certain capabilities to a new connection
<abliss> (how do you tell that on a single connection?)
<kentonv> if the same capability is sent twice over the same connection, it uses the same export table slot
<kentonv> think of it like if open()ing the same file twice returned the same file descriptor number (but with reference counting, so then you needed to close() it twice before it actually closed)
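A toy illustration of that analogy (not the actual rpc.c++ data structures): an export table that reuses one slot per distinct local object and reference-counts it, which is what lets the peer compare capabilities for equality by export ID.

```c++
#include <cstdint>
#include <unordered_map>
#include <vector>

// Toy export table: one slot per distinct local object, refcounted.
class ExportTable {
public:
  // Returns the export ID for `object`, reusing the existing slot (and
  // bumping its refcount) if this object was already exported.
  uint32_t exportCap(void* object) {
    auto it = byObject.find(object);
    if (it != byObject.end()) {
      slots[it->second].refcount++;
      return it->second;
    }
    uint32_t id = static_cast<uint32_t>(slots.size());
    slots.push_back({object, 1});
    byObject[object] = id;
    return id;
  }

  // Peer releases one reference; the slot is freed only when the count
  // drops to zero, like close()ing a dup'd file descriptor.
  void release(uint32_t id) {
    if (--slots[id].refcount == 0) {
      byObject.erase(slots[id].object);
      slots[id].object = nullptr;
    }
  }

private:
  struct Slot { void* object; uint32_t refcount; };
  std::vector<Slot> slots;
  std::unordered_map<void*, uint32_t> byObject;
};
```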
<abliss> i see, the connection is stateful, and synchronizing it across two tcp streams might be tough?
<isd> Yeah, that sounds like a nightmare.
<isd> (rpc.capnp is a good read if you want to understand the protocol at a deeper level)
<kentonv> right... Cap'n Proto is all about enabling stateful protocols in a sane way
<simpson> abliss: And one has to keep in mind that TCP itself is multiplexed on a single cable/channel at the hardware level, so that even if multiple connections solved one bottleneck, another physical bottleneck is right around the corner.
<simpson> It's irritating, but this problem is inherent to bandwidth over time.
<abliss> simpson: sure, good point. i was mostly musing about why TCP's built-in per-connection flow-control wasn't a good solution to capnp flow control
<simpson> Sure, makes sense.
frigginglorious has joined #sandstorm
frigginglorious has quit [Ping timeout: 265 seconds]
_whitelogger has joined #sandstorm
infogulch has quit [Read error: Connection reset by peer]
frigginglorious has joined #sandstorm
avffa has joined #sandstorm
<avffa> Hey all, I'm trying to set up Sandstorm on a subdomain using an LE wildcard cert. I've got just about everything going, but I can't seem to hand the TLS connection to Sandstorm from Nginx
<avffa> getting a 502 error, was wondering if someone had experience with this/was able to help
<JacobWeisz[m]> If you want to share your Sandstorm config (sanitized for info you don't want public) we can look.
<avffa> SERVER_USER=sandstorm
<avffa> PORT=6080
<avffa> MONGO_PORT=6081
<avffa> BIND_IP=127.0.0.1
<avffa> BASE_URL=https://sandstorm.domain.tld
<avffa> WILDCARD_HOST=*sandstorm.domain.tld
<avffa> UPDATE_CHANNEL=dev
<avffa> ALLOW_DEV_ACCOUNTS=false
<avffa> SMTP_LISTEN_PORT=30025
<avffa> SANDCATS_BASE_DOMAIN=sandcats.io
<avffa> HTTPS_PORT=6443
<avffa> I'll post anything more to pastebin to make it easier, but that's what I have for Sandstorm. I was originally going to use non-standard ports for the Nginx reverse proxy but decided against it, so it's proxypassing 443 -> 127.0.0.1:6443
<JacobWeisz[m]> So if you're putting Sandstorm behind a reverse proxy, you'd be removing HTTPS_PORT and SANDCATS_BASE_DOMAIN. I don't think it'll work with them in place.
<JacobWeisz[m]> And you'd hand it to the HTTP port on localhost.
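Concretely, an untested sketch restating the advice above, using the port numbers from avffa's paste:

```
# sandstorm.conf: remove these two lines when terminating TLS in your own
# reverse proxy instead of letting Sandstorm manage certificates:
#   SANDCATS_BASE_DOMAIN=sandcats.io
#   HTTPS_PORT=6443

# In the Nginx server block listening on 443, proxy to the plain-HTTP
# PORT instead, e.g.:
#   proxy_pass http://127.0.0.1:6080;
```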
<avffa> gotcha
<avffa> that makes a bit of sense
<JacobWeisz[m]> Built-in support for HTTPS cert renewals with LE is being worked on, but I can't give you a date when that'll be available.
<avffa> understandable. For wildcards it barely works with certbot as is, especially if you have an unsupported registrar
<avffa> but at least LE supports wildcard now
<nullbpm> Yeah, I'm super happy I found a certbot plugin for my domain registrar
<JacobWeisz[m]> Yeah, there's a bunch of DNS provider specific stuff, Kenton is looking at a library to possibly manage some of that.
<avffa> also that suggestion worked perfectly, thanks much
frigginglorious1 has joined #sandstorm
frigginglorious has quit [Ping timeout: 256 seconds]
frigginglorious1 is now known as frigginglorious
<JacobWeisz[m]> No prob :)
<JacobWeisz[m]> As a note, our Nginx sample config is super out of date, we have an open PR to update it, but it hasn't been adequately reviewed, I think.
<JacobWeisz[m]> So if you used our sample you may wish to look at some of the TLS config updates in the PR. :)
avffa has quit [Quit: Connection closed]
frigginglorious has quit [Read error: Connection reset by peer]
frigginglorious has joined #sandstorm
frigginglorious has quit [Ping timeout: 240 seconds]
frigginglorious has joined #sandstorm
xet7 has quit [Quit: Leaving]