kentonv changed the topic of #sandstorm to: Welcome to #sandstorm: home of all things sandstorm.io. Say hi! | Have a question but no one is here? Try asking in the discussion group: https://groups.google.com/group/sandstorm-dev
_whitelogger has joined #sandstorm
<JacobWeisz[m]>
Some of the docs rewrite stuff that scares me is just straight-up architectural... Sandcats has both an HTTPS page and a Dynamic DNS page. Both are potentially useful and valuable info, but realistically a lot of that should be folded into the general HTTPS docs...
<JacobWeisz[m]>
Sandcats is now only really distinct in that it provides dynamic DNS.
<JacobWeisz[m]>
I kinda want to eliminate the Sandcats HTTPS page entirely under that concept.
kentonv has quit [Quit: Leaving]
frigginglorious has quit [Ping timeout: 256 seconds]
_whitelogger has joined #sandstorm
_whitelogger has joined #sandstorm
frigginglorious has joined #sandstorm
<isd>
So I started fussing with porting the WIP filesystem LD_PRELOAD library I'd started over to Rust. I think we could sensibly include logic to handle the pty stuff in the same library.
frigginglorious has quit [Remote host closed the connection]
frigginglorious has joined #sandstorm
frigginglorious has quit [Ping timeout: 265 seconds]
frigginglorious has joined #sandstorm
frigginglorious has quit [Ping timeout: 260 seconds]
frigginglorious has joined #sandstorm
wings has joined #sandstorm
frigginglorious has quit [Ping timeout: 265 seconds]
<vertigo_38>
Hi again! I'm just inviting friends to test my Sandstorm installation -- which brings me to a point, since this is completely different from a 'normal' UI (I'm happy to be proven wrong ;)). There are 2 things I'd like to do. The first is some kind of space quota (I see that Sandstorm's concept alone drastically reduces storage needs, but I'd at least like to control how much my people can put into Davros, e.g.). The second is that
<vertigo_38>
authentication attempts don't seem to be logged anywhere -- neither in nginx nor in Sandstorm itself. Authentication itself is done via a docker-ldap-container on the Sandstorm machine and works nicely in every other regard.
<vertigo_38>
(Sandstorm login reacts to wrong credentials with 'wrong user/password' at the login interface -- that's what I'd like to track with fail2ban)
<vertigo_38>
(more exactly -- if the username I try does not exist, I get 'User not found in LDAP'; if I enter a proper username but the wrong password, I get 'invalid credentials')
<xet7>
vertigo 38: Sandstorm shows space usage. If you limit, for example, a Wekan grain's space usage and Wekan's disk space gets full, it will corrupt Wekan's MongoDB database.
<xet7>
vertigo 38: You could have some disk space monitoring script that checks there is enough total free disk space
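(For illustration, a minimal C sketch of such a check -- not an existing Sandstorm tool; the /opt/sandstorm path and the 5 GiB threshold are assumptions to adjust:)

#include <stdio.h>
#include <sys/statvfs.h>

int main(void) {
    struct statvfs st;
    if (statvfs("/opt/sandstorm", &st) != 0) {   /* assumed Sandstorm data location */
        perror("statvfs");
        return 2;
    }
    unsigned long long free_bytes =
        (unsigned long long)st.f_bavail * st.f_frsize;
    printf("free: %llu MiB\n", free_bytes / (1024 * 1024));
    /* Non-zero exit when below the safety margin, so cron/monitoring can alert. */
    return free_bytes < 5ULL * 1024 * 1024 * 1024;
}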
<xet7>
vertigo 38: So your docker-ldap-container does not log login attempts? You could check with "docker logs CONTAINER-ID"
<xet7>
vertigo 38: And check your docker-ldap-container software's docs for how to add logging
<xet7>
vertigo 38: Also check /var/log/syslog
<xet7>
vertigo 38: for Davros space usage, that would probably be a feature request in the Davros issue tracker
<xet7>
vertigo 38: usually Davros holds static files, not databases that can get corrupted
<xet7>
maybe one of those is most appropriate for this
<xet7>
vertigo 38: You can read those issues, and ask there about status
<xet7>
on the one issue that is most appropriate for this
<vertigo_38>
xet7: thank you for the links & sorry for not digging enough!
<xet7>
No problem, thanks for asking :D
<xet7>
It's not required to dig so much
<xet7>
Anyone is welcome to ask anything
<vertigo_38>
I think I'm going off the idea of running LDAP as a docker container -- if I tail -f its logs I can see which username somebody is trying to log in with, but I cannot link this directly to nginx's access log. I think at least in this regard my setup is quirky ;)
<JacobWeisz[m]>
vertigo_38 displaying storage usage of users to the admin is a pretty straightforward feature, one we've talked about a lot. I think it probably isn't far off.
<vertigo_38>
JacobWeisz[m]: thanks for the outlook!
<JacobWeisz[m]>
I think Sandstorm might try to enforce quotas set in LDAP, but I don't have experience with that.
<JacobWeisz[m]>
Sandstorm Oasis definitely had quota management but Blackrock uses a different storage backend than normal Sandstorm, so the functionality isn't 100% identical.
<vertigo_38>
Quota is not that dramatic for now: if I tell my friends not to upload their movies into Davros, I trust them not to, and if they do anyway, so be it. For now I'd mainly like to get fail2ban watching over the login interface so I can sleep better... If we implement Sandstorm in our lab, quota would be really nice (as it's most likely me watching over storage ;)).
<JacobWeisz[m]>
Yeah, I have some users on my Sandstorm instance and I just don't know whether they're using a bunch of storage or not.
<JacobWeisz[m]>
Nobody uses it as a major resource but me, but I dislike the current invisibility of that info.
<JacobWeisz[m]>
Generally, Sandstorm's position has been that it doesn't do logins. Especially if you're doing LDAP or the like, it's assumed your login provider should be managing account lockout or logging or whatever.
<JacobWeisz[m]>
Email login assumes you're managing the email accounts well: someone who can access the email can access Sandstorm. Google and GitHub both have features to show login failures, logged-in sessions, and such. LDAP and SAML are generally used with platforms that can do lockouts and logging and such.
<JacobWeisz[m]>
Sandstorm avoids writing a lot of account security features by only permitting login strategies that are capable of that externally. It's why it's never offered a straight username/password option.
<vertigo_38>
I like that approach, actually! I'm just hunting for the point where Sandstorm transmits the login info to my LDAP server on the Sandstorm host.
<vertigo_38>
And wondering whether I can find the attempting IP there, together with which username was tried...
wings has quit [Ping timeout: 260 seconds]
<vertigo_38>
A resource question -- do you think it's feasible to use Sandstorm as the user interface for <= 100 simultaneous users (students, teachers, staff)? We're currently thinking about setting up a FOSS VM server in our institute and discussing designs...
<abliss>
That's a great question and I would also love to know the answer. I wonder if kentonv has any insights he can share about the largest installs, what kind of hardware they used, and how well they performed?
<abliss>
My uninformed guess is that 100 simultaneous users would probably make you the largest install ever (or possibly 2nd largest behind alpha.sandstorm.io)
<vertigo_38>
abliss: you scare me
<vertigo_38>
:)
<kentonv>
alpha.sandstorm.io isn't actually used by many people. oet.sandcats.io might be the biggest self-hosted instance
<kentonv>
I believe Oasis got up to 300 concurrent users at times.
<kentonv>
of course, Oasis was using a "more scalable" architecture, but TBH it would have been fine on a beefy machine
<kentonv>
get a VM with like 128GB of RAM and you're probably good
<vertigo_38>
Thanks, those sound like sane numbers. I doubt we will have 100 concurrent users very often, but that's what we would have to be able to offer. I could imagine 70 students using Jupyter notebooks during lessons being the max load.
<JacobWeisz[m]>
It might not hurt to clock how much RAM Jupyter is using in a grain and do some napkin math.
<JacobWeisz[m]>
Obviously RAM usage of Sandstorm is heavily dependent on what apps people are running.
<vertigo_38>
How can I clock a single grain's RAM usage?
<abliss>
that's another great question. and the napkin math may get tricky because some of a grain's RSS should be shared with other grains of the same app.
<abliss>
Simplest might be to log into the server and run "free", then start the grain and run "free" again, then start a second copy and run "free" again. Maybe repeat up to n=5, then graph and try to extrapolate?
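(Illustrative napkin math only, with placeholder numbers: if an idle Jupyter grain turned out to cost roughly 300 MB of RSS, 70 concurrent grains would be about 21 GB, plus headroom for Sandstorm itself, MongoDB, and spikes while notebooks actually compute -- in the same ballpark as the "beefy machine" advice above. The per-grain figure is an assumption to be replaced by the measurement described here.)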
<vertigo_38>
I just tried to 'crash' the system with some python code I found on stackoverflow ~['A'*1024 for _ in xrange(0, 1024*1024*1024)]~ from within the Jupyter notebook, but did not succeed... As soon as the memory limit (including swap) on the VM is reached, the 'python kernel' dies. If that's the result of an overload I can live with it. Still, it feels a little on the edge.
<kentonv>
yes, when the system runs out of memory, the kernel chooses something to kill. If there's one process eating all the memory, it's probably going to kill that.
<abliss>
might also be fun to try to open an infinite number of filehandles to see what happens on the machine.
<kentonv>
you can probably DoS the machine if you try. Sandstorm doesn't do a whole lot to prevent this.
<isd>
...we should probably at least be creating cgroups for grains
<vertigo_38>
that forkbomb seems to work nicely. rebooting the machine now through my vps panel
<isd>
Yeah, cgroups might help with that, as it would allow the kernel to understand that the whole grain should be treated as a unit.
<kentonv>
as of Linux 4.6 we finally have cgroup namespaces so we may actually be able to use cgroups without being root now?
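(A rough sketch of what a per-grain cgroup could look like, purely illustrative and not Sandstorm code: memory.max and cgroup.procs are real cgroup v2 control files, but the /sys/fs/cgroup/sandstorm subtree, the 1 GiB cap, and the assumption that the subtree has already been delegated to the Sandstorm user are all made up.)

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

static int write_file(const char *path, const char *val) {
    FILE *f = fopen(path, "w");
    if (!f) return -1;
    int ok = fputs(val, f) >= 0;
    return (fclose(f) == 0 && ok) ? 0 : -1;
}

/* Hypothetical helper: confine one grain's process tree with a memory cap. */
int confine_grain(const char *grain_id, pid_t supervisor_pid) {
    char dir[256], path[320], buf[32];
    snprintf(dir, sizeof dir, "/sys/fs/cgroup/sandstorm/%s", grain_id);
    if (mkdir(dir, 0755) != 0) return -1;          /* needs a delegated subtree */

    snprintf(path, sizeof path, "%s/memory.max", dir);
    if (write_file(path, "1073741824\n") != 0) return -1;   /* 1 GiB, arbitrary */

    snprintf(path, sizeof path, "%s/cgroup.procs", dir);
    snprintf(buf, sizeof buf, "%d\n", (int)supervisor_pid);
    return write_file(path, buf);                  /* moves the process into it */
}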
vertigo_38 has quit [Ping timeout: 256 seconds]
vertigo_38 has joined #sandstorm
frigginglorious has joined #sandstorm
<abliss>
I'm continuing to read up on pseudoterminals. I still don't see why they are handled by the kernel. It seems like it's just a fancy bidirectional socket with a bunch of special ioctls (handling of ctrl-z and ctrl-c is often mentioned as special, but I don't see why). It seems like it would be possible to implement one using CUSE (character device in userspace, similar to FUSE), but that probably has its own security issues.
<abliss>
Doing a glibc hack and/or an LD_PRELOAD seems straightforward but a bit unpleasant. I suppose you'd create a socketpair to act as the master and slave devices, then intercept every ioctl() to check if it's one of your special FDs, and then you'd have to implement all the weird terminal ioctls yourself (either in-band, by wrapping each message that gets sent through the socket, or out-of-band by setting up a separate control-plane socketpair).
<abliss>
oops wait, ioctl is a system call, not a glibc function. so custom glibc and ld_preload seem impossible, and you'd have to do some ptrace or BPF?
<isd>
I mean, most programs will call into the glibc wrapper, same as with open() and friends.
<kentonv>
most people use the glibc wrapper, so usually you can intercept it with LD_PRELOAD -- as long as the call isn't coming from glibc itself, and as long as the app isn't statically linked
<isd>
It might make sense to integrate this into the existing LD_PRELOAD thing that I started working on for the filesystem stuff. This way we can share a lot of the "shadow fd table" logic and such.
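(A hedged C sketch of that interception idea, not Sandstorm's actual code: is_emulated_pty() stands in for the shadow-fd-table lookup and is just a stub here, and only the termios get/set ioctls are shown; a real implementation would cover TIOCGWINSZ, TIOCSCTTY, and the rest, or forward them to a userspace pty server.)

#define _GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>
#include <stdarg.h>
#include <string.h>
#include <termios.h>
#include <sys/ioctl.h>

static struct termios emulated_termios;                    /* per-fd in a real shim */
static int is_emulated_pty(int fd) { (void)fd; return 0; } /* stub lookup */

typedef int (*real_ioctl_t)(int, unsigned long, ...);

int ioctl(int fd, unsigned long request, ...) {
    static real_ioctl_t real_ioctl;
    if (!real_ioctl)
        real_ioctl = (real_ioctl_t)dlsym(RTLD_NEXT, "ioctl");

    va_list ap;
    va_start(ap, request);
    void *arg = va_arg(ap, void *);                /* most tty ioctls take a pointer */
    va_end(ap);

    if (is_emulated_pty(fd)) {
        switch (request) {
        case TCGETS:                               /* what tcgetattr() issues */
            memcpy(arg, &emulated_termios, sizeof emulated_termios);
            return 0;
        case TCSETS: case TCSETSW: case TCSETSF:   /* what tcsetattr() issues */
            memcpy(&emulated_termios, arg, sizeof emulated_termios);
            return 0;
        default:
            errno = ENOTTY;                        /* or hand off to the pty server */
            return -1;
        }
    }
    return real_ioctl(fd, request, arg);           /* not ours: pass through */
}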
<abliss>
yes indeed
<abliss>
though i'm pretty curious why nobody seems to have tried this (or even considered it) before... though perhaps my google-fu is lacking; there are a bunch of adjacent concepts with overlapping keywords
<isd>
I mean, how common is it for ptys to not actually be available, yet still needed?
<isd>
I'm not totally surprised no one has done this, given the likelihood of needing it and the annoyance of building it.
<abliss>
given the history of security issues in the tty layer, and the trend towards containerization as a security boundary to allow shared compute resources, i would think that a google or amazon would want to look at moving ptys out of the kernel and into userspace?
<abliss>
actually that makes me wonder what Microsoft has done with ptys in the Windows Subsystem for Linux
<isd>
I mean, it's rare for server apps to even need them?
<abliss>
in the google cloud platform you can click a button in your browser and be instantly dropped into a disposable shell session on a small temporary machine. it seems to have a very capable pseudoterminal (e.g. emacs and screen work fine). so is that a "real" devpts running in a real kernel on a real machine somewhere? or is it in an entirely emulated kernel?
<kentonv>
I don't think Google trusts containers for security. It probably runs in gvisor.
<abliss>
though i guess, now that i think about it, kubernetes doesn't support ssh into containers. you have to do 'kubectl exec' to get a shell, and it seems to lack a lot of pty functions...
<isd>
Yeah, I haven't talked to anyone who seriously thinks it's a good idea to rely on docker for security isolation. At best it's an extra layer to punch through, defense in depth and all that.
<isd>
Even in its own namespace, the kernel's default attack surface is just way too big.
<abliss>
would gvisor work in a grain, or does it require cap_sys_admin to set up a gvisor sandbox? if the latter, perhaps gvisor contains a pty impl that we could steal...
<kentonv>
gvisor is a virtual machine engine. It probably doesn't even work inside other VMs.
<isd>
We could potentially just yank that wholesale and wrap it in a capnp server.
<isd>
and just have the preload lib redirect calls to it.
<abliss>
yeah, i'm wondering which impl of line discipline and terminal codes will be easier to tease out of its parent project: the C code out of the linux kernel, or this go code out of gvisor
<abliss>
(looks like gvisor can actually attach using ptrace, as an alternative to KVM, so maybe it could work inside a grain?)
<isd>
I think we block ptrace?
<isd>
Ideally we'd find a way to do this that doesn't involve giving grains more attack surface.
<kentonv>
we definitely block ptrace, it has had all kinds of security issues :)
<abliss>
doesn't look like it
<abliss>
i seem to be able to run 'strace -f' inside a grain though? or maybe it only works in 'spk dev'
<isd>
IIRC we relax things a bit for dev mode.
<isd>
Yeah, we allow some ptrace stuff in dev mode. Have a look at supervisor.c++
<mokomull>
abliss: ctrl-Z and ctrl-C are "special" because the terminal end of the pipe puts a byte 0x03 or 0x1a in the pipe, but that byte doesn't come out the other end: a signal is sent instead. Likewise with, e.g., backspace, ctrl-U in cooked-mode.
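(For reference, the knob controlling that behaviour in the standard termios API -- this is the logic any userspace pty emulation would have to reproduce: with ISIG set, the line discipline turns 0x03/0x1a into SIGINT/SIGTSTP; clearing it makes those bytes arrive as plain input.)

#include <termios.h>

/* Illustrative sketch: put a tty/pty fd into "raw-ish" mode so ctrl-C and
 * ctrl-Z are delivered as bytes instead of being converted into signals. */
int make_rawish(int tty_fd) {
    struct termios t;
    if (tcgetattr(tty_fd, &t) != 0) return -1;
    t.c_lflag &= ~(ISIG | ICANON | ECHO);   /* no signals, no line editing, no echo */
    t.c_cc[VMIN]  = 1;                      /* read() returns after one byte */
    t.c_cc[VTIME] = 0;
    return tcsetattr(tty_fd, TCSANOW, &t);
}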
<abliss>
yep, confirmed, it works in spk dev but not after pack/upload. is there an easy way to get a 'non-relaxed' sandbox to play around in?
<abliss>
kentonv: you mentioned seccomp-bpf in the original email thread. is that allowed inside prod grains? (it's hard for me to believe that seccomp-bpf could be less of a risk than ptrace, but i'm quite ignorant of both)
<isd>
We block that too. Indeed, the configuration end of seccomp-bpf has had some vulnerabilities in the past.
<kentonv>
we of course *use* seccomp to implement the sandbox, but yeah, we don't let grains use it
<isd>
Wouldn't it be nice if the kernel had sandboxing mechanisms that compose...
<abliss>
https://gvisor.dev/docs/user_guide/filesystem/ seems similar in spirit to your filesystem-over-capnp design. and gvisor also has some checkpoint-restore stuff which could be useful for the quick grain startup you mentioned on the last call.
<isd>
I'm sure we couldn't adapt the checkpoint-restore stuff from gvisor without loosening sandstorm's sandbox.
<isd>
...and I still think it's just not worth the extra complexity
<isd>
One thing I hit working on the fs bits of the LD_PRELOAD is that it's a bit hard to figure out what to set errno to in some cases.
<isd>
I wish capnp either had structured data in exceptions or some way to check in-band error codes without giving up pipelining.
<isd>
kentonv: ^
<isd>
How would you feel about adding a field to rpc.capnp's Exception that could be used to attach non-Text data?
<kentonv>
TBH I'd feel better about extending pipelining to support conditional results in some way.
<isd>
Yeah, I kinda like that better too... It's just a lot less obvious to me what it would look like.
<kentonv>
oh but you're specifically trying to tunnel an errno code?
<isd>
Or just enough information to reconstruct one, yeah.
<kentonv>
so it's really about being able to convert the error to a different format, not about being able to handle errors
<kentonv>
I mean, not about being able to trigger different logic based on different errors
<kentonv>
I actually do think KJ exceptions need to support that
<kentonv>
but I've never been able to come up with an approach I liked
<kentonv>
in the Workers runtime, we literally prefix error strings with e.g. "cfjs.TypeError: " to say "if this gets thrown to JavaScript, turn it into a JavaScript TypeError"
<kentonv>
which is an awful hack
<isd>
Yeah, that occurred to me; I could just put e.g. ENOENT at the front of the string. But yes, that's horrible.
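(For concreteness, a sketch of the receiving side of that prefix hack -- i.e. the thing being called horrible here, not a recommendation; errno_from_message is a made-up helper name and the table is abbreviated.)

#include <errno.h>
#include <string.h>

static const struct { const char *prefix; int code; } kErrnoPrefixes[] = {
    { "ENOENT: ",  ENOENT  },
    { "EACCES: ",  EACCES  },
    { "ENOTDIR: ", ENOTDIR },
};

/* Map an exception message like "ENOENT: no such file" back to an errno. */
int errno_from_message(const char *msg) {
    for (size_t i = 0; i < sizeof kErrnoPrefixes / sizeof kErrnoPrefixes[0]; i++)
        if (strncmp(msg, kErrnoPrefixes[i].prefix,
                    strlen(kErrnoPrefixes[i].prefix)) == 0)
            return kErrnoPrefixes[i].code;
    return EIO;   /* arbitrary fallback when no prefix matches */
}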
<isd>
What about just adding two fields to the rpc exception, a typeId and an AnyPointer, the former indicating the type of the latter?
<isd>
I guess this is a little less clear for the C++ implementation because kj::Exception is theoretically not dependent on capnp?
<kentonv>
I think to be convinced of any design here, I would need to look at a lot of use cases to verify that it fits and that it doesn't lead to abuse.
<isd>
Yeah, I think fundamentally this is the kind of thing where if you were doing a synchronous API, it should just be an error code.
<isd>
So figuring out how to do pipelining on conditionals is "The Right Thing."
<kentonv>
if you pipeline on a pointer, and it turns out to be null, then the pipelined call will fail, and upon detecting that failure you can perhaps go back and check the result of the earlier call to see if it had an error code?
<isd>
I mean I guess that would work. But it feels both more error prone and like even more of a hack than abusing exceptions for "everyday" errors...
<isd>
I dunno, maybe it wouldn't be that bad. I should experiment.
<isd>
But it seems like it doesn't really solve the conceptual wrongness of using exceptions for error handling; you're still doing that, just having to check the data for it out of band...
<kentonv>
it's difficult because errnos are *usually* used for reporting only but *occasionally* people trigger logic based on them. So they *usually* map to KJ exceptions but... not quite always
<isd>
So I'm envisioning an api where you can pipeline on a union variant, and it fails if it's the wrong variant.
<isd>
This would also allow you to pipeline different branches, and be sure only one of them will actually execute. You just make the calls and then wait for the original union and check which result you should use...
<isd>
Haven't thought it through deeply though.
<kentonv>
hmm, "ideally" you'd make one call that somehow returns a union result, but I don't know what the API would look like
<isd>
The protocol upgrade for that seems hairy though; what do you do if the receiver doesn't know how to dispatch like that?
<isd>
(for my thing that is)
<kentonv>
well, what we're talking about here is adding new operations to `PromisedAnswer.Op` (in rpc.capnp)
<kentonv>
I'm trying to see what the RPC code does if it receives an unknown op
<isd>
The Haskell implementation returns an exception.
<isd>
Maybe it should return an unimplemented message instead?
* isd
looks to see what the Go implementation does
<kentonv>
unfortunately it appears the C++ implementation throws an exception. I was hoping for an Unimplemented message, yeah.
<isd>
...and that brings us back to square one of not being able to tell why an error happened...
<isd>
The Go implementation also throws an exception.
<kentonv>
well... that was clearly a mistake on my part, not creating a way to introduce new pipeline ops with the ability to fall back if unimplemented
<isd>
I hate this idea, but it would probably at least work: we could add a new _message_ type, that's basically just 'Call, but you understand what to do with unknown Ops. <long paragraph about historical design mistake>.'
<isd>
...and we'd say regular "call" is only allowed to use the existing options...
<kentonv>
eh, I don't know if it's worth working that hard to accommodate existing implementations, vs. just saying "you gotta upgrade Cap'n Proto before you can use this protocol"...
<isd>
Yeah, I guess we could just document it with the caveat that you shouldn't use newer pipeline ops with implementations that don't do the right thing here.
<isd>
Just expect a message.unimplemented for unimplemented things and if the receiver throws an exception we just treat it as normal.
<kentonv>
I mean it may be worth fixing the implementations so that they treat unknown pipeline ops reasonably going forward
<isd>
Yeah. But we should also probably mention in docs somewhere that older versions of implementations might have this problem.
<kentonv>
right
<isd>
I guess the good thing about having a fairly small number of prod-quality RPC implementations is that tracking them all down is doable.
<isd>
I'll put updating the haskell implementation on my TODO list then.
<kentonv>
I mean... it's ugly, but we _could_ also string-match the known error messages for unknown pipeline ops from the known implementations... there's not many of them
<isd>
Yeah, but we'd have to impose that burden on every implementation.
<isd>
It also means you could trigger the library re-trying a method call erroneously by throwing an appropriate exception from user code
<isd>
Like, I'm imagining bugs where somebody shoves a malicious string into a log message and it ends up in an exception message, and causes some side-effect to happen twice...
<isd>
Going afk for a bit.
<abliss>
I wonder if User-Mode Linux could be configured to export/provide its pty implementation, while allowing other syscalls to go directly to the host OS for performance.
frigginglorious has quit [Read error: Connection reset by peer]
vertigo_38 has quit [Remote host closed the connection]
vertigo_38 has joined #sandstorm
prompt-laser has quit [Quit: Connection closed for inactivity]