isd changed the topic of #sandstorm to: Welcome to #sandstorm: home of all things sandstorm.io. Say hi! | Have a question but no one is here? Try asking in the discussion group: https://groups.google.com/group/sandstorm-dev | This channel is logged at: https://freenode.irclog.whitequark.org/sandstorm/
<JacobWeisz[m]> Hmmm, DreamCompute says it's based on OpenStack?
<isd> I'm not surprised there's no cap. I think way back I reported the issue that got us to add a cap to grain logs; I had a bunch of debugging stuff going for a grain I was using for a couple months, and it made gigabytes of log files in that timespan
<isd> 200 MB is comparatively little for 5 years :P
* isd wonders how big his logs are
<isd> TimMc: were you running low on space, or just happen to notice this?
<isd> We should bound it probably.
<TimMc> Just happened to notice.
<TimMc> Partly because I was noticing some of my *own* logging was unbounded. :-)
<isd> Oh hey, I have heap snapshots from 2016 lying around
<isd> Also logs going back that far.
<kentonv> hah, sandstorm.log seems to be rotated with old logs deleted... but mongo logs go back forever
<kentonv> Sandstorm Alpha's logs only add up to 287MiB, apparently
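(A quick way to check your own numbers, a sketch assuming the default /opt/sandstorm install layout; the grain log path matches the one aerth pastes below:)
    sudo du -sh /opt/sandstorm/var/log                                        # sandstorm.log and friends
    sudo du -sh /opt/sandstorm/var/sandstorm/grains/*/log | sort -rh | head   # largest grain logs first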
TMM has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM has joined #sandstorm
dustyweb has quit [Remote host closed the connection]
dustyweb has joined #sandstorm
<aerth> so i can pretty much export the grains i want and install them on another server and it's the same, but what about invite links and user edit/view permissions, i wonder
<aerth> i assume it would all be zeroed
<JacobWeisz[m]> Yeah, that won't be included in a grain backup.
<JacobWeisz[m]> Any permissions won't. So things like granting network access won't be carried over when moving a grain backup either; you'd have to grant permission again.
<aerth> that's good
<aerth> so the links in Share Access->See who has access... if i click x on each one, there are no valid links floating around?
<JacobWeisz[m]> Correct
<JacobWeisz[m]> Note that if someone has access because they clicked a link when logged in, and you revoke that link, they lose access too.
<JacobWeisz[m]> So don't try to "shut off a link" unless you are intending to close off access to everyone who used that link to get it.
<aerth> what if i add them to a grain collection and they then have access through that?
<aerth> i guess they get an email but still get access through the collection
<NekoIncardine> Embarrassingly, I wound up getting stuck on actually logging into the Ubuntu server. Like, it was created and all, but I found nothing to indicate what the password would be. And yes, DreamCompute is based on OpenStack
<aerth> NekoIncardine: can you pop in an iso image and boot from that? mount and chroot and passwd?
<NekoIncardine> Don't think so. I wound up on other things for the rest of the day.
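(What aerth describes, sketched as commands run from a rescue/live ISO; /dev/vda1 and the "ubuntu" username are assumptions that vary by image and provider:)
    sudo mount /dev/vda1 /mnt        # the VM's root filesystem; device name is a guess
    sudo chroot /mnt passwd ubuntu   # reset the default user's password inside the installed system
    sudo umount /mnt                 # then reboot off the regular disk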
<aerth> tail -n 0 -f /opt/sandstorm/var/sandstorm/grains/*/log
<TimMc> NekoIncardine: There's nowhere in the interface to upload an SSH public key?
<NekoIncardine> I will give that a try sometime today, thanks TimMc
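(If the web interface doesn't offer key upload, the OpenStack CLI can do it; a sketch where "mykey", the key path, and the server name are placeholders:)
    openstack keypair create --public-key ~/.ssh/id_ed25519.pub mykey            # upload an existing public key
    openstack server create --key-name mykey --flavor <flavor> --image <image> sandstorm-box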
<mnutt> I finally have davros webpacked and running, and locally (SSD) it drops the average start time from ~180ms to ~60ms, which is a good start, I think. I might be able to drop it a bit further if I can prevail against some of my dependencies that use dynamic require()s to pull in plugins or something.
<mnutt> I'm hopeful that the improvement might be even more drastic on non-SSD, since it drops the number of fs syscalls from 882 to 83. Of those 83, 74 come from two dependencies (asyncjs, an ancient async helper lib, and formidable, some sort of form parser)
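(One way to count file syscalls like that, assuming strace is available and "server.js" stands in for davros's actual entry point:)
    strace -f -c -e trace=%file node server.js   # -c prints a per-syscall summary instead of a full trace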
<JacobWeisz[m]> I am not 100% sure we work on OpenStack. I think the big question is if the kernel allows Sandstorm to use the containerization features it depends on.
<JacobWeisz[m]> I don't see any issues about it, so maybe I'm thinking of other platforms that Sandstorm won't work on.
<isd> OpenStack should generally be fine, at least in a normal setup -- it's just VMs.
<isd> mnutt: that's exciting.
<NekoIncardine> Given that back in 2016 there was a specific script to make it work (googling led me to a GitHub repo), I have to suspect it does, and that I, being new to cloud hosting, just screwed up the steps and then got into my usual Saturday evening gaming
<mnutt> If anybody would like to try a preview version of bundled davros, you can get it at https://3getwj5frc1e5bner49c.mnutt.sandcats.io/v0.27.3-bundled.spk. I'd be interested to see how it performs for people relative to the release version.
<mnutt> If I clear the filesystem caches prior to loading, this new version now loads in 75ms vs 1450ms for the release version
<JacobWeisz[m]> That sounds significant!
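(For anyone reproducing mnutt's cold-cache comparison, dropping the Linux filesystem caches before a run is typically done like this:)
    sync                                         # flush dirty pages to disk first
    echo 3 | sudo tee /proc/sys/vm/drop_caches   # drop page cache, dentries, and inodes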