kentonv changed the topic of #sandstorm to: Welcome to #sandstorm: home of all things sandstorm.io. Say hi! | Have a question but no one is here? Try asking in the discussion group: https://groups.google.com/group/sandstorm-dev
frigginglorious has quit [Read error: Connection reset by peer]
TMM has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM has joined #sandstorm
CcxWrk has quit [Read error: Connection reset by peer]
CcxWrk has joined #sandstorm
xet7 has quit [Ping timeout: 260 seconds]
nwf has quit [Quit: WeeChat 2.7.1]
nwf has joined #sandstorm
frigginglorious has joined #sandstorm
<JacobWeisz[m]> Looks like we might have an interesting Let's Encrypt related thing.
<JacobWeisz[m]> I suspect the requestor's Sandstorm configuration is valid, it seems to be an issue that Alpha would presumably hit if it was using the new code.
<isd> Is there a reason why we don't have the GitHub action upload the build tarball as an artifact? Was it a space issue?
<isd> It would be nice if we could ask folks to run builds downloaded from CI, rather than them having to build from source to test fixes.
<JacobWeisz[m]> It seemed a bit overkill, tbh.
<JacobWeisz[m]> Apparently we don't pay for the space either way, fun fact.
<isd> They do delete them after 90 days.
<JacobWeisz[m]> But arguably if you make a branch on your fork adding that line in, it should build it, right?
<isd> Yeah. I may push another commit to that branch just to trigger that. But it's annoying to have to push an extra commit and then later force-push it out
<JacobWeisz[m]> I think at the time it didn't seem like anyone was going to use them for anything. But since it doesn't cost us anything I suppose there's no harm in re-adding that to the action if you expect to suggest people use them.
<JacobWeisz[m]> I do kinda like the idea of being able to give someone fixed code then and there (within the hour in this case!!!!)
<isd> Yeah, I think it would be useful to be able to have a build right there for folks to test from time to time.
<JacobWeisz[m]> IMHO add it back in that PR and leave it added then. I recall verifying with Kenton it didn't count against the sandstorm-io account's quotas, which was an initial concern.
<JacobWeisz[m]> If it's a private repo in a org, they bill for storage.
<isd> If GitHub is covering the storage I say we use it.
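A minimal sketch of what re-adding the upload step to the workflow might look like, using the stock actions/upload-artifact action. The artifact name, tarball path, and retention period here are illustrative assumptions, not values from Sandstorm's actual workflow:

```yaml
# Illustrative step for the build job in .github/workflows/*.yml,
# placed after the step that produces the tarball.
- name: Upload build tarball
  uses: actions/upload-artifact@v4
  with:
    name: sandstorm-tarball        # assumed name
    path: sandstorm-*.tar.xz       # assumed path/glob
    retention-days: 14             # optional: shorter than GitHub's 90-day default
```

Setting `retention-days` on the step is one way to keep storage bounded without any manual cleanup.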
<kentonv> there might be a limit on the total storage though?
<JacobWeisz[m]> I didn't see anything suggesting there was, but if it isn't billing, I guess what would happen if we hit a limit is it'd stop working until we deleted some stuff?
<isd> We can delete stuff manually. I think worst case we could write a script to periodically delete stuff sooner than their 90 day horizon.
<JacobWeisz[m]> Cuz there's a limit for free usage of storage, but the public project stuff just straight up wasn't counting against it.
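The "script to periodically delete stuff" idea could be sketched roughly like this. GitHub's REST API does expose `GET /repos/{owner}/{repo}/actions/artifacts` and `DELETE /repos/{owner}/{repo}/actions/artifacts/{artifact_id}`; the selection logic below is a hypothetical helper, not anything Sandstorm actually runs:

```python
from datetime import datetime, timedelta, timezone

def artifacts_to_delete(artifacts, max_age_days=30, now=None):
    """Given artifact records as returned by GitHub's
    GET /repos/{owner}/{repo}/actions/artifacts endpoint (each with an
    "id" and an ISO-8601 "created_at"), return the ids older than
    max_age_days. Each returned id would then be passed to
    DELETE /repos/{owner}/{repo}/actions/artifacts/{artifact_id}."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [
        a["id"]
        for a in artifacts
        if datetime.fromisoformat(a["created_at"].replace("Z", "+00:00")) < cutoff
    ]
```

A cron job (or a scheduled workflow) could fetch the artifact list, run this filter, and issue the DELETE calls.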
TMM has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM has joined #sandstorm
<JacobWeisz[m]> I feel deleting stuff manually is likely to be burdensome, Actions is a bit aggressive on running multiple builds for every little push it seems.
<JacobWeisz[m]> I think the "add it in a commit then force push it away" method is less burden, considering the frequency we need the builds from it.
<JacobWeisz[m]> But I still don't think at present there's technically any downside to having the Action uploading every single one.
<JacobWeisz[m]> Except maybe giving Microsoft a reason to not give open source projects tons of free storage and compute.
<isd> I wonder what they're doing on the backend. You could probably keep it pretty manageable using the kind of hash-based chunking and deduping you see in e.g. bup & perkeep, but I don't know of any public project that would be quite suitable for that without modification.
<isd> ...but you could definitely build something such that a project like ours uploading a couple hundred mb for every build doesn't actually get out of hand.
<isd> ...since they'd be mostly the same.
<JacobWeisz[m]> Tarballs probably aren't dedupe friendly, are they?
<isd> The algos those systems use should probably still work
<isd> A naive file or block based chunking algorithm would not be very useful.
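The chunking bup and perkeep use is content-defined: a rolling hash over the byte stream decides where chunk boundaries fall, so an insertion early in a tarball only perturbs nearby chunks instead of shifting every fixed-size block. A toy sketch of the idea (this is not bup's actual rollsum, and the window/mask values are arbitrary):

```python
import hashlib

WINDOW = 16          # minimum chunk size before a cut is allowed
MASK = (1 << 11) - 1  # boundary fires ~1/2048 bytes, i.e. ~2 KiB avg chunks

def chunks(data: bytes):
    """Split data at positions where a simple rolling hash hits MASK.
    Because boundaries depend only on local content, similar streams
    resynchronize to the same boundaries after a difference."""
    out, start, h = [], 0, 0
    for i, b in enumerate(data):
        # Shift-and-add hash: bytes older than ~32 positions fall off the top.
        h = ((h << 1) + b) & 0xFFFFFFFF
        if i - start + 1 >= WINDOW and (h & MASK) == MASK:
            out.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        out.append(data[start:])
    return out

def chunk_ids(data: bytes):
    """Content-addressed chunk set: identical chunks dedupe to one hash."""
    return {hashlib.sha256(c).hexdigest() for c in chunks(data)}
```

Storage would then keep each chunk once, keyed by its hash, so two mostly-identical build tarballs share most of their chunks.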
<isd> I really want a datastore like this that supports easy deletion (which perkeep does not), since it would make it really easy to just take daily backups of grains without space usage getting out of control.
<isd> One of many things in my mental backlog
frigginglorious has quit [Read error: Connection reset by peer]
<isd> Oh, I misremembered, the tarball is "only" 70M
frigginglorious has joined #sandstorm
frigginglorious1 has joined #sandstorm
frigginglorious has quit [Ping timeout: 246 seconds]
frigginglorious1 is now known as frigginglorious
<JacobWeisz[m]> I still really think on-grain-shutdown backups are the key to backups that are quick and don't abuse storage.
<JacobWeisz[m]> Over 95% of my grains are not used... ever, they're just stored documents.
<isd> Right, we should not take a backup if the grain hasn't run since the last backup
<JacobWeisz[m]> I don't know if that's typical usage, but if so, any all-grain regular backup strategy is pretty wasteful.
<isd> That's true for me as well; most grains sit there and aren't run almost ever.
<isd> "On shutdown" is mostly a good strategy, except that if a server is interrupted in the middle of a backup then you lose the backup.
<isd> So you probably want it to re-try at some point.
<isd> I think a periodic job that looks for grains that haven't been backed up since their last shutdown is a good option.
<JacobWeisz[m]> That's reasonable.
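The periodic job isd describes reduces to a simple timestamp comparison: back up any grain whose last shutdown is newer than its last successful backup, which also catches backups that were interrupted mid-write. A sketch with made-up field names (`last_shutdown`, `last_backup` are illustrative, not Sandstorm's actual grain metadata):

```python
def grains_needing_backup(grains):
    """Return ids of grains that have run since their last successful
    backup. A grain that has never run needs nothing; a grain that has
    run but was never backed up (or whose backup was interrupted and
    never recorded) is picked up on the next pass."""
    return [
        g["id"]
        for g in grains
        if g["last_shutdown"] is not None
        and (g["last_backup"] is None or g["last_backup"] < g["last_shutdown"])
    ]
```

Run from a periodic task, this naturally skips the (apparently typical) grains that just sit there as stored documents and never run.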
frigginglorious has quit [Read error: Connection reset by peer]
frigginglorious has joined #sandstorm
frigginglorious has quit [Read error: Connection reset by peer]