kentonv changed the topic of #sandstorm to: Welcome to #sandstorm: home of all things sandstorm.io. Say hi! | Have a question but no one is here? Try asking in the discussion group: https://groups.google.com/group/sandstorm-dev
<isd> (Unfortunately you can't use asan with static linking)
frigginglorious1 has joined #sandstorm
frigginglorious has quit [Ping timeout: 260 seconds]
frigginglorious1 is now known as frigginglorious
frigginglorious has quit [Read error: Connection reset by peer]
frigginglorious has joined #sandstorm
cheers has joined #sandstorm
frigginglorious has quit [Ping timeout: 264 seconds]
<CcxWrk> xet7: By magic mystery binaries I mean it uses curlbash as the only official install method, and the closest you can get to building from source is inspecting their dockerfile and then trying to build all its dependencies, including all the vendored patches, and … ugh. That's about the point I stopped last time.
sadsand has joined #sandstorm
<sadsand> Hey there, I'm having serious issues with sandstorm since today. The last thing I did was open a video file that was shared with me in Davros.
<sadsand> Then sandstorm crashed and won't boot up again; I also rebooted the server.
<sadsand> I will provide some logs in a moment.
<sadsand> It's strange to me that provoking an error (I saw something OpenSSL related) would crash the system in a persistent manner. I would've hoped to have sandstorm running until the error was provoked again :/
<sadsand> Looks like the mongodb got corrupted?
<sadsand> Possibly relevant: I tried to open a big file (maybe 100MB) inside of davros to see if I could rename it, and whether I could watch it inside of davros.
<sadsand> in the mongo logs I find something which looks size related:
<sadsand> 2020-05-21T12:43:32.154+0000 [initandlisten] journal dir=/var/mongo/journal
<sadsand> 2020-05-21T12:43:32.154+0000 [initandlisten] recover : no journal files present, no recovery needed
<sadsand> 2020-05-21T12:43:32.154+0000 [initandlisten]
<sadsand> 2020-05-21T12:43:32.154+0000 [initandlisten] ERROR: Insufficient free space for journal files
<sadsand> Okay, looks like there wasn't enough free space X) sorry :)
<sadsand> Thanks for your attention. (Would be nice if sandstorm could detect when it crashes due to insufficient free space.)
sadsand has quit [Quit: Connection closed]
<JacobWeisz[m]> We've talked a bit about surfacing more storage information in Sandstorm. I agree it'd be nice if Sandstorm told you when your server was low on disk space, particularly before it lacks enough space to start up.
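The check being wished for here could be as small as a statvfs() call made before Sandstorm launches MongoDB. The sketch below is purely illustrative and is not how Sandstorm actually behaves; the storage path and the 512 MiB threshold are made-up placeholders:

```cpp
// Hypothetical pre-flight check: refuse to start (or at least warn) when the
// storage volume is nearly full, instead of letting MongoDB's journal
// allocation fail partway through startup.
#include <sys/statvfs.h>
#include <cstdio>

int main() {
  const char* path = "/opt/sandstorm/var";  // assumed storage location
  struct statvfs st;
  if (statvfs(path, &st) != 0) {
    perror("statvfs");
    return 1;
  }

  unsigned long long freeBytes =
      (unsigned long long)st.f_bavail * st.f_frsize;
  const unsigned long long minFree = 512ULL * 1024 * 1024;  // e.g. 512 MiB

  if (freeBytes < minFree) {
    fprintf(stderr, "refusing to start: only %llu MiB free on %s\n",
            freeBytes >> 20, path);
    return 1;
  }
  printf("%llu MiB free on %s, proceeding\n", freeBytes >> 20, path);
  return 0;
}
```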
<JacobWeisz[m]> I just had the best experience.
<JacobWeisz[m]> AOL claimed my grandmother signed up for a paid service this past February, using a payment card they had on file that expired in 2008.
<JacobWeisz[m]> And then claimed she had a past due balance due to the payment method being declined.
vertigo_38 has joined #sandstorm
frigginglorious has joined #sandstorm
larjona has joined #sandstorm
<isd> Fun times.
<abliss> AOL still exists?
<isd> Oh yes.
xet7 has joined #sandstorm
<simpson> abliss: AOL is currently a holding of Verizon; it previously was one of the most prominent brands in Time Warner's catalog, prior to their recent breakup.
<JacobWeisz[m]> And now I see how they still make money.
<JacobWeisz[m]> Signing octogenarians up for nonsense security checkup services against the expired credit cards from when they had dialup service.
<JacobWeisz[m]> FWIW, AOL has one leg up on modern tech companies.
<JacobWeisz[m]> My grandma hasn't paid for service from them in 12 years, and it took less than a minute to get a customer service rep on the phone to argue with.
<JacobWeisz[m]> I honestly think any company over a certain revenue threshold should be legally required to have phone support.
<simpson> If only small claims were easier. This is exactly what small claims is for, and it's unbalanced in the direction that one might desire when one is trying to get even with a massive corporation.
<simpson> It is kind of nice that credit cards expire. A useful lesson for capability designs.
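The analogy carries over directly: a capability that embeds an expiry fails closed when it goes stale, instead of being honored years later like a card kept on file. A minimal sketch of the idea (illustrative only, not Cap'n Proto's or Sandstorm's API):

```cpp
#include <chrono>
#include <functional>
#include <stdexcept>
#include <utility>

// A capability that wraps an action and refuses to run after its expiry,
// roughly analogous to a payment card that can no longer be charged.
class ExpiringCapability {
public:
  ExpiringCapability(std::function<void()> action,
                     std::chrono::system_clock::time_point expiry)
      : action_(std::move(action)), expiry_(expiry) {}

  void invoke() const {
    if (std::chrono::system_clock::now() >= expiry_) {
      throw std::runtime_error("capability expired");
    }
    action_();
  }

private:
  std::function<void()> action_;
  std::chrono::system_clock::time_point expiry_;
};
```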
<CcxWrk> Makes me wonder if MLS/ART would have the right ordering guarantees for multi-vat interaction.
* isd tries to build meteor from source, for the lulz
<isd> It looks like their own build scripts pull in upstream binaries for node and friends too... so it's just kind of passing the buck.
<CcxWrk> gl;hf :-)
<isd> Well, that adventure didn't last long.
<CcxWrk> Yeah, I gave up about third recursion in.
<simpson> isd, CcxWrk: I hope that this gives a good example for why I believe in Nix as a useful, if transitional, build system. I asked for `meteor` in my current nixpkgs checkout, and got /nix/store/0imsk0l90pb9z0qqlsdbsd40dz0nlj0l-meteor-1.9.3 after a couple minutes of downloading and "building" from "source".
<CcxWrk> Yeah, a project like this kinda needs reproducible builds IMO.
<CcxWrk> The TCB is still way too large for review though.
<isd> simpson: I had a look at their build scripts the other day; nixpkgs just downloads the binaries provided by meteor. It doesn't actually build from source.
<CcxWrk> Was about to check that. :D
<isd> I actually would like to use nix as a basis for building sandstorm. But it doesn't solve our problems with upstream.
<isd> The current blocker for doing that is https://github.com/capnproto/ekam/issues/29, if anyone wants to fight with it.
<isd> guix of course, taking a much harder line on reproducibility, just doesn't package nearly as much js stuff.
<isd> I wonder what the actual sloc difference between our transitive dependency graph and one from the python or ruby world is. Like, is it actually bigger or is it just spread across smaller packages?
<simpson> isd: Sounds like somebody who understands ekam needs to sit down with a nix-shell of that derivation. It's possible to use nix-shell to manually step through each phase of the build, inside the sandbox, and examine what's failing.
<simpson> FWIW nearly every sort of failure of this shape is some kind of hermetic failing of the original package. Could be tests trying to access the network, could be looking for tools in /usr/bin, who knows.
<isd> Yeah, I stared at it enough to know it's not the simple stuff.
<isd> Ekam's build does try to download the capnproto source, but the build script I linked there already works around that.
<isd> This looks like an actual bug in ekam that for some reason doesn't crop up when building outside of nix.
<isd> Unfortunately I think 'somebody who understands ekam' basically means kentonv.
<simpson> Well, I can't reproduce the error in the bug. Instead, I get a bunch of presumably-trivial header/include errors, and this key failing test: [FAILED ] test: kj/filesystem-disk-generic-test
<isd> Hm, can you post your log to the issue?
<simpson> Currently retrying with `doCheck = false;`
<simpson> Ah, but tests are unconditionally run, so doCheck isn't consulted. Cool. Yeah, I'll post a truncated log.
<abliss> wahoo, kcmp(2) works inside a sandstorm grain. that'll be useful for CRIU stuff (and for my pty-in-userspace hack)
<abliss> i guess that means we keep PTRACE_MODE_REALCREDS
<abliss> *PTRACE_MODE_READ_REALCREDS
<isd> Yeah, I think we block calls to ptrace() specifically, but this is some kind of internal check against a similar subsystem.
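For context, kcmp(2) with KCMP_FILE reports whether two file descriptors refer to the same open file description, which is what CRIU needs when reconstructing fd tables; the kernel gates the call behind a PTRACE_MODE_READ_REALCREDS access check rather than the ptrace() syscall itself. A rough, Linux-only sketch of the kind of call abliss tested:

```cpp
#include <sys/syscall.h>
#include <linux/kcmp.h>
#include <unistd.h>
#include <fcntl.h>
#include <cstdio>

int main() {
  int fd1 = open("/dev/null", O_RDONLY);
  int fd2 = dup(fd1);                     // same open file description as fd1
  int fd3 = open("/dev/null", O_RDONLY);  // a distinct open file description

  pid_t self = getpid();
  // kcmp returns 0 when both descriptors point at the same struct file.
  long same = syscall(SYS_kcmp, self, self, KCMP_FILE, fd1, fd2);
  long diff = syscall(SYS_kcmp, self, self, KCMP_FILE, fd1, fd3);

  printf("fd1 vs dup(fd1): %ld (0 means same)\n", same);
  printf("fd1 vs fresh open: %ld (non-zero means different)\n", diff);
  return 0;
}
```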
frigginglorious1 has joined #sandstorm
frigginglorious has quit [Ping timeout: 260 seconds]
frigginglorious1 is now known as frigginglorious
frigginglorious has quit [Read error: Connection reset by peer]
kentonv has joined #sandstorm
vertigo_38 has quit [Ping timeout: 246 seconds]
frigginglorious has joined #sandstorm
frigginglorious has quit [Read error: Connection reset by peer]
vertigo_38 has joined #sandstorm
frigginglorious has joined #sandstorm
frigginglorious has quit [Read error: Connection reset by peer]
frigginglorious1 has joined #sandstorm
frigginglorious has joined #sandstorm
frigginglorious1 has quit [Ping timeout: 272 seconds]
frigginglorious has quit [Read error: Connection reset by peer]