<paulproteus>
This would make for a great Sandstorm package.
* paulproteus
waves.
<paulproteus>
A thing I am doing: trying to remember everything I did during Open Source Bridge so I know who to follow up with about what.
<zarvox>
Hmm, I'm not seeing source for NEOS Server; it appears to run as a service
<paulproteus>
Yeah; presumably I should ask the nice neos-server team at wisc.edu to make Sandstorm packages.
<paulproteus>
Interestingly Sandstorm + clustering is arguably a replacement for scientific computing job management stuff, though probably not quite.
<paulproteus>
But anyway, for random one-off computing by undergrads, having these things around in an app marketplace would be kind of neat.
<paulproteus>
Also I want a reprepro Sandstorm package, so I can dput to a Sandstorm app.
<paulproteus>
where's paultag when I need him!!
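(For the curious: roughly what a reprepro-backed repository needs, sketched in Python; the codename "sandstorm-apps", the repo path, and the .deb filename are made-up examples, and this only shows the local reprepro side, not the Sandstorm packaging or the dput upload itself.)
import os
import subprocess

basedir = os.path.expanduser("~/apt-repo")
os.makedirs(os.path.join(basedir, "conf"), exist_ok=True)

# conf/distributions is the file reprepro reads; Codename, Components, and
# Architectures are the required fields.
with open(os.path.join(basedir, "conf", "distributions"), "w") as f:
    f.write("Codename: sandstorm-apps\n"
            "Components: main\n"
            "Architectures: amd64 source\n")

# Import a locally built package. A dput-driven workflow would instead upload
# a .changes file into an incoming/ directory that reprepro picks up with its
# "processincoming" command.
subprocess.run(
    ["reprepro", "-b", basedir, "includedeb", "sandstorm-apps",
     "hello_1.0_amd64.deb"],
    check=True)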
<zarvox>
paulproteus: re: scientific computing job management stuff, I don't think Sandstorm is a proper replacement - compute clusters are frequently able to distribute the workload across many many machines to get results out faster, whereas Sandstorm explicitly opts for the "single grain, single object" approach
<zarvox>
could definitely be handy for exploratory stuff, but the real meat-and-potatoes compute jobs require significantly more resource commitment
<paulproteus>
Yeah, I basically agree. I think having Sandstorm apps for exploratory stuff would be pretty stellar, though.
<zarvox>
paulproteus: if you're interested in the scientific computing niche, and what workflows users care about, I can introduce you at some point to my housemate, who has spent the last ~5 years running crazy large jobs on the SLAC cluster
* paulproteus
hides.
<paulproteus>
Mako has a machine at washington.edu, from what I understand, with 640GB of RAM.
<paulproteus>
So he can run RAM-intensive R jobs.
<zarvox>
Handy. I forget how much RAM my colo'd compute nodes have.
<XgF>
paulproteus: It might be suitable for one of our clusters if it can handle ~10 million jobs pending, 100k running :-)
<XgF>
(Also if it can contain a whole normal RHEL environment :P)
* paulproteus
hides some more.
<XgF>
paulproteus: Also it needs to do memory commitment and limiting so other jobs can't cause a process to OOM :-)
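(The usual non-Sandstorm mechanism for that is a per-job cgroup memory cap; here is a minimal sketch with cgroup v2, assuming root, a mounted /sys/fs/cgroup, and a made-up job binary ./compute-job and group name job42. Sandstorm does its own sandboxing, so this only illustrates the general idea, not how Sandstorm implements it.)
import os
import subprocess

cg = "/sys/fs/cgroup/job42"
os.makedirs(cg, exist_ok=True)

# Hard cap at 2 GiB: the kernel OOM-kills this group when it exceeds the
# limit, instead of letting it push other jobs into an out-of-memory state.
with open(os.path.join(cg, "memory.max"), "w") as f:
    f.write(str(2 * 1024**3))

# Launch the job and move it into the group before it allocates much.
proc = subprocess.Popen(["./compute-job"])
with open(os.path.join(cg, "cgroup.procs"), "w") as f:
    f.write(str(proc.pid))
proc.wait()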
<paulproteus>
Bah humbug : D
<paulproteus>
My use case is I want to use something like this during the MIT Mystery Hunt, without having to think hard about how to set it up.
<XgF>
(also the scheduler needs to manage dispatching ~10k of the queued jobs per minute with complicated and conflicting resource requirements :-))
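(Stripped of priorities, fairshare, and license tracking, the core of that dispatch problem is a matching loop like the toy sketch below; nothing here resembles a real scheduler such as SLURM or LSF, and the job and node names are invented.)
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cpus: int
    ram_gb: int

@dataclass
class Node:
    name: str
    free_cpus: int
    free_ram_gb: int

def dispatch(queue, nodes):
    """Greedily place each queued job on the first node that still has room."""
    placed = []
    for job in list(queue):
        for node in nodes:
            if node.free_cpus >= job.cpus and node.free_ram_gb >= job.ram_gb:
                node.free_cpus -= job.cpus
                node.free_ram_gb -= job.ram_gb
                placed.append((job.name, node.name))
                queue.remove(job)
                break
    return placed

queue = [Job("eda-sim", 16, 64), Job("etherpad", 1, 1)]
nodes = [Node("blade01", 32, 128)]
print(dispatch(queue, nodes))  # both jobs fit on blade01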
<XgF>
(BTW, if you managed this and made it backwards compatible with everyone's legacy normal unix apps on shared storage gunk, you might be able to sell a few $$$$$ licenses :P)
<paulproteus>
I was hoping that my high-CPU job could sneak in and hog CPU time, since most other people are just using Etherpads and other non-compute-intensive things.
<XgF>
Haha
<XgF>
The kind of compute cluster we use (for Electronic Design Automation tools) is a really crazy datacenter load - nobody really designs their datacenter to handle each individual rack consuming ~10kW of power and producing as much heat
* paulproteus
's jaw drops
<XgF>
paulproteus: Blades. Piles of blades.
<mcpherrin>
20kW per rack isn't uncommon
<mcpherrin>
10kW is a fairly standard build
<paulproteus>
Clearly I've been doing servers wrong.
<paulproteus>
i,i blades, like hot knives through butter
<XgF>
mcpherrin: Perhaps. Not for commercial "rent a rack" datacenters it seems
<XgF>
(also the AC gets really expensive)
<mcpherrin>
XgF: Yeah, that's a proper high-density buildout
<mcpherrin>
most discount colo space is usually 3kW or something tiny
<XgF>
Fortunately, the UK is a pretty cool place, so it turns out you don't need AC :P
<XgF>
I'm sure they exist, especially in CA. Would be a bit impractical to put the compute cluster we also use for interactive stuff on the other side of the world, though :-)
<mcpherrin>
(which is the only one that mentions density in the rack)
<XgF>
mcpherrin: London is a bit of a drive for when there is excrement - rotary cooling device interaction :-)
<mcpherrin>
10kW is dense enough you do need proper cooling, hot/cold separation, etc.
<mcpherrin>
can't just stick it in a closet and vent into the salespeople's offices next door