kgrz has quit [Read error: Connection reset by peer]
Plume has joined #rubygems
mr_ndrsn|mobile has joined #rubygems
Plume has quit [Client Quit]
benchMark has joined #rubygems
mr_ndrsn has quit [Ping timeout: 255 seconds]
mr_ndrsn|mobile is now known as mr_ndrsn
yerhot has joined #rubygems
huoxito has joined #rubygems
_maes_ has joined #rubygems
Sophism has joined #rubygems
Sophism is now known as Guest37491
aquaranto has left #rubygems [#rubygems]
imajes has quit [Excess Flood]
imajes has joined #rubygems
peregrine81 has joined #rubygems
cowboyd has joined #rubygems
Elhu has quit [Ping timeout: 252 seconds]
_br_ has quit [Excess Flood]
kgrz has joined #rubygems
_br_ has joined #rubygems
cowboyd has quit [Ping timeout: 240 seconds]
_br_ has quit [Excess Flood]
_br_ has joined #rubygems
eighthbit has joined #rubygems
_br_ has quit [Excess Flood]
_br_ has joined #rubygems
Guest37491 is now known as ckelly
cowboyd has joined #rubygems
Elhu has joined #rubygems
huoxito has quit [Ping timeout: 256 seconds]
imperator has quit [Quit: Leaving]
tbuehlmann has quit [Remote host closed the connection]
whit537 has quit [Quit: BLAM!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!]
whit537 has joined #rubygems
whit537 has quit [Client Quit]
pipework has joined #rubygems
<pipework>
I've got a project in git that has two gemspec files in its lib directory, and I need to use both gems from another project's Gemfile. I have a `git 'git@github.private.deployment' do` block, with the names of the two gems inside it as normal `gem 'forgery'` lines.
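For reference, a minimal sketch of the Gemfile block pipework is describing; the repository path and the second gem name are hypothetical placeholders, and Bundler's default gemspec glob also looks one directory deep, which should cover gemspecs under lib/:

```ruby
# Gemfile sketch - one git source block providing two gems from the same repo.
# The repo path and 'other_gem' are placeholders, not values from the log.
git 'git@github.private.deployment:team/shared-gems.git' do
  gem 'forgery'
  gem 'other_gem'
end
```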
imajes has quit [Excess Flood]
Hypn has quit [Remote host closed the connection]
imajes has joined #rubygems
imajes has quit [Excess Flood]
imajes has joined #rubygems
mephux has quit [Excess Flood]
mephux has joined #rubygems
bradland has quit [Read error: Connection reset by peer]
bradland has joined #rubygems
imajes has quit [Excess Flood]
stevenharman has joined #rubygems
mando has joined #rubygems
imajes has joined #rubygems
imajes has quit [Excess Flood]
eighthbit has quit [Read error: Connection reset by peer]
the_mentat has quit [Quit: Computer has gone to sleep.]
sn0wb1rd has quit [Ping timeout: 244 seconds]
sn0wb1rd_ is now known as sn0wb1rd
maetthew has quit [Quit: bye]
maetthew has joined #rubygems
workmad3 has joined #rubygems
imajes has quit [Excess Flood]
the_mentat has joined #rubygems
imajes has joined #rubygems
terceiro has quit [Quit: Ex-Chat]
whit537 has joined #rubygems
unsay has joined #rubygems
unsay has quit [Client Quit]
Elhu has joined #rubygems
baburdick has joined #rubygems
bradland_ has joined #rubygems
bjessbro_ has joined #rubygems
bjessbrown has quit [Read error: Connection reset by peer]
bradland has quit [Read error: Connection reset by peer]
bradland_ is now known as bradland
<raggi>
lmarburger: why not puma, with threads on?
<raggi>
lmarburger: seems like this isn't a cpu bound problem, it's a postgres bound problem, so you could probably get more out of each backend, even with the gvl
<raggi>
lmarburger: it is 1.9 right?
<raggi>
if it's oldschool 1.8, i can get you a 30% performance bump on thin, with a small bin wrapper
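For context, a sketch of the kind of "puma with threads on" setup raggi is suggesting; the thread count and port handling are assumptions, not taken from the conversation:

```ruby
# config/puma.rb - sketch for an IO/Postgres-bound app on MRI.
# Thread count is illustrative; the GVL is released during IO, so threads
# still overlap while waiting on Postgres.
port    ENV.fetch('PORT', 3000)
threads 8, 8   # min, max threads per process

# Note: the database connection pool (assuming ActiveRecord or similar)
# needs to be at least as large as the thread count.
```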
imajes has quit [Excess Flood]
imajes has joined #rubygems
the_mentat has quit [Quit: Computer has gone to sleep.]
rmartin has quit [Remote host closed the connection]
Elhu has quit [Quit: Computer has gone to sleep.]
cowboyd has quit [Remote host closed the connection]
<lmarburger>
raggi: hone had it running on puma but puma kept crashing
<lmarburger>
after like 10 minutes or so it would die hard
<lmarburger>
and once you get the new index format done this app won't be useful any more so i didn't care to waste time trying to debug threading :p
fromonesrc has quit [Quit: fromonesrc]
<lmarburger>
hell i even disabled preloading in unicorn because i didn't want to deal with that. just the simplest thing.
<raggi>
the latter is basically working, on the server side, and would be viable for bundler's use - generates the whole data set in 3 minutes
<raggi>
jruby actually clocks in at 1.5 minutes, if the IO conditions work out well
<raggi>
there seems to be some exponentially increasing File.open delay on jruby during some runs, i haven't looked into why that occurs or why it is intermittent (cc qmx|away)
buffaloburger has joined #rubygems
workmad3 has quit [Ping timeout: 252 seconds]
<lmarburger>
very interesting
roolo has joined #rubygems
roolo has quit [Client Quit]
roolo has joined #rubygems
BigFatFatty has joined #rubygems
markstarkman has quit [Remote host closed the connection]
twoism has quit [Remote host closed the connection]
mockra has quit [Remote host closed the connection]
tcopeland has quit [Read error: Operation timed out]
jcaudle has quit [Quit: jcaudle]
twoism has joined #rubygems
rduplain has joined #rubygems
pipework has quit [Remote host closed the connection]
<qrush>
lmarburger: oh awesome! it was thin's fault then?
<qrush>
geez
<qrush>
you guys should do a writeup about this
<lmarburger>
no it's not at all thin's fault
<lmarburger>
it's that with heroku's random router and the app only being configured to run one request at a time, a slow request could cause a bunch of other requests to pile up behind it
<lmarburger>
so we went from a single process app on 6 dynos to a 10 process app on 1 dyno
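A unicorn.rb matching that description could be this small; only the worker count comes from the conversation, the rest is assumed:

```ruby
# config/unicorn.rb - sketch of the "10 process app on 1 dyno" setup.
worker_processes 10
timeout 30            # assumed; the Heroku router gives up after 30 seconds anyway
preload_app false     # preloading was deliberately left off (see above)
```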
<raggi>
lmarburger: wait, they don't do accept backoff?
<raggi>
lmarburger: unicorn should have that problem too
<raggi>
lmarburger: you need to patch thin to not pre-accept
<lmarburger>
it does, except that it routes to its workers when they're free
<rduplain>
is there a way to persist the --config-file setting for gem when running setup.rb or similar?
<raggi>
lmarburger: which, yeah, might be kinda hard
whit537 has quit [Quit: Leaving]
<raggi>
lmarburger: it's possible though
<raggi>
lmarburger: in your unicorn setup, presumably you knocked the accept backlog to 1?
huoxito has quit [Ping timeout: 248 seconds]
<lmarburger>
raggi: i did basically no configuration
<raggi>
lmarburger: so yeah, you can still end up with up to 1023 connections waiting behind a slow request
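The knob being discussed is unicorn's listen backlog option; a hedged sketch (port handling and the value of 1 are illustrative):

```ruby
# config/unicorn.rb (continued) - cap the listen socket's accept backlog so
# the kernel won't queue ~1024 connections behind a busy worker.
listen ENV.fetch('PORT', '3000').to_i, backlog: 1
```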
<rduplain>
raggi: thanks for the pointer
huoxito has joined #rubygems
tenderlove has joined #rubygems
<lmarburger>
raggi: good to know. thanks
vertis has joined #rubygems
<lmarburger>
so i guess it's a twofold problem. the heroku router will, i think, just route requests to a process and give it 5 seconds to accept it. if it doesn't, it will try another one. rinse, repeat for 30 seconds.
stevenharman has quit [Quit: Leaving...]
<raggi>
lmarburger: right, so for appropriate balancing, you want to tune that such that the accept backlog and frequency result in a roughly fair distribution
yerhot_ has joined #rubygems
mephux has quit [Excess Flood]
mephux has joined #rubygems
<raggi>
lmarburger: right now, the OS will just accept 'everything' at the first unicorn the balancer selects (up until the 1024 socket threshold - give/take some kernel tuning stuff)
yerhot has quit [Ping timeout: 248 seconds]
<lmarburger>
yeah
<raggi>
i'd guess you're never hitting that threshold, even when under duress, as given the response rates, you probably start timing out unserviceable requests before that happens
<lmarburger>
fortunately for us this is only running on 1 dyno for now
<raggi>
unserviceable backlog skew is a very common misconfiguration
<raggi>
it's hard though, unless you're on very very low latency links, which is why reverse balancers are so much better at fair balancing
<raggi>
hard to achieve good throughput / fairness balances that is
<lmarburger>
even with the thin setup we had and assuming there was a way to tune thin to only accept up to N connections, it wouldn't have helped our issues
<lmarburger>
all the processes were locked
<raggi>
yeah, thin relies on EM, and we don't have tunables in EM for that
<raggi>
sadly
<lmarburger>
is it even something you'd want for an EM app?
<raggi>
oh for sure
<raggi>
i mean it depends what you're using it for
<lmarburger>
ah interesting. i have an EM app but all it does is wait for external http calls to return. it can accept a ton of those before it's a problem
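A sketch of the pattern lmarburger describes, where outbound HTTP calls don't block the reactor so many can be in flight at once; it assumes the em-http-request gem, since the real app isn't shown:

```ruby
require 'eventmachine'
require 'em-http-request'

EventMachine.run do
  pending = 3
  3.times do |i|
    # Each request is non-blocking; the reactor stays free to accept more work.
    req = EventMachine::HttpRequest.new("http://example.com/slow?i=#{i}").get
    done = proc { EventMachine.stop if (pending -= 1).zero? }
    req.callback { puts "request #{i}: #{req.response_header.status}"; done.call }
    req.errback  { puts "request #{i} failed"; done.call }
  end
end
```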
<raggi>
but being able to control the accept backlog is important in load balancing scenarios, regardless of what the app is doing or how it works internally
<raggi>
lmarburger: right, if you service all requests really really fast, then it doesn't matter, in fact, in the most aggressive of those cases, you want to tune the backlog up, not down
<raggi>
but in those cases, that's not generally a load balancing problem, it's an accept rate problem - those are rare IME
<raggi>
and you actually have a lot of early stage OS tuning to do then anyway
<lmarburger>
so question about the server accepting requests. is it able to reject the request outright if its queue is too full or does it just ignore it?
<raggi>
the OS does that naturally - that's what the accept backlog is about
<lmarburger>
what i'm curious about is if heroku's router will try to route to a process, have that process reject it, and if heroku will send it to another process before the 5 second timeout.
<raggi>
essentially it's a tuning param where the app is saying "don't let more than X outstanding connections hang around"
<raggi>
in practical terms, what this means is, (aside from other heuristics that make minor changes), the OS will not ACK incoming connection requests when that queue is full
<raggi>
so a smart load balancer will timeout after some reasonable threshold, and SYN a different backend instead
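In Ruby terms, the accept backlog is just the argument to #listen; a minimal illustration (the exact behaviour for overflowing connections is kernel-dependent):

```ruby
require 'socket'

server = TCPServer.new('127.0.0.1', 0)  # 0 = let the OS pick a free port
server.listen(1)                        # accept backlog of 1
# With one connection queued and unaccepted, further connection attempts go
# unanswered until the app calls server.accept again.
puts "listening on port #{server.addr[1]} with a backlog of 1"
```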
<lmarburger>
oh so it does have to rely on a timeout?
markstarkman has joined #rubygems
<lmarburger>
that's what i was hoping to avoid. then the issue becomes if you make a request to the app and happen to be routed to a locked process, that just added 5 seconds to your response time
<raggi>
lmarburger: depends on a few other things, but you can reject too
mockra has joined #rubygems
<raggi>
lmarburger: oh, that timeout isn't like a request timeout, all this is at the tcp level
<lmarburger>
right. the problem is the heroku router is opaque. i don't know exactly how it functions or how to play nicely with it.
havenwood has joined #rubygems
markstarkman has quit [Ping timeout: 248 seconds]
Guest21888 has joined #rubygems
Guest21888 has quit [Remote host closed the connection]
envygeeks_ has joined #rubygems
envygeeks_ has quit [Client Quit]
terceiro has joined #rubygems
<raggi>
yeah
<raggi>
some load balancers will reap backends when that happens
<raggi>
and even worse, i've heard paas stories where the paas actually responds by deprovisioning instances in such scenarios
<raggi>
but i honestly have no idea how heroku is in this area
buffaloburger has quit [Remote host closed the connection]
yerhot_ has quit [Remote host closed the connection]
tbuehlmann has joined #rubygems
vertis1 has joined #rubygems
vertis has quit [Ping timeout: 244 seconds]
ckelly has quit [Quit: Leaving...]
mando has quit [Remote host closed the connection]
vertis1 has quit [Ping timeout: 256 seconds]
cowboyd has quit [Remote host closed the connection]
tkramer has quit [Quit: Leaving]
baburdick1 has joined #rubygems
baburdick has quit [Read error: Connection reset by peer]