baburdick has quit [Read error: Connection reset by peer]
notnerb has joined #rubygems
sn0wb1rd has joined #rubygems
martinisoft has joined #rubygems
qmx|away is now known as qmx
dpickett has joined #rubygems
<dpickett>
hey not sure if it's been reported and I didn't know what the protocol was, but I get errors with the dependency api both locally and when deploying to heroku
<dpickett>
the status site doesn't seem to indicate an awareness of the issue
markstarkman has joined #rubygems
jesser has joined #rubygems
jesser has quit [Read error: Connection reset by peer]
jesser has joined #rubygems
markstarkman has quit [Ping timeout: 272 seconds]
<dpickett>
I filed an issue to get clarity on what we should do in these circumstances
<dpickett>
hey indirect: per your comment on the GH issue I filed - not trying to be a pest, but seems like it was down for a few hours without an update to the status site
<qrush>
indirect: no idea
<qrush>
thats all i know how to do on heroku ;)
<dpickett>
just interested in helping out - discovered the issue and didn't really know what to do
<qrush>
indirect: sounds like that 100 limit is too low
<qrush>
and bundler is not handling it well
<dwradcliffe>
qrush: bundler handled it ok, dropped back to the full index
<dwradcliffe>
(for me)
<indirect>
qrush: what does "not handling it well" mean?
<indirect>
it says "request too large, fetching full index..."
<indirect>
and then proceeds to install successfully
<lmarburger>
qrush: so you're saying that people with > 100 gems see an error telling them that the API is down so they rightly assume that's a problem?
<qrush>
yes
<qrush>
they assume rubygems.org and bundler are broken
<qrush>
^ unless i'm reading those tweets wrong
<dwradcliffe>
qrush: some of those might just be yanked gems
<lmarburger>
i just did a ton of research on the logs and i don't think that > 100 gem bundles is the issue
qmx is now known as qmx|away
<indirect>
maybe I need to write a blog post about this or something
<indirect>
qrush: so there are four entirely separate issues going on simultaneously :(
<indirect>
1. some s3 production gemspecs aren't available in some areas right now
<lmarburger>
it'd be nice if there were a status code that bundler understood as "please try the full index" rather than interpreting as "things are horribly broken"
<indirect>
2. some requests to bundler-api are getting 413s because they have too many gems for us to handle well
<qrush>
indirect: FWIW i'm working with thoughtbot to get the rubygems blog redesigned so we can have more of a public place to talk about this
<indirect>
3. some requests to bundler-api are getting 500s because of intermittent load issues
<indirect>
4. some people think that bundler-api means something it doesn't
<lmarburger>
i remember a while back we had issues with the bundler api and shut it down to force people to get the full index. everyone thought it was broken even though it installed things fine.
<indirect>
qrush: that would be great
<indirect>
lmarburger: yeah, exactly. wat :(
<indirect>
OH and 5. bundler-api is currently returning some yanked gems
<qrush>
guhhhh :(
<lmarburger>
dpickett: is everything working for you now?
<indirect>
yeah :(
<dpickett>
yessir
<lmarburger>
ok great.
<indirect>
qrush: if people actually can't bundle install, something other than the api is broken. because install doesn't require the api
<lmarburger>
dwradcliffe: is everything working for you too?
<qrush>
i think a full writeup is warranted
<indirect>
qrush: if people are having missing gemspec issues, that's a production rubygems problem that I don't even know how to diagnose :/
<qrush>
if you want to write about it
<indirect>
sure
<qrush>
we can repost to blog.rubygems.org whenever the redesign is done
<dwradcliffe>
lmarburger: yep. I only ever saw the >100 issue
<qrush>
we should point people to it
<lmarburger>
bundler api needs an endpoint on rubygems.org it can ping every minute or few minutes to get a list of gems that have changed (pushed, yanked, unyanked, etc.)
<lmarburger>
dwradcliffe: ok awesome
<indirect>
lmarburger: since time T, preferably
<lmarburger>
right
<qrush>
lmarburger: i remember we talked about this forever ago
<qrush>
woo time :(
<lmarburger>
the conversation with raggi the other day got me thinking about this
<indirect>
qrush: the yanked gems issue is because the hook evan wrote is failing some unknown % of the time :(
<indirect>
I'm trying to get the manual synchronizer running, but that's failing out due to the s3 gemspec 404 issue
<lmarburger>
indirect: well hooks in general are failing
<indirect>
even though that gemspec is a totally fine 200 for my laptop :(
<qrush>
well S3 is eventually consistent
<qrush>
soooo
<indirect>
lmarburger: the bundler hooks are totally separate from the "hooks" though
<qrush>
:(
markstarkman has joined #rubygems
<indirect>
according to this job, there appear to be at least 800 yanked gems that bundler-api incorrectly thinks are not yanked :(
<qrush>
awesome
<lmarburger>
i mean we have gem pushes that aren't showing up in the api logs. there are a bunch of things that could cause that failure so not sure where the problem lies
<indirect>
lmarburger: yeah, HTTP requests are not a reliable way to communicate state changes :(
<qrush>
you guys have push to both repos
<qrush>
if you need to fix shit
<indirect>
qrush: working on it!
<qrush>
i'm not sure how the integration works out
<qrush>
but!
<indirect>
so much shit has come up in the last couple of weeks :(
<indirect>
sigh
<indirect>
totally not expecting you to supply anything
<qrush>
it's worth considering if bringing bundler-api's code into the rails app would help
<qrush>
instead of writing a lot of complicated syncing code
<qrush>
unless that's worth it. i'm not sure
<lmarburger>
honestly i think this would be less complicated than the current sync code
<qrush>
lmarburger: this being?
<qrush>
because now on AWS we have the capability to spin up a shitload of instances if we need them
<qrush>
and i dont feel like we should be afraid of that given the ops attention we have
<lmarburger>
this being the sync endpoint i mentioned above. the api just pings the endpoint every N minutes instead of relying on webhooks.
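A minimal sketch of that polling approach, assuming a hypothetical /api/v1/activity endpoint on rubygems.org and a made-up JSON response shape; nothing here is an existing rubygems.org or bundler-api API:

    # Hypothetical poller: ask rubygems.org for everything that changed
    # since the last sync, apply it locally, repeat every minute.
    require 'net/http'
    require 'json'
    require 'time'
    require 'uri'

    class GemChangePoller
      ENDPOINT = URI("https://rubygems.org/api/v1/activity") # hypothetical

      def initialize(interval = 60)
        @interval  = interval
        @last_sync = Time.now.utc
      end

      def run
        loop do
          started = Time.now.utc
          fetch_changes(@last_sync).each { |change| apply(change) }
          @last_sync = started
          sleep @interval
        end
      end

      def fetch_changes(since)
        uri = ENDPOINT.dup
        uri.query = URI.encode_www_form(since: since.iso8601)
        JSON.parse(Net::HTTP.get(uri))
      end

      def apply(change)
        # e.g. insert, yank, or unyank the version row in the local database
      end
    end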
benchMark has quit []
markstarkman has quit [Ping timeout: 256 seconds]
<qrush>
indirect: i think you should manually remove yanked ones
<qrush>
if you need me to give you a list i can do that
<qrush>
actually, fuck, i can't
<qrush>
i dont have ssh or any cert shit setup.
<qrush>
fuck i need to get setup :(
<indirect>
no, it's okay
<indirect>
looks like eventually consistent was eventually consistent
<indirect>
phew
<qrush>
./play clowntown
mr_ndrsn_ has joined #rubygems
mr_ndrsn_ has quit [Client Quit]
<dwradcliffe>
newrelic_rpm 3.5.6.46
<dwradcliffe>
net-ssh-multi 1.1.1
<dwradcliffe>
net-ssh-multi 1.1.2
<indirect>
VICTORY
<indirect>
# of gem versions added: 6
<indirect>
# of gem versions yanked: 889
<dwradcliffe>
indirect: nice!
onemanjujitsu has quit [Quit: onemanjujitsu]
tenderlove has quit [Remote host closed the connection]
<qrush>
yayaya
<qrush>
I'll tweet about that
<qrush>
i'm out for now
yut148 has joined #rubygems
huoxito has joined #rubygems
_maes_ has joined #rubygems
mockra has quit [Remote host closed the connection]
caleb_io has joined #rubygems
markstarkman has joined #rubygems
onemanjujitsu has joined #rubygems
eighthbit has quit [Quit: Bye.]
markstarkman has quit [Ping timeout: 272 seconds]
drbrain has quit [Ping timeout: 252 seconds]
the_mentat has quit [Quit: Computer has gone to sleep.]
drbrain has joined #rubygems
onemanjujitsu has quit [Quit: onemanjujitsu]
anon4224124 has joined #rubygems
onemanjujitsu has joined #rubygems
crandquist has quit [Read error: Connection reset by peer]
crandquist has joined #rubygems
caleb_io has quit [Quit: caleb_io]
onemanjujitsu has quit [Quit: onemanjujitsu]
onemanjujitsu has joined #rubygems
tenderlove has joined #rubygems
tenderlove has quit [Remote host closed the connection]
<vertis>
dwradcliffe: not sure that cleared up much for me
<vertis>
except that I need to make sure that qrush has what he needs to deploy
<revans>
I believe the failure is happening because gems were yanked when rubygems was "hacked". So bundler fails because the gem version your Gemfile is locked to no longer exists on rubygems.
<qrush>
vertis: yes it now blocks large requests of gems
<qrush>
indirect: you still here? :)
<vertis>
lmarburger added an explicit "too many gems" exception
<vertis>
020c4f6f211df521e469042e8102740dd2bb92b9
crandquist has quit [Quit: Bye!]
<vertis>
qrush do you have the ability to deploy the bundler api?
<qrush>
i do
<vertis>
Can I revert those changes
<vertis>
or is there a specific reason they were added?
<qrush>
i dont think so
<qrush>
large payloads were causing the service to be impacted
<qrush>
bundler still uses the full source index, it's just slower
<vertis>
VERY slow
<vertis>
Bundler does this when it has more than one source
<qrush>
right...the slow index :(
<vertis>
I mean 10 mins plus
<qrush>
i dont think the 100 limit is realistic and it needs to be bumped
<qrush>
but at the same time it's 11PM and i dont want to babysit it tonight :(
<vertis>
I can babysit it
<qrush>
i'm not sure why they chose 100 but clearly it's affecting a lot of people
<vertis>
Let me just get you a pull request that has a larger number
<vertis>
qrush: so the preference is to bump the number? 200?
<qrush>
i am not comfortable with deploying that change without hone, indirect, or lmarburger present
<qrush>
i dont know the bundler-api codebase, how to tune it, etc
<vertis>
Are they contactable?
<qrush>
my phone is dead, i have hone's number but he hasnt responded tonight so far. i dont have the others' phones
<qrush>
tweeted
<indirect>
qrush: just got home
<indirect>
hone is in south africa
<indirect>
pretty sure calling him won't work
<indirect>
:)
<indirect>
uh
<vertis>
indirect: can I make a pull request that bumps that number?
<indirect>
vertis: won't help you, isn't your problem
<vertis>
Sure it is
<vertis>
My bundle update is requesting 122 gems
<indirect>
yeah
<vertis>
there is a 100 limit that has just been installed
<indirect>
…and?
<indirect>
no
<indirect>
ugh
<indirect>
sorry
<indirect>
not you
<indirect>
I clearly need to do something to make this not crazy, though
<vertis>
indirect: I'm confused
<indirect>
vertis: the bundler dependency API functions as a layer on top of rubygems.org
<vertis>
sure
<indirect>
IF the api is up, IF you have bundler > 1.1, and IF you have < 100 gems in your gemfile, the API will allow bundler to fetch gem metadata in less time
<vertis>
Prevents needing to fetch the full source index
<vertis>
which is good
<indirect>
if any of those conditions are not true, bundler will download the full source index
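Not Bundler's actual code, just a sketch of the decision being described here, with made-up method names:

    # Use the dependency API only when it is available, the client is new
    # enough, and the gem list is small; otherwise (or on failure) fall
    # back to downloading the full source index. Both paths feed the same
    # resolver afterwards.
    def fetch_specs(gem_names)
      if api_available? && bundler_version >= Gem::Version.new("1.1") && gem_names.size < 100
        fetch_dependency_api(gem_names)   # small, fast metadata request
      else
        fetch_full_source_index           # slower, but always works
      end
    rescue DependencyApiError
      fetch_full_source_index
    end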
<indirect>
the thing is
<indirect>
either way
<indirect>
the exact same bundler code runs
<vertis>
No
<indirect>
once the fetch is done?
<indirect>
yeah
<indirect>
it is
<indirect>
I just refactored it
<vertis>
I mean I haven't got 100 gems in my gemfile
<indirect>
bundler doesn't know the URLs for those gemspecs
<qrush>
he said it took 10 minutes for that
tenderlove has quit [Remote host closed the connection]
<indirect>
that place? where it hung until he quit it?
<indirect>
that's the resolver
<indirect>
…adding more logging right now
<vertis>
That's it fetching the full index
<indirect>
as soon as bundler says "Fetching full source index"?
<vertis>
I can post the verbose log if you'd like
<indirect>
the first 10 seconds is the full source index
<indirect>
everything after that is the resolver
<indirect>
vertis: run `DEBUG_RESOLVER=1 bundle update`
<indirect>
you can watch the resolver run for ten minutes
<indirect>
I promise you it will not hang at the source index line
<vertis>
indirect: what should I be seeing
<vertis>
cos it's the same as before
<vertis>
Hmmm
<vertis>
there we go
<indirect>
vertis: I can't reproduce this for you, because of your internal repo
<indirect>
but I do know bundler fairly well
<indirect>
and ten minutes is not the source index fetch
<vertis>
Seems like it's the internal repo causing this
<vertis>
but I don't understand why
<indirect>
so… not my code change
<indirect>
qrush: it's really not broken, I promise :(
<indirect>
I am working on fixing the messaging
<indirect>
sorry you had to deal with it
<vertis>
indirect: Fascinating
<vertis>
indirect: thank you
<qrush>
i think a bundler point release is worth it to have a better error message
<qrush>
that doesnt blame the service
<indirect>
qrush: agreed
<teancom_>
let it be resolved, that it is impossible to use the word 'fascinating' without sounding *super* sarcastic.
<vertis>
teancom_: :(
<teancom_>
vertis: :-P
<vertis>
indirect: I apologise
<teancom_>
vertis: I know you didn't mean it that way, but *man*, I can't read it any other way in my head...
<qrush>
lol teancom_
<vertis>
indirect: I don't understand why it's trying to resolve all 122 gems from the internal repo
<vertis>
to begin with
<indirect>
vertis: what do you mean "resolve"?
<indirect>
it's trying to know their names, version numbers, and dependencies
<indirect>
so that they can _potentially_ be resolved
<indirect>
it's not trying to resolve them
<vertis>
sure
onemanjujitsu has joined #rubygems
onemanjujitsu has quit [Remote host closed the connection]
<vertis>
But it knows THEIR version from downloading the details from my repo
<indirect>
yes?
<indirect>
it doesn't know any of their dependencies' names
<indirect>
rubygems doesn't provide that
<indirect>
the only way to get a list of all the dependency names would be to download every gem one at a time
<indirect>
because that would take a million years
<indirect>
it just proactively fetches ALL the names and versions from rubygems.org
<indirect>
less time is spent
<indirect>
everyone wins
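For contrast, a rough illustration of what the dependency API on rubygems.org hands back in a single request (names, versions, and dependency names together), which is exactly what a plain gem server's index can't provide. The URL and the response shape (a Marshal dump of an array of hashes) are assumptions here, not a spec:

    require 'net/http'
    require 'uri'

    # One round trip for the metadata of several gems at once.
    uri = URI("https://rubygems.org/api/v1/dependencies?gems=rack,rake")
    specs = Marshal.load(Net::HTTP.get(uri))
    specs.each do |spec|
      puts "#{spec[:name]} #{spec[:number]} (#{spec[:platform]})"
    end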
<vertis>
It's still running FYI
<indirect>
yeah
<indirect>
dependency graph resolution is an NP-complete problem
<indirect>
your gemfile leaves the problem space 100% open
<indirect>
in fact, you explicitly say in your gemfile that you will accept literally any version of every gem
<indirect>
which means bundler has to try every single possible combination until it finds one that works
kseifried has joined #rubygems
kseifried has quit [Changing host]
kseifried has joined #rubygems
<vertis>
indirect: hmmm, good tip
<vertis>
Let me lock some versions and see how that goes
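The kind of locking vertis is about to try, as a small example; gem names and versions are arbitrary:

    # Pessimistic constraints shrink the search space the resolver has to
    # explore, compared with listing every gem with no version at all.
    source "https://rubygems.org"

    gem "rails",    "~> 3.2.11"
    gem "pg",       "~> 0.14.1"
    gem "unicorn",  "~> 4.5"
    gem "nokogiri", "~> 1.5.6"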
<teancom_>
<- *not* a developer, so let's start there. But wouldn't a reasonable approach to this be "try first with the latest version of every gem listed, and only try more if that doesn't work for some reason"? That *should* work out most of the time, right? Or is this a problem because that *doesn't* work (in this case)?
<indirect>
oh god
<teancom_>
Feel free to tell me to stfu :-)
<indirect>
sorry, I don't really have time to discuss dependency graph resolution algorithms :)
<teancom_>
(I repeat - not a developer)
<indirect>
the tl;dr is "no, if that worked, it would have been done in less than half a second" :)
<teancom_>
ah, ok.
<Defiler>
Low-memory machines perform better doing excessive network traffic but avoiding a full index load, or at least they used to
<Defiler>
As an aside
<indirect>
Defiler: yes, that is true… depending on how many gems you have to fetch
<vertis>
indirect: thank you for your help
<Defiler>
I have yet to see a practical situation where the new API style isn't faster
<Defiler>
though I'll admit I expected to
<indirect>
Defiler: bundler internally fails over in that case
<indirect>
we ballparked the threshold
<Defiler>
at work we're running this horrible hacked up copy of gemcutter that I intend to nuke from orbit ASAP
<indirect>
but it's been pretty solid
<indirect>
vertis: np
<indirect>
vertis: if you're desperate for faster resolves, build yourself a gem from bundler master, it has some pretty impressive improvements for tricky cases
<indirect>
I'll be releasing it soon
<vertis>
indirect: will do
ezkl has quit [Quit: out!]
ezkl has joined #rubygems
jstr has quit [Quit: Computer has gone to sleep.]
cowboyd has quit [Remote host closed the connection]
caleb_io has quit [Quit: caleb_io]
<vertis>
indirect: bundler really needs to print a line after the "fetching source index" message
<indirect>
vertis: yup, adding it
caleb_io has joined #rubygems
TrevorBramble_ has left #rubygems [#rubygems]
vertis has quit [Ping timeout: 245 seconds]
cowboyd has joined #rubygems
imperator has joined #rubygems
<hone>
qrush: pong?
hahuang65 has quit [Quit: Computer has gone to sleep.]
vertis has joined #rubygems
jesser has quit [Quit: jesser]
<hone>
sorry, just got up. it was 3am when you pinged me :(
the_mentat has quit [Quit: Computer has gone to sleep.]
huoxito has quit [Quit: Leaving]
havenwood has quit [Remote host closed the connection]
markstarkman has joined #rubygems
havenwood has joined #rubygems
havenwood has quit [Ping timeout: 252 seconds]
caleb_io has quit [Quit: caleb_io]
markstarkman has quit [Ping timeout: 272 seconds]
havenwood has joined #rubygems
kgrz has joined #rubygems
jstr has joined #rubygems
notnerb has quit [Quit: Leaving.]
hahuang65 has joined #rubygems
cowboyd has quit [Remote host closed the connection]
hahuang65 has quit [Quit: Computer has gone to sleep.]
jesser has joined #rubygems
<indirect>
bleh
<indirect>
looks like rg.org is just never telling bundler ever at all when gems get yanked
<indirect>
:(
vertis has quit [Quit: Leaving.]
baburdick1 has quit [Quit: Leaving.]
jesser has quit [Ping timeout: 276 seconds]
jstr has quit [Quit: Computer has gone to sleep.]
caleb_io has joined #rubygems
Elhu has joined #rubygems
hangingclowns has joined #rubygems
<hangingclowns>
is rubygems down right now?
<hangingclowns>
can't seem to use bundle install, just hangs
<hangingclowns>
or even bundle update
<hangingclowns>
checked status, but it says it's up
<ezkl>
hangingclowns: Works for me.
teancom has joined #rubygems
lsegal has quit [Quit: Quit: Quit: Quit: Stack Overflow.]
markstarkman has joined #rubygems
<hangingclowns>
hmm, mine is just hanging
<hangingclowns>
trying to update rails to the latest and it just hangs
workmad3 has joined #rubygems
<hangingclowns>
first time I tried updating since about last week
<hangingclowns>
i've tried restarting it many times
<hangingclowns>
how long am I supposed to wait? I'm running it in verbose mode, too, and nothing is coming back
ddv has joined #rubygems
<hangingclowns>
I had it working for a sec, and it was trying to download specs for a windows gem, but I'm on a mac? super crazy
<hangingclowns>
for postgres gem on windows
<hangingclowns>
i saw on their rubygems status page about 5 hours ago they had downtime, so i'm not sure if it propagated to all their servers?
vertis has joined #rubygems
markstarkman has quit [Ping timeout: 272 seconds]
<hangingclowns>
restarted it again without verbose and it only has 1 dot at the end, i think it usually goes up to like 4 or 5, so it's like freezing
<hangingclowns>
i guess maybe leave it for some time?
<drbrain>
hangingclowns: rubygems doesn't print dots, so maybe you want #bundler?
<hangingclowns>
yes, sorry
<hangingclowns>
i apologize
<hangingclowns>
i also had an issue trying to update my bundler gem to the pre, so strange
<hangingclowns>
let me try again
<hangingclowns>
trying to update my bundler to the pre, and used the verbose flag but after about 20 seconds, nothing is coming back, odd
<hangingclowns>
hmm, at least a minute into both gem install bundler and bundle update, and neither has any reaction
Elhu has quit [Quit: Computer has gone to sleep.]
vertis has quit [Ping timeout: 256 seconds]
jesser has joined #rubygems
baburdick has joined #rubygems
mockra has quit [Remote host closed the connection]
<hangingclowns>
hmm, had to set an http_proxy environment variable to get it to kind of work, still fails out, though with: Network error while fetching
<hangingclowns>
for bundler
jstr has joined #rubygems
jstr has quit [Client Quit]
roolo has quit [Quit: Leaving...]
havenwood has quit [Remote host closed the connection]
<hangingclowns>
anyone else keep getting the httperror for bundler?
bhaak has quit [Read error: Operation timed out]
bhaak has joined #rubygems
havenwood has joined #rubygems
havenwood has quit [Ping timeout: 240 seconds]
bradland has quit [Read error: Connection reset by peer]
bradland has joined #rubygems
stevenharman has quit [Quit: Leaving...]
hangingclowns has left #rubygems [#rubygems]
baphled has quit [Ping timeout: 252 seconds]
stevenharman has joined #rubygems
Elhu has quit [Quit: Computer has gone to sleep.]
havenwood has joined #rubygems
havenwood has quit [Ping timeout: 240 seconds]
<lmarburger>
qrush: ugh sorry about that. the problem should never be related to the maximum threshold. indirect said he was going to treat 413s as non-errors in the next bundler.
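A sketch of what "treat 413s as non-errors" could look like on the client side; this is not the actual Bundler patch, and the helper names are invented:

    # A 413 from the dependency API is a signal to fall back to the full
    # index quietly, not an error worth surfacing to the user.
    def dependency_api_specs(gem_names)
      response = fetch_dependency_api(gem_names) # hypothetical helper
      case response.code.to_i
      when 200
        Marshal.load(response.body)
      when 413
        Bundler.ui.debug "Query too large, fetching the full index instead"
        nil # caller drops back to the full source index
      else
        raise Bundler::HTTPError, "bad response #{response.code}"
      end
    end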
tcopeland has quit [Quit: Leaving.]
markstarkman has joined #rubygems
<lmarburger>
and the problem with the api that led to it being arbitrarily limited to < 100 gems is your standard long-running db queries: web processes wait forever for a response and new requests can't be handled.
<lmarburger>
if it's pegged at 6, that's indicative of the backup problem. it typically runs at 1-2
<lmarburger>
also if index scans on the postgres graph spike that's probably a bad sign
gaustin has joined #rubygems
boffbows1 is now known as boffbowsh
markstarkman has joined #rubygems
terceiro has joined #rubygems
baphled has joined #rubygems
fromonesrc has joined #rubygems
x0F_ has joined #rubygems
x0F has quit [Disconnected by services]
x0F_ is now known as x0F
Elhu has joined #rubygems
yerhot has joined #rubygems
havenwood has joined #rubygems
havenwood has quit [Ping timeout: 256 seconds]
xerxas has quit [Ping timeout: 264 seconds]
kaichanvong has quit [Ping timeout: 264 seconds]
JSharp has quit [Ping timeout: 264 seconds]
patricksroberts_ has quit [Ping timeout: 264 seconds]
darix has quit [Ping timeout: 264 seconds]
cschneid has quit [Ping timeout: 264 seconds]
baphled has quit [Ping timeout: 252 seconds]
kgrz has quit [Quit: Computer has gone to sleep.]
samkottler has quit [Ping timeout: 264 seconds]
darix has joined #rubygems
darix has quit [Changing host]
darix has joined #rubygems
samkottler has joined #rubygems
cschneid has joined #rubygems
tkramer has joined #rubygems
notnerb has joined #rubygems
jcaudle has joined #rubygems
teancom has quit [Remote host closed the connection]
samkottler has quit [Changing host]
samkottler has joined #rubygems
cowboyd has joined #rubygems
anon4224124 has quit [Quit: Computer has gone to sleep.]
Plume has joined #rubygems
rmartin has joined #rubygems
gaustin has quit [Quit: gaustin]
tcopeland has joined #rubygems
benchMark has joined #rubygems
peregrine81 has joined #rubygems
the_mentat has joined #rubygems
havenwood has joined #rubygems
havenwood has quit [Ping timeout: 260 seconds]
aquaranto has joined #rubygems
aquaranto has left #rubygems [#rubygems]
teancom has joined #rubygems
Plume has quit [Ping timeout: 272 seconds]
Plume has joined #rubygems
Guest1406 has quit [Quit: Leaving...]
tubbo has left #rubygems [#rubygems]
workmad3 has quit [Read error: Connection reset by peer]
bfleischer has joined #rubygems
workmad3 has joined #rubygems
<qrush>
lmarburger: there are still some tweets today about "is bundler/rubygems down?"
Sophism has joined #rubygems
Sophism is now known as Guest40919
havenwood has joined #rubygems
hangingclowns has joined #rubygems
<hangingclowns>
is there a problem with rubygem certificates?
eighthbit has joined #rubygems
kaichanvong has joined #rubygems
patricksroberts_ has joined #rubygems
JSharp has joined #rubygems
xerxas has joined #rubygems
_maes_ has quit [Ping timeout: 272 seconds]
peregrine81 has quit [Quit: Computer sleeping.]
peregrine81 has joined #rubygems
havenwood has quit [Remote host closed the connection]
peregrine81 has quit [Quit: Computer sleeping.]
peregrine81 has joined #rubygems
nateberkopec has joined #rubygems
peregrine81 has quit [Quit: Computer sleeping.]
peregrine81 has joined #rubygems
qmx is now known as qmx|lunch
<lmarburger>
qrush: we can remove the limit so people don't think it's an error
<qrush>
lmarburger: indirect did not seem to like that idea last night
<lmarburger>
well larger gemsets = theoretically longer response times
<lmarburger>
the real problem is i don't know how to solve this problem without using a ton of threads or eventmachine
<hangingclowns>
what happened to the api for bundler? why does it always error out?
<lmarburger>
it intentionally errors if your bundle has >= 100 gems
<hangingclowns>
oh, that's kind of annoying I think?
<hangingclowns>
i'm not even sure if I have 70 actually
<lmarburger>
it's only annoying because the message bundler (the gem) prints makes it sound that way
<lmarburger>
if the api isn't accessible, it downloads the full index and continues on. it's only the nasty error message that makes it seem like a bad thing.
<hangingclowns>
you know if rubygems is still blocked in china? all of a sudden doing a bundle install has been hell
<qrush>
hangingclowns: are you getting a Bundler::HTTPError ?
<indirect>
lmarburger: hangingclowns: qrush: I'm about to release 1.2.4
<hangingclowns>
yes, i was
<hangingclowns>
for bundler?
<indirect>
yes
<lmarburger>
qrush: i don't know how to answer that question because i'm not sure what the question is
<indirect>
it will fall back on the bigger index much more politely
peregrine81 has quit [Quit: Computer sleeping.]
<lmarburger>
like i said, we can remove that limit
havenwood has joined #rubygems
<hangingclowns>
whatever it is, it was like all of a sudden a pain in the ass to install these days
<hangingclowns>
if it's GFW, rubygems, or bundler, I really just want to know
<hangingclowns>
drove me bonkers, I've been at it for like over 5 hours all day today
<hangingclowns>
probably between 8-10
<lmarburger>
it's intentionally returning a 413 and i think that's the most appropriate and descriptive message. there are too many gems for the api to fetch for you.
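On the server side, a cap like that could be as simple as the following Sinatra-style sketch; the route, parameter name, and lookup helper are assumptions rather than the real bundler-api code:

    require 'sinatra'

    MAX_GEMS_PER_REQUEST = 100

    get '/api/v1/dependencies' do
      names = params[:gems].to_s.split(',')
      if names.size > MAX_GEMS_PER_REQUEST
        # 413 Request Entity Too Large: the client asked about too many gems
        halt 413, "Too many gems requested (#{names.size} > #{MAX_GEMS_PER_REQUEST})"
      end
      content_type 'application/octet-stream'
      Marshal.dump(dependencies_for(names)) # hypothetical db lookup helper
    end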
peregrine81 has joined #rubygems
<hangingclowns>
was updating gems and trying to make my proj work for 2.0.0, but couldn't do it
<hangingclowns>
okay, how can I be sure of that? is there a bundler method count?
<hangingclowns>
actually mine was just hanging most of the time?
<hangingclowns>
oddly hanging
<hangingclowns>
and it seemed bundler wouldn't do anything if I turned on the --verbose flag
<hangingclowns>
if I had the flag off, it would still hang, but eventually work
mephux has quit [Excess Flood]
mikeycgto has joined #rubygems
<indirect>
hangingclowns: likely a resolver issue? try DEBUG_RESOLVER=1 bundle install
mikeycgto has quit [Read error: Connection reset by peer]
<hangingclowns>
i just installed all out of date gems by hand, somehow gem install would work only sometimes
<hangingclowns>
and it seems the gems were temporarily moved? or is it always like that? I saw that when I turned on the verbose flag for gem install
mephux has joined #rubygems
<hangingclowns>
was also getting a crazy error for 1.9.3-p385 where it would reject my certificates?
<hangingclowns>
`connect': SSL_connect SYSCALL returned=5 errno=0 state=SSLv3 read server session ticket A (OpenSSL::SSL::SSLError)
<hangingclowns>
i'm reinstalling 1.9.3 as we speak with all of the homebrew opt folders, i just figured maybe my openssl was too old inside of mountain lion?
<hangingclowns>
indirect: how much longer till it's released?
<hangingclowns>
BTW, openid server is down?
mockra has joined #rubygems
mockra has quit [Remote host closed the connection]
<qrush>
openid?
<hangingclowns>
yeah, tried to register to the help site and it said the openid server is down? no idea what that means
<indirect>
hangingclowns: homebrew openssl won't be able to verify certs
<indirect>
you have to either switch from https to http or import some CA certs
<indirect>
or stop using homebrew openssl
<indirect>
the OS X built-in openssl can verify certs just fine
<hangingclowns>
but it's an old openssl
<hangingclowns>
OpenSSL 0.9.8r 8 Feb 2011
<hangingclowns>
let me try now
<hangingclowns>
it just got done compiling, actually
<hangingclowns>
Error Bundler::HTTPError during request to dependency API
<hangingclowns>
got that again, but i guess I heard that's normal since the migration
<lmarburger>
hangingclowns: you can safely ignore that error
<hangingclowns>
yes, so I was told. I hope 1.2.4 will fix this, seems this full index is kind of a pain
<lmarburger>
in the next version of bundler (waiting for travis-ci to give it the green light), the api response will be handled more gracefully.
<hangingclowns>
please
<hangingclowns>
okay, seems good!
<lmarburger>
well it won't change the behavior of downloading the full index, it just won't print a nasty exception to standard out
<hangingclowns>
actually, i compiled without the homebrew ssl, and it gave me an SSL error, so i recompiled with everything from homebrew and looks like it's FINALLY working
<hangingclowns>
still crossing my fingers
<lmarburger>
is it installing gems?
<hangingclowns>
ahh, so there's no smart way to get around this 100 gem limit?
<hangingclowns>
yes, it's finally installing gems, before it wouldn't install at all
<hangingclowns>
literally spent the entire day on this problem
qmx|lunch is now known as qmx
ckrailo has joined #rubygems
_maes_ has joined #rubygems
<lmarburger>
hangingclowns: sorry about that :\
<hangingclowns>
it's cool, i think, like I said something is up with my side
<lmarburger>
well the way around the 100 gem limit is to use the full index. that threshold could change, of course.
<qrush>
:(
<lmarburger>
in theory it should be quicker to get the full index in one request than making a dozen large requests to the api.
<hangingclowns>
so i guess bundler will change how it works now and just always get the index?
<lmarburger>
it won't change how it works under the hood. just the messaging.
<hangingclowns>
hmm, how big is the full index? isn't there a fast way to cache everything?
<hangingclowns>
because now bundler just seems painfully slower
Boxcar21 has quit [Quit: Leaving...]
<lmarburger>
that's a good question. indirect would know if bundler does http caching of the index. i'm not sure.
yerhot has quit [Remote host closed the connection]
<hangingclowns>
well i guess ideally there should be like a recent checksum or something and maybe some way to check against what the recent gems are, but i guess there are like hundreds of thousands of gems, right? so could be a pain to keep up with that
<hangingclowns>
hmm
havenwood has quit [Remote host closed the connection]
<hangingclowns>
DEBUG_RESOLVER=1 doesn't seem to show anything better for verbose mode
qmx is now known as qmx|away
<hangingclowns>
or wait, sorry, spoke too soon
sn0wb1rd has quit [Quit: sn0wb1rd]
tbuehlmann has quit [Remote host closed the connection]
<hangingclowns>
hmm, still having an issue with 1.9.3 and bundle install
<hangingclowns>
seems just once in a while a get a cert error
sferik has joined #rubygems
<raggi>
lmarburger: it's not quicker to get the full index, sure on the http front maybe
sn0wb1rd has joined #rubygems
<raggi>
lmarburger: but optimizing http to be a full rate stream is trivial
<raggi>
lmarburger: (getting 100mbps out of net/http is not rocket science)
<raggi>
lmarburger: however, the full index Marshal expansion always takes a good amount of time on the client side
<lmarburger>
raggi: good to know
<raggi>
when bundler pauses for 30-60s on a full index expansion, that's ruby cpu time
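What that expansion amounts to, roughly: download specs.4.8.gz and Marshal.load the whole thing. The download streams quickly; loading hundreds of thousands of entries is the client-side CPU time being described. The format is assumed here to be the classic gzipped Marshal array of [name, version, platform] rows:

    require 'net/http'
    require 'uri'
    require 'zlib'
    require 'stringio'
    require 'benchmark'

    gz = Net::HTTP.get(URI("https://rubygems.org/specs.4.8.gz"))
    specs = nil
    seconds = Benchmark.realtime do
      # this Marshal.load is where the long client-side pause goes
      specs = Marshal.load(Zlib::GzipReader.new(StringIO.new(gz)).read)
    end
    puts "expanded #{specs.size} spec entries in #{seconds.round(1)}s"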
<lmarburger>
ideally the 413s will be temporary until the app can handle longer requests.
<raggi>
ideally, the app won't be necessary for too much longer
<lmarburger>
maybe i should ask indirect to print a message in verbose mode to indicate when the full index download is complete and it starts handling the response
<lmarburger>
roll it back into rubygems?
<raggi>
lmarburger: i want to add an index format that is able to be written to disk, and good enough for bundler to consume at a fast rate
<raggi>
also that doesn't blow the RG server or index command up regularly
<lmarburger>
well if that's the case, i'm not going to pour any more time into this
<raggi>
basic scaling problems
<raggi>
well, indirect is basically in control of all of this, i'm focusing on trust model stuff right now
<raggi>
i offered to reprioritize if he felt it was necessary, he said no
<raggi>
also worth noting that during the/a transition, old versions are still going to want that server
Boxcar21 has joined #rubygems
<lmarburger>
so then rg.org will write that index when the index changes (gem pushed, yanked, etc.)?
<raggi>
i doubt it'll be a single index file
<lmarburger>
well i'm speaking high level
<raggi>
unless an append only format is appropriate, which i suspect won't scale well on the client side
<lmarburger>
it's over my head
<raggi>
essentially, yes
<raggi>
gem -> server -> server pushes gem + some files to s3
<raggi>
where 'some files' contains various forms of index / metadata files
<lmarburger>
great
<raggi>
the way things used to work was that we'd just drop files in the gems dir, and then a cron would run `gem index`. problem is, that process wasn't differential, so indexing would take a long time. i believe this is happening on rg.org now too (the slow indexing), despite some optimization work. we basically need to move to a more scalable format, as the general design right now will have some kind of upper bound, short of something magical happening in ruby or c
<lmarburger>
yeah that makes sense
yerhot has joined #rubygems
hangingclowns has left #rubygems [#rubygems]
<raggi>
rg.org has some advantages over the old system, having a database handy and so on, but it's still a lot of small fragment restructuring for the larger index, which tbh, could be both split up and differential to save a bunch of cpu time each time
<lmarburger>
so in theory could bundler-api be updated to use the new index format instead of keeping its internal pg database in sync
<raggi>
lmarburger: yeah, that's almost certainly true too
<raggi>
lmarburger: the other motivation for a new index format is for mirrors
<lmarburger>
heh so bundler-api would be a mirror for old bundler clients
<lmarburger>
well that sounds like a more reasonable goal than the short-sighted plans i was considering
yerhot has quit [Remote host closed the connection]
yerhot has joined #rubygems
havenwood has joined #rubygems
<lmarburger>
raggi: so realistically what's the time investment needed? are we talking days, weeks, or months?
havenwood has quit [Read error: No route to host]
havenwood has joined #rubygems
<lmarburger>
i don't know the first thing about creating an index format
<raggi>
days
<raggi>
it just needs someone to take a look at it
<raggi>
i gave terrence a prototype of a solution to the bundler problem at rubyconf, that took <20 minutes to write
<raggi>
ideally though, we should use something that changes it a little more than that prototype, in light of recent events, etc
<lmarburger>
ok great. then there's really no use trying to re-architect the current app
<raggi>
i'm also somewhat wary that if we do something larger for the trust model (like TUF) that might affect indexing
<raggi>
lmarburger: i would make sure it works, certainly, and i have time allocated to this domain right now, so feel free to reach out, i'm here to help
<lmarburger>
i had planned to bug you about the syncing i was exploring for rg.org
<lmarburger>
but quite honestly i don't have time right now to pour more work into this app. it needs more than just little 5 minute fixes here and there.
<raggi>
how are you syncing right now / thinking of fixing the sync?
<raggi>
a basic implementation to prevent things from breaking shouldn't take too long
<raggi>
unless you have really small ram constraints, but specs.4.8 is ~50MB on a 64bit system
<lmarburger>
i was planning to sync with the rg.org database not the gem index.
<raggi>
you only need to know about unyanked gems though right?
<lmarburger>
right now it's dependent on webhooks which don't always arrive. i was hoping to add an endpoint to rg.org that would return changes.
<lmarburger>
then ping that endpoint every minute or so. no real need for webhooks.
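The rubygems.org half of that idea could be a small Rails-style action along these lines; the model, columns, and route are guesses, not the actual rg.org schema:

    # Return everything pushed or yanked since the given timestamp, so a
    # client polling every minute can reconcile its copy without webhooks.
    class ActivitiesController < ApplicationController
      def index
        since   = Time.iso8601(params.fetch(:since))
        changes = Version.where("updated_at > ?", since).map do |version|
          { name:   version.rubygem.name,          # assumed association
            number: version.number,
            event:  version.indexed? ? "push" : "yank" } # assumed column
        end
        render json: changes
      end
    end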
<raggi>
yeah, that's certainly an efficient approach
<raggi>
well
<raggi>
webhooks are a nice optimization around the problem, but they can't be relied on
<lmarburger>
right, the basic problem right now is that they're assumed to be reliable
<lmarburger>
i think indirect added the full-update rake task to run every hour. that takes like 10 minutes to complete.
<raggi>
o0
<lmarburger>
that's just a band-aid solution
<raggi>
oh right, it's all in a fat normalized pg schema
caleb_io has joined #rubygems
<raggi>
does bundler really consume all this data?
<lmarburger>
if someone had the sql knowhow to tune that query and/or the db schema, that would be helpful
sn0wb1rd has quit [Quit: sn0wb1rd]
<raggi>
well, to start with, i'd cut out all the stuff that's not needed
<raggi>
there's no need for the tables to be this wide
<raggi>
afaics
<raggi>
maybe i'm missing some context
sn0wb1rd has joined #rubygems
<lmarburger>
on this subject, do you know of a good resource for this kind of application-level sql tuning (as opposed to how to optimize the pg instance)
ezkl has quit [Quit: out!]
<raggi>
my approach to optimization always starts with cutting out stuff that's not required and/or simplifying things, after that, figuring out one or more of: better algorithms, better schema, increase concurrency, increase parallelism. better algorithms includes notions of where the bottlenecks are (bandwidth/cpu/disk, etc)
<raggi>
wrt books on the topic, i'm not sure i'm a good source for that, it's more that i've lived through many related scenarios
<raggi>
the pg docs are pretty good, but i'm not sure there are good rules of thumb for schema design, that's kinda like having a good rule of thumb for object design in OO
<raggi>
i mean, there are some principles that port between both, such as knowing when to normalize and when to denormalize, but, IME, that's rarely the first port of call
mccraig has quit [Read error: Operation timed out]
lteo has quit [Read error: Operation timed out]
sn0wb1rd has quit [Remote host closed the connection]
<raggi>
there's a lot of stuff you can do on the DDL level to optimize things, even before using multiple servers
joewilliams has quit [Read error: Operation timed out]
<raggi>
like, partitioning, index optimization, multi-level indexes, etc
bradland has quit [Read error: Connection reset by peer]
davidjrice has quit [Read error: Operation timed out]
<raggi>
and in PG you can do all kinds of other things, like build custom indexes / custom field types
bradland has joined #rubygems
davidjrice has joined #rubygems
lteo has joined #rubygems
<lmarburger>
yeah exactly. after asking about it, i googled around for "refactoring sql" because that's basically what this would be if it were OO
<raggi>
no idea what the query planner is going to do here, but there are a lot of simple indexes to choose from here, when you probably want a composite index for this dep query
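An illustration of the composite-index idea, written as an ActiveRecord-style migration; the table and column names are guesses at a dependency-lookup schema, not bundler-api's real one. The point is one index matching the columns the dependency query filters on, instead of several single-column indexes the planner has to combine:

    class AddDependencyLookupIndexes < ActiveRecord::Migration
      def up
        # column names below are hypothetical
        add_index :dependencies, [:rubygem_id, :scope],
                  name: "index_dependencies_on_rubygem_id_and_scope"
        add_index :versions, [:rubygem_id, :indexed, :number],
                  name: "index_versions_on_rubygem_id_indexed_and_number"
      end

      def down
        remove_index :dependencies, name: "index_dependencies_on_rubygem_id_and_scope"
        remove_index :versions,     name: "index_versions_on_rubygem_id_indexed_and_number"
      end
    end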
<raggi>
indirect: trivial one, but it seems to be forgotten recently
fromonesrc has quit [Quit: fromonesrc]
hahuang65 has joined #rubygems
cowboyd has quit [Remote host closed the connection]
sn0wb1rd has joined #rubygems
<lmarburger>
raggi: i'll research that. thanks for the info.
roolo has joined #rubygems
cowboyd has joined #rubygems
fromonesrc has joined #rubygems
fromonesrc has quit [Client Quit]
bfleischer has quit [Quit: bfleischer]
fromonesrc has joined #rubygems
HPL has quit [Read error: Connection reset by peer]
Elhu has joined #rubygems
HPL has joined #rubygems
<raggi>
lmarburger: so, i tinkered while in a meeting
<raggi>
lmarburger: i don't have all the table stats, as i only have the test table here, not the real data
<raggi>
lmarburger: but according to this explain, the sequence scan on versions is potentially very heavy, due to the indexed column and the fact that it can't restrict the output columns
<raggi>
lmarburger: so it's potentially lifting way more data than it needs to
<lmarburger>
raggi: hmmm.... i don't see _any_ seq scans being performed
<lmarburger>
unless i'm looking at the wrong stats table
<lmarburger>
SELECT sum(seq_scan) AS sequence_scans FROM pg_stat_user_tables;
<lmarburger>
raggi: i just pasted the output from production in case that affects the results
<lmarburger>
i guess that explains why the index scans are so much higher than the app throughput
<raggi>
yeah, your cost metrics are much different, as i suspected
<raggi>
genetic query planners are no good on test data :)
<lmarburger>
oh actually i ran that on the wrong mirror
<raggi>
haha
<raggi>
still, it's more real than mine
<lmarburger>
the costs are identical as far as i can tell but i updated it anyway
<raggi>
yeah, they're statistical, so at this point anything relatively up to date and with most of the data will be close
<lmarburger>
would an analyze help?
<raggi>
unlikely
<lmarburger>
heh ok
<raggi>
there are cases where it can, but there are also cases where it can break everything
<raggi>
well, where break everything == significant performance degradation
<raggi>
i'm not sure if those cases have changed much since pg 7
<lmarburger>
oh so are the seq scans you were seeing a byproduct of not having the full data? am i crazy or are there no seq scans in this query?
<raggi>
you are not crazy, there are no seq scans in your explain
<lmarburger>
ha ok. you mentioned them and i got worried that my stats weren't correct.
<raggi>
the postgres query planner uses a capped runtime genetic algorithm to plan queries
<raggi>
it basically puts stats and cost projections into a gene, and then runs an evaluator that sums up total cost projections, and selects the best of a number of generations
<raggi>
so if your stats are off, it'll form drastically different plans
<raggi>
one of the disadvantages of a dynamic planner like this
bfleischer has joined #rubygems
brax4444 has quit [Quit: brax4444]
baburdick has quit [Quit: Leaving.]
markstarkman has quit [Remote host closed the connection]
<lmarburger>
indirect's working on a release to silence the http error
<qrush>
do one of you mind responding?
<lmarburger>
i can do it
adam12 has joined #rubygems
<lmarburger>
actually indirect may be better because he can address the actual error. i don't know the cause of the "could not find..." errors
rmartin has quit [Remote host closed the connection]
baburdick has quit [Ping timeout: 252 seconds]
qmx|away is now known as qmx
baburdick has joined #rubygems
cowboyd has quit [Remote host closed the connection]
baburdick has quit [Ping timeout: 255 seconds]
cowboyd has joined #rubygems
mephux has quit [Excess Flood]
<raggi>
there seem to be some issues with the quick/Marshal.4.8/*.rz specs, either some mirrors are out of date or missing data, or all are missing some data
twoism has joined #rubygems
mephux has joined #rubygems
baburdick has joined #rubygems
cowboyd has quit [Remote host closed the connection]
stevenharman has joined #rubygems
baburdick has quit [Ping timeout: 245 seconds]
baburdick has joined #rubygems
stevenharman has quit [Ping timeout: 260 seconds]
stevenharman has joined #rubygems
cowboyd has joined #rubygems
bartj3 has quit []
havenwood has quit [Remote host closed the connection]
benchMark has quit []
tcopeland has quit [Ping timeout: 248 seconds]
teancom has quit [Remote host closed the connection]