rusk has joined #jruby
shellac has joined #jruby
gggggggggggggggg has joined #jruby
gggggggggggggggg has quit [Client Quit]
whitingjr has quit [Ping timeout: 245 seconds]
shellac has quit [Quit: Computer has gone to sleep.]
whitingjr has joined #jruby
<kares[m]> hey, I am considering (re-)introducing some ways to limit JIT
<kares[m]> currently, on a large Rails app, it never settles since there's pretty much always new code hitting the 50 call count even after 2 days
<kares[m]> meta-space seems to grow since for every method compiled there's like 3 'heavy' classes: the generated class (+ one-shot loader) and usually 2 indy LambdaForm handles
<kares[m]> first, I thought this must be a bug since 9.1.17 was much more stable on meta-space usage, but from the jit logs it's obvious that there's always new (method) candidates
<kares[m]> seems like checking the ir instruction count is still problematic, exclude is still a bit cumbersome and in some cases (such as AR generated attribute methods) not usable
<kares[m]> was wondering about an upper (lru style) cap on the total count kept around -> that would ideally need JRuby to track when a compiled method was last used
<kares[m]> (was also thinking soft-refs but that might toss away too much generated code too early)
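A minimal sketch of the LRU-style cap described above, assuming a hypothetical `JittedMethodCache` wrapper (not JRuby's actual internals): an access-ordered `LinkedHashMap` evicts the least recently called compiled body once a fixed cap is exceeded, so evicted methods would fall back to the interpreter until they warm up again.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch only: a capped, access-ordered map of compiled bodies.
// Every lookup refreshes the entry's position, so the eldest entry is the
// least recently called method and gets dropped first.
class JittedMethodCache<K> {
    private final Map<K, Object> cache;

    JittedMethodCache(final int maxEntries) {
        this.cache = new LinkedHashMap<K, Object>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Object> eldest) {
                return size() > maxEntries; // evict the LRU body beyond the cap
            }
        };
    }

    synchronized void put(K method, Object compiledBody) {
        cache.put(method, compiledBody);
    }

    synchronized Object get(K method) {
        return cache.get(method); // access refreshes the LRU position
    }
}
```

The catch, as the discussion below notes, is that the map has to be touched on every call to keep the recency information accurate, which is exactly the tracking cost being weighed against soft references.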
shellac has joined #jruby
whitingjr has quit [Ping timeout: 245 seconds]
whitingjr has joined #jruby
shellac has quit [Quit: ["Textual IRC Client: www.textualapp.com"]]
NightMonkey has quit [Ping timeout: 245 seconds]
shellac has joined #jruby
NightMonkey has joined #jruby
lucasb has joined #jruby
travis-ci has joined #jruby
travis-ci has left #jruby [#jruby]
<travis-ci> jruby/jruby (joda-2.10.3:96cd635 by kares): The build passed. https://travis-ci.org/jruby/jruby/builds/581645898 [222 min 37 sec]
whitingjr has quit [Quit: Leaving.]
whitingjr has joined #jruby
travis-ci has joined #jruby
travis-ci has left #jruby [#jruby]
<travis-ci> jruby/jruby (joda-2.10.3:96cd635 by kares): The build passed. https://travis-ci.org/jruby/jruby/builds/581645898 [238 min 59 sec]
shellac has quit [Ping timeout: 245 seconds]
travis-ci has joined #jruby
<travis-ci> jruby/jruby (master:dc6ccec by Karol Bucek): The build was fixed. https://travis-ci.org/jruby/jruby/builds/581659452 [231 min 58 sec]
travis-ci has left #jruby [#jruby]
<headius[m]> Tons of interest at RubyConf Thailand
<headius[m]> Half dozen conversations with folks who want to give it a try
<headius[m]> kares what about a gradually increasing threshold?
<kares[m]> nice!
<headius[m]> We definitely need to come up with better metrics for this and enebo and I have been pondering the metaspace problem
<headius[m]> Our sole metric has been 50 calls for what, ten years?
<headius[m]> More
<kares[m]> gradually, over time ... interesting but that still means no upper bound and meta-space size increasing over time
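For illustration, a tiny sketch of the "gradually increasing threshold" idea (hypothetical names, not a JRuby option): the call-count bar rises as more methods have been compiled, so late stragglers must prove themselves harder, though, as noted above, this alone still puts no hard bound on meta-space.

```java
// Hypothetical sketch: the JIT threshold grows with the number of methods
// already compiled, e.g. +50 calls for every 1000 jitted methods.
class GrowingJitThreshold {
    static final int BASE_THRESHOLD = 50;

    static int currentThreshold(int methodsJittedSoFar) {
        return BASE_THRESHOLD + (methodsJittedSoFar / 1000) * 50;
    }

    static boolean shouldJit(int callCount, int methodsJittedSoFar) {
        return callCount >= currentThreshold(methodsJittedSoFar);
    }
}
```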
<kares[m]> yeah we need more here
<headius[m]> Well I am curious about these late methods
<kares[m]> something smart and without cost ... as always :)
<headius[m]> What's creating them?
<kares[m]> they seem to be normal methods
<headius[m]> Subbu suggested using a complexity metric and we have talked about a time-based metric
<kares[m]> I am looking into some stats now - generated ones are from AR
<headius[m]> Hmm
<headius[m]> Also maybe we age out jitted methods not called for a while?
<headius[m]> We have a lot of directions but need to know the "why"
<kares[m]> do not think we age much - that's what I was interested in doing with an LRU hierarchy
<headius[m]> Oh also generating all jitted methods into same classloader for first N minutes to reduce metaspace fragmentation?
<headius[m]> Yeah
<headius[m]> Oh yeah you said lru
<headius[m]> It's all doable
<kares[m]> that might help as the allocator does not like one-cl + one class
<kares[m]> it declares OoME way too early with how it allocates chunks
<headius[m]> Yeah could try anonymous classloader in unsafe but it may have the same memory effect
<headius[m]> Metaspace beats permgen but it's not great
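A sketch of the "anonymous classloader in unsafe" idea, assuming the JDK 8/11-era `sun.misc.Unsafe.defineAnonymousClass` API (removed in JDK 17; `MethodHandles.Lookup.defineHiddenClass` is the modern replacement). This is the mechanism LambdaForms use, so, per the caveat above, it may not actually reduce the metaspace footprint compared to one-shot classloaders.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// Hypothetical sketch: define jitted bytecode as an anonymous class attached
// to a host class, with no dedicated one-shot classloader. The class becomes
// unloadable once nothing references it.
class AnonClassDefiner {
    static Class<?> defineAnonymously(Class<?> host, byte[] bytecode) throws Exception {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);
        return unsafe.defineAnonymousClass(host, bytecode, null);
    }
}
```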
<kares[m]> for LRU we would need to track the last time a method was called, which I do not like
<headius[m]> Right
<headius[m]> And then what, soft ref? Have to call again to evacuate? Scrub thread? All icky
<kares[m]> yep I am also a bit lost on 'good enough' options here ...
<headius[m]> Do you have a real world case?
<kares[m]> well as I noted I am seeing this on a quite large (in production) Rails app
<kares[m]> it can pbly handle 5-7 days with 1G meta-space
<headius[m]> Ok
<headius[m]> Yeah that's not good
<headius[m]> Get a jit log
<kares[m]> yy - still not sure why 9.1.17 wasn't that bad
<kares[m]> I see some inlining added but that isn't used by default ...
<headius[m]> I thought we were past the days of continuous method generation
<headius[m]> Hmm
<headius[m]> No
<headius[m]> Indy?
<kares[m]> indy is off but yes it counts for the meta cost as every JITed method gets a handle or two
<kares[m]> (even with indy off)
<headius[m]> Yeah
<headius[m]> J9? 😃
<kares[m]> :)
<kares[m]> so easiest would be resurrecting and trying out instr count as a limit
<headius[m]> Someday
<kares[m]> what seems to be the blocker there to have it?
<kares[m]> inlining messes with counts but that should not matter
<kares[m]> there's some other kind of issue on why counts aren't accurate atm
<headius[m]> To have an IR based metric? Nothing really
<headius[m]> Profiler logic uses it but has never been live
<kares[m]> heh
<kares[m]> there's a FIXME
<kares[m]> but okay this is the resulting byte-code check
<kares[m]> maybe we could just use the ir instruction count check
<kares[m]> oh the check is actually in place - missed that one :)
<headius[m]> Yeah but it's arbitrary
<headius[m]> I guessed
<kares[m]> okay I will try dropping that in half from 2000 -> 1000
<headius[m]> This is all circa 2007 metrics
<kares[m]> and see what happens
<kares[m]> well it still might be good
<kares[m]> I mean the 2000 ir size default
<headius[m]> It has served us ok but these endless method things shouldn't endlessly jit
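The size check being discussed is conceptually just a gate on IR instruction count before a method qualifies for JIT; a hypothetical sketch using the 2000/1000 values from the conversation (these names are not JRuby's actual options):

```java
// Hypothetical sketch: regardless of call count, skip JIT for method bodies
// whose IR is larger than a configurable cap.
class JitSizeGate {
    static final int DEFAULT_MAX_IR_INSTRUCTIONS = 2000; // the default discussed above

    static boolean shouldJit(int callCount, int callThreshold,
                             int irInstructionCount, int maxIrInstructions) {
        return callCount >= callThreshold && irInstructionCount <= maxIrInstructions;
    }
}

// e.g. shouldJit(calls, 50, irSize, 1000) to try the halved limit mentioned above
```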
<headius[m]> Really new methods? Maybe we could detect methods defined in an eval and change metric
<headius[m]> Those are the ones to worry about
<headius[m]> Anything literally in the code will jit once unless dup'ed, maybe
<kares[m]> yeah, as noted I am seeing some evals from AR
<kares[m]> but there's still others (stable methods) coming in as well
<kares[m]> might post some stats when I'm done with them
<headius[m]> Stable methods seems like a bug
<headius[m]> I mean yeah after five days maybe something hits 50 but there should be a tiny number of those
<kares[m]> I have checked a few times - as I assumed a bug - but nothing seems to be compiled twice
<kares[m]> still have no good theory why JRuby 9.2 JITs more than 9.1
<headius[m]> Hell, we could just stop jitting altogether after some time in an extreme case
<headius[m]> It's better 😃
<headius[m]> "better"
<kares[m]> I like that
<kares[m]> unlimited by default but limitable ...
<headius[m]> You just need to bump metaspace up to 2G and it will be fine
<enebo[m]> we need to log what is being compiled
<enebo[m]> Scroll back comments now ...
<headius[m]> Yeah this is the same discussion we have been having enebo
<enebo[m]> I did play with keeping track of the rate of change of method definitions for several days, and execution/parsing/building clouds the rate of definitions
<headius[m]> Better metrics, jit more conservatively, evacuate unused... something
<enebo[m]> I subtracted those out and there is still no good trend to be made from the rate of change
<enebo[m]> well I somewhat subtracted parsing + building but execution is much more difficult
<enebo[m]> I thought about this more from a "when to start JITting" angle rather than when to stop JITting, as in not JITting until a particular point
<headius[m]> Oh yeah rate of change metric too
<headius[m]> Funny thing is anything beyond call count and we are in research territory
<enebo[m]> rate of change was the subbu idea
<headius[m]> Even Graal uses call counts
<enebo[m]> I pretty much killed that idea in my mind
<headius[m]> But Ruby is super weird
<enebo[m]> It is still an appealing idea but without total knowledge of execution time I feel it is not possible
<enebo[m]> I thought about require/load as a measurement when it slows down
<enebo[m]> but it is prey to similar issues just on a larger scale
<enebo[m]> anyways kares for stopping JITTing I think we really need to get logging in to figure out what the case is
<kares[m]> what do you mean by logging?
<kares[m]> I do have the JIT log ...
<enebo[m]> In a non-Rails server which is serving ruby scripts or the like then stopping JITting at some point would not be desirable
<enebo[m]> kares: I just mean we should see why it is not stopping
<enebo[m]> kares: for rg.org profile bench I did end up topping out on memory eventually but it was a lot of calls
<kares[m]> there's simply too much code - and still new code hitting the 50 threshold
<enebo[m]> kares: so lots of wide code which is not necessarily hit early will just keep hitting 50 calls and keep consuming more
<enebo[m]> kares: The really unfortunate part of this is that metaspace with oneshotclassloaders uses way more actual memory than is needed
<kares[m]> yy
<enebo[m]> I think it reserves like 10 "things" per class loader in metaspace but we only use 1
<kares[m]> exactly heap cost is low for having these around but meta is way high
<headius[m]> We should try anon classloading at least
<enebo[m]> yeah for rg.org I think I was like 300M heap but process size maxes at 1.5G
<headius[m]> Method handles use it so maybe it isn't as bad as totally new CL
<headius[m]> This should be a priority...that's so gross
<enebo[m]> yeah I believe with 9.2.8 changes which was quite a bit of work we will be like 1.4G now but fixing the CL/metaspace problem is the skeleton in the closet
<enebo[m]> I have to almost wonder if permgen would be a better place to be now
<enebo[m]> headius: from what I remember you thought that metaspace was smaller with compile.invokedynamic enabled
<headius[m]> It seemed to be for me
<headius[m]> I don't know why
<enebo[m]> kares: here is an experiment we have not had time for but you could likely do if you are benching
<enebo[m]> kares: Change all JITting to use a single CL
<enebo[m]> kares: It is not a solution but I am wondering how much smaller the process gets
<headius[m]> Yeah try it
<headius[m]> The method leak may be less than the metaspace leak
<headius[m]> Easy flag to add too
<enebo[m]> yeah that's true it may actually be a solution when you think about it in that way since the holes of unused space never go away either
<enebo[m]> kares: headius and I talked about this a few days ago and another idea if this works is to add a flag so you can specify which types need to be in their own CLs or something like that, so you potentially leak by default but you can tune those out
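A sketch of the single-classloader experiment (hypothetical names; the real change would live in JRuby's JITCompiler): all jitted classes get defined into one shared loader instead of a fresh one-shot loader per method, trading per-method unloadability for far less per-classloader metaspace overhead.

```java
// Hypothetical sketch: one shared loader exposing defineClass, so every
// jitted method's bytecode lands in the same classloader. Classes defined
// here can only be unloaded together with the whole loader, which is the
// "method leak" trade-off mentioned above.
class SharedJitClassLoader extends ClassLoader {
    SharedJitClassLoader(ClassLoader parent) {
        super(parent);
    }

    Class<?> defineJittedClass(String name, byte[] bytecode) {
        return defineClass(name, bytecode, 0, bytecode.length);
    }
}

// usage (illustrative): one loader for the whole runtime, or one per
// "first N minutes" window as suggested earlier in the log
// SharedJitClassLoader loader = new SharedJitClassLoader(runtimeClassLoader);
// Class<?> compiled = loader.defineJittedClass("rubyjit.SomeMethod", bytes);
```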
<kares[m]> unfortunately, trying a snapshot build might be problematic but I might try pushing that
<headius[m]> 9.2.9
<enebo[m]> kares: yeah I am pretty curious but I destroyed my rg.org env
<headius[m]> I know I know, need to finish the load crap
<enebo[m]> 9.2.9 I think primary feature is load/require/zeitwerk
<kares[m]> interesting, maybe having a way to exclude anonymous class JITs might help this case
<kares[m]> of the 9200 (logged) jit-ed methods almost 2000 are from 1-2 AR related (eval) places
* subbu pops in to read the log for a bit and pops out since he has nothing useful to offer for now .. :)
<kares[m]> still, this is mostly a (handy-to-have) work-around and not a good long-term solution ...
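A sketch of the "exclude anonymous class JITs" work-around (purely hypothetical heuristics, not JRuby options): filter out JIT candidates whose owner is an anonymous generated module, the way ActiveRecord's GeneratedAttributeMethods shows up in the log line below, or whose source looks eval-defined.

```java
// Hypothetical sketch: cheap string heuristics to keep eval-generated and
// anonymous-owner methods out of the JIT queue entirely.
class JitExclusions {
    static boolean anonymousOwner(String ownerName) {
        // anonymous classes/modules stringify without a constant name, e.g.
        // "#<ActiveRecord::AttributeMethods::GeneratedAttributeMethods:0x36756d4>"
        return ownerName == null || ownerName.startsWith("#<");
    }

    static boolean looksEvalDefined(String definingFile) {
        return definingFile != null && definingFile.startsWith("(eval");
    }

    static boolean excludeFromJit(String ownerName, String definingFile) {
        return anonymousOwner(ownerName) || looksEvalDefined(definingFile);
    }
}
```

As kares says, this would only be a work-around: the generated attribute methods would stop consuming metaspace but would also never get compiled.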
<subbu> what are all the "[m]" suffixes on your nicks though?
<subbu> ah, matrix.
<kares[m]> subbu: that we're in matrix :)
<kares[m]> it has an irc bridge setup ...
<headius[m]> subbu you have lots to offer 😀
<subbu> lol
<subbu> i wish. :)
<kares[m]> will definitely re-invent -Xjit.max since that was around in 1.7 already
<headius[m]> Load rework is still hovering at 99% good
<headius[m]> It's green but the tagged stuff should get fixed
<headius[m]> Yeah jit.max can be redefined
<kares[m]> ship it if its zeitwerk-ready! :)
<headius[m]> 99% ready? 😀
<headius[m]> It was 100% but then I fixed a spec...back and forth
<headius[m]> Frustrating
<enebo[m]> kares: AR eval places? Like created find_by_xxx?
<kares[m]> nn, generated attribute methods
<kares[m]> 2019-09-04T09:15:58.758-04:00 [Ruby-0-JIT-262] INFO JITCompiler : done jitting: <anon class> #<ActiveRecord::AttributeMethods::GeneratedAttributeMethods:0x36756d4>.__temp__47970756 at /opt/releases/current/vendor/bundle/jruby/2.3.0/gems/activerecord-4.2.11.1/lib/active_record/attribute_methods.rb:47
<enebo[m]> kares: but this only happens once per attibute?
<kares[m]> yes, should be the case
<enebo[m]> ok so 2000 attributes :) I can see the big aspect of this app now
<enebo[m]> To me this seems less like a JIT problem and more like just purely hitting the metaspace issue
<enebo[m]> I am not saying turning off JITting or some other metrics would be better than what we have but conceivably an attribute not hit early when a big Rails app starts may actually be the appropriate thing to have JITted later
<headius[m]> Rails 4.2
<enebo[m]> I have been approaching JIT/indy from the other side, which is that most early JIT/indy usage conceivably is not useful
<headius[m]> They may have changed that
<enebo[m]> So I have been looking at how to prune back earlier JITting by concentrating on figuring out what is hot
<enebo[m]> hotness is a difficult topic as well :P
<enebo[m]> read_attribute(attr_name) { |n| missing_attribute(n, caller) }
<enebo[m]> whatever read_attribute does
<headius[m]> Block should jit once
<headius[m]> Anything literal should not jit again and again
<enebo[m]> I don't think in kares case in 4.2 it is JITting again and again
<headius[m]> "should"
<enebo[m]> I think he has 2000 attributes in his app
<headius[m]> And after 5 days it hits 50
<enebo[m]> they just do not get hit a ton but compile anyways after n accesses
<headius[m]> Yeah
<headius[m]> It's believable
<enebo[m]> So no leak just a lot of unneeded JIT. In that sense I get why he wants a lever but I think the real problem is just a count
<enebo[m]> but as you say and even as your tweet might be showing this is not really a simple problem
<headius[m]> Another app you might want it to still jit though 🤷‍♂ī¸
<enebo[m]> yeah exactly
<rdubya[m]> probably too much overhead, but would it be possible to use a call rate instead of a call count?
<headius[m]> Nobody has volunteered a simple answer
<rdubya[m]> maybe could reset the count every x minutes
<enebo[m]> and early vs middle/late running of an app has different hot code
<enebo[m]> rdubya: yeah that would get rid of straggler access
<enebo[m]> or some of it
<enebo[m]> rate of call or rate of change, but these have a relative aspect as well, like a raspi vs a faster machine, but rate of change is important in hotness
<enebo[m]> our other metric we have played with is thread_poll instrs which tend to be put within a call and on back edges of loops
<enebo[m]> rdubya: but yeah we could play with that by simply adding a timestamp field and flushing if we pass some threshold...but only once we hit the threshold count since we don't want to constantly call nanotime or a more expensive time call
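One possible reading of the rate idea above, sketched with hypothetical names: keep the plain counter on the fast path, take a timestamp only once per window (here at the halfway mark), and flush the count if the calls trickled in more slowly than the window allows, so stragglers never qualify.

```java
// Hypothetical sketch: the count must climb from threshold/2 to threshold
// within the window, otherwise it is flushed. Only a couple of clock reads
// per window, never one per call.
class WindowedCallCounter {
    private final int threshold;
    private final long windowNanos;

    private int count;
    private long halfwayStamp;

    WindowedCallCounter(int threshold, long windowNanos) {
        this.threshold = threshold;
        this.windowNanos = windowNanos;
    }

    /** Returns true when the method should be submitted to the JIT. */
    synchronized boolean recordCallAndCheck() {
        count++;
        if (count == threshold / 2) {
            halfwayStamp = System.nanoTime(); // one clock read per window
            return false;
        }
        if (count < threshold) {
            return false; // common path: a plain increment
        }
        if (System.nanoTime() - halfwayStamp > windowNanos) {
            count = 0; // straggler: calls arrived too slowly, start over
            return false;
        }
        return true;
    }
}
```

For example, `new WindowedCallCounter(50, java.util.concurrent.TimeUnit.MINUTES.toNanos(5))` would require the last 25 calls to arrive within 5 minutes before a method qualifies.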
<headius[m]> kares I know you said you may have trouble running with snapshots but we really need to suss this out experimentally
<headius[m]> We can guess and release, or add more flags and release, but we are shooting in the dark
_whitelogger has joined #jruby
<headius[m]> We could make all jitted bodies soft refs and only those cached in call sites would be hard refs
<headius[m]> We could do many things
<headius[m]> Need input
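A sketch of the soft-reference direction (hypothetical types, with `Runnable` standing in for a compiled body): the method only softly holds its jitted body, so the GC can drop bodies no call site still caches, while the call-site cache keeps hot code strongly reachable. The "icky" part noted earlier is that a dropped body means re-resolving and possibly re-jitting.

```java
import java.lang.ref.SoftReference;

// Hypothetical sketch of "jitted bodies as soft refs, call-site caches as
// hard refs". With per-method one-shot classloaders, collecting the body
// would also let the generated class (and its metaspace) unload.
class JittableMethod {
    private volatile SoftReference<Runnable> jittedBody;

    void installJittedBody(Runnable body) {
        jittedBody = new SoftReference<>(body);
    }

    Runnable jittedBodyOrNull() {
        SoftReference<Runnable> ref = jittedBody;
        return ref == null ? null : ref.get(); // null if never jitted or collected
    }

    void interpret() {
        // interpreter fallback elided
    }
}

// The call site's strong reference is what keeps actively used bodies alive
// despite the soft reference above.
class CachedCallSite {
    private Runnable cachedBody;

    void invoke(JittableMethod method) {
        Runnable body = cachedBody;
        if (body == null) {
            body = method.jittedBodyOrNull();
            cachedBody = body; // may remain null; retried on the next call
        }
        if (body != null) {
            body.run();         // fast path: compiled code
        } else {
            method.interpret(); // fall back, may re-JIT later
        }
    }
}
```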
<enebo[m]> yeah the metaspace data is really needed to know what the bounds are
<enebo[m]> I feel like the talk of method hotness could solve this problem somewhat regardless but since hotness has never been simple I think we still need to deal with metaspace differently than today
<enebo[m]> but if one classloader does not change metaspace much then we really need to solve things much differently than today as well
<headius[m]> One big question I have is whether this is a problem of jitting too much or jitting too little early on
<headius[m]> Maybe these are valid things to jit
<headius[m]> I mean right now it's working as designed
<enebo[m]> too little early on really does not jibe with me. Much of what is called when booting Rails is not used in controllers once it has started
<headius[m]> It doesn't sound like a bug...it's just the way it is designed leads to endless jitting
<enebo[m]> I cannot look at everything through a Rails lens but setup code and run code usually are life phases
<headius[m]> Because they actually do get called
<headius[m]> Yeah for sure
<headius[m]> So now those phases get spliced
<enebo[m]> yeah
<headius[m]> Higher threshold is the easy short term hack
<headius[m]> But other stuff won't jit that maybe should?
<enebo[m]> Even in the case of kares app not working as designed...count alone is prone to most of the system JITting. If the cost of JITting is small enough it is no big deal but the metaspace issue pushes this problem to the forefront a bit more
<enebo[m]> If his app is working as designed it will just slow down the growth of his app size: 5-7 days will just become 10-14 days if we double the count. I am not sure what effect it will have on warmup but I predict not much
<enebo[m]> I think I misremembered what the 5-7 days was for but roll with me on the point
<enebo[m]> resetting counts based on hotness does seem like a second line of attack though
<enebo[m]> which could be thread_polls or time or both?
<kares[m]> there's definitely some early JITs that are useless ... e.g. Bundler/Rubygems
<enebo[m]> kares: I thought about JRuby::JIT.enabled=false/true
<enebo[m]> kares: then put it in your rails initializer
<enebo[m]> It will not solve your attributes all JITTing issue but it does eliminate early code which is not run again at the possible cost of startup not getting important JITs
<headius[m]> RG should never jit unless we are presenting 'gem list' perf 😀
<rdubya[m]> just so everybody's on the same page, the app kares is working on is our app 🙂
<enebo[m]> aha!
<rdubya[m]> speaking of rubygems, we have actually been running into issues with that when doing bundle updates
<rdubya[m]> it runs out of memory and we have to give it a gig so it can finish....
<headius[m]> Woah
<headius[m]> Now that's a bug
<enebo[m]> rdubya: how many gems?
<enebo[m]> in finished .lock
<rdubya[m]> i think we're pushing 150 or so at this point, let me get a more accurate count
<enebo[m]> that is not even all that many...I wonder what the deal is there
<headius[m]> Real OOM or stack OOM?
<headius[m]> Older Bundler ate stack like mad
<headius[m]> I actually hacked it to use fibers to avoid blowing stack
<headius[m]> Super gross
<enebo[m]> hmm I guess discourse is about 115 or so
<enebo[m]> so maybe I should not say it is not big
<rdubya[m]> i'll try it quick, we don't have an issue on install, just if we are updating
<headius[m]> Under 500 shouldn't be resolver stack
<headius[m]> Well under 300
<enebo[m]> update needs to grab newer versions and I suspect this is using some gems with much older version numbers
<enebo[m]> This may just be a lot of data too
<rdubya[m]> yeah, unfortunately ☚ī¸
<rdubya[m]> if i ever get the mssql adapter finished that will hopefully change
<enebo[m]> grab newer version == instantiate objects
<enebo[m]> without giving your app away: if all your gemfiles are public and you don't mind, you could open an issue with just the Gemfile
<rdubya[m]> down to 1 test that I can't make any sense of lol
<enebo[m]> If we can see it fail on update
<enebo[m]> gems are public I meant
<rdubya[m]> our gems aren't all public unfortunately
<enebo[m]> ah ok although if you remove those it may still happen since I bet you guys don't have dozens of newer releases of those
<enebo[m]> those == your private gems
<rdubya[m]> ```
<rdubya[m]> Your JVM has run out of memory, and Bundler cannot continue. You can decrease the amount of memory Bundler needs by removing gems from your Gemfile, especially large gems. (Gems can be as large as hundreds of megabytes, and Bundler has to read those files!). Alternatively, you can increase the amount of memory the JVM is able to use by running Bundler with jruby -J-Xmx1024m -S bundle (JRuby defaults to 500MB).
<rdubya[m]> ```
<enebo[m]> I am only guessing that if install works, update is actually just a massive in-memory build of all newer versions of dependent gems, trying to figure out if a newer version will still work
<enebo[m]> If it is then that may just be the way it is and not really a bug but how expensive it is in bundler
<enebo[m]> but a) maybe we are being wasteful somehow b) maybe it will expose something we or bundler can fix
<kares[m]> update will depend a lot on amount of gems installed, won't it?
<kares[m]> somehow got the impression that's the case a while back ...
<headius[m]> 😲
<rdubya[m]> looks like it errs out while fetching the source index
<headius[m]> Is it loading them all into memory to resolve specs?
<enebo[m]> kares: sure at least I would think it would be all gems in Gemfile.lock and instances of objects for every future version of the current gem
<rdubya[m]> this is the last line I get before it bombs `Fetching source index from https://gems.ctdc1.fs4.us/` (our internal gem server)
<headius[m]> Are there some huge gems in this?
<kares[m]> rdubya: are you trying with a clean slate of only bundle install-ed gems?
<headius[m]> That message makes it sound like big gems are unpacked in memory
<kares[m]> guess I could try as well :)
<enebo[m]> kares: if it uses more from just stuff installed but not in .lock then I don't know
<enebo[m]> It will end up being our gzip impl forcing all in-memory deflating or something :P
shellac has joined #jruby
<headius[m]> Right that's the sort of thing I am thinking
<headius[m]> This is a different issue though
<rdubya[m]> yeah, sorry to hijack the conversation, didn't expect it to derail things
<headius[m]> No problem...another fun challenge 👍
<enebo[m]> rdubya: well if you guys can change JIT to single classloader it will be good payment
<headius[m]> Y'all will need to take lead though because enebo and I are mostly spitballing here
<enebo[m]> I can easily see bundler using a lot of memory for something like this but we should figure it out
dopplergange has quit [Quit: ZNC 1.7.3 - https://znc.in]
dopplergange has joined #jruby
rusk has quit [Remote host closed the connection]
shellac has quit [Ping timeout: 268 seconds]
bga57 has quit [Ping timeout: 264 seconds]
bga57 has joined #jruby
shellac has joined #jruby
xardion has quit [Remote host closed the connection]
xardion has joined #jruby
liamwhiteGitter[ has quit [Remote host closed the connection]
kares[m] has quit [Remote host closed the connection]
CharlesOliverNut has quit [Remote host closed the connection]
BlaneDabneyGitte has quit [Remote host closed the connection]
JesseChavezGitte has quit [Remote host closed the connection]
OlleJonssonGitte has quit [Remote host closed the connection]
rdubya[m] has quit [Remote host closed the connection]
KarolBucekGitter has quit [Remote host closed the connection]
lopex[m] has quit [Read error: Connection reset by peer]
brometeo[m] has quit [Read error: Connection reset by peer]
sillymob[m] has quit [Remote host closed the connection]
enebo[m] has quit [Remote host closed the connection]
FlorianDoubletGi has quit [Remote host closed the connection]
UweKuboschGitter has quit [Remote host closed the connection]
headius[m] has quit [Remote host closed the connection]
MarcinMielyskiGi has quit [Remote host closed the connection]
MattPattersonGit has quit [Remote host closed the connection]
XavierNoriaGitte has quit [Remote host closed the connection]
ChrisSeatonGitte has quit [Remote host closed the connection]
JulesIvanicGitte has quit [Remote host closed the connection]
ThomasEEneboGitt has quit [Remote host closed the connection]
TimGitter[m] has quit [Remote host closed the connection]
TimGitter[m]1 has quit [Remote host closed the connection]
RomainManni-Buca has quit [Remote host closed the connection]
whitingjr has quit [Ping timeout: 245 seconds]
KarolBucekGitter has joined #jruby
shellac has quit [Ping timeout: 245 seconds]
lopex[m] has joined #jruby
ThomasEEneboGitt has joined #jruby
enebo[m] has joined #jruby
CharlesOliverNut has joined #jruby
MarcinMielyskiGi has joined #jruby
JesseChavezGitte has joined #jruby
rdubya[m] has joined #jruby
JulesIvanicGitte has joined #jruby
OlleJonssonGitte has joined #jruby
headius[m] has joined #jruby
ChrisSeatonGitte has joined #jruby
FlorianDoubletGi has joined #jruby
sillymob[m] has joined #jruby
TimGitter[m] has joined #jruby
MattPattersonGit has joined #jruby
UweKuboschGitter has joined #jruby
BlaneDabneyGitte has joined #jruby
XavierNoriaGitte has joined #jruby
kares[m] has joined #jruby
liamwhiteGitter[ has joined #jruby
RomainManni-Buca has joined #jruby
brometeo[m] has joined #jruby
TimGitter[m]1 has joined #jruby
<kares[m]> yeah, might try a thing or two next week, as time allows. will also re-add the jit.max support ... single CL sounds like more work - not a resolution here but yeah less fragmentation in meta
<headius[m]> It's not a ton of work...look at JITCompiler code and how it boots jitted bytecode. Couple lines to change.
<kares[m]> rdubya: fyi: tried a `bundle update` (after a `bundle install` on a clean gemset)
<kares[m]> no issues - all went fine ... let me know if you tried a bundle update with an arg ...
<rdubya[m]> i tried `bundle update rails` but I haven't tried it on a completely clean install
<kares[m]> it fetched and updated around 40 gems
<rdubya[m]> i can't seem to get it to even load the index
<kares[m]> rdubya: and what was your jruby -v ... 9.2.8?
<rdubya[m]> i was in 9.1.17, let me try it on 9.2
<kares[m]> ooh ... okay - think this might be fixed since, but let's hear it 😉
<rdubya[m]> yeah, same command seems to work on 9.2 so 🤞 since we're upgrading 🙂
<kares[m]> cool
<kares[m]> `bundle update rails` took long but also smooth here
<kares[m]> as I said I have seen it fail with lots of gems installed but work with a clean slate
<kares[m]> its been a while - might have been 9.1.X