<travis-ci> jruby/jruby (jruby-9.2:9f0725d by Karol Bucek): The build is still failing. https://travis-ci.com/jruby/jruby/builds/216745774 [171 min 59 sec]
<travis-ci> jruby/jruby (kares-patch-joda+asm:26f9220 by Karol Bucek): The build has errored. https://travis-ci.com/jruby/jruby/builds/216746484 [142 min 1 sec]
<travis-ci> jruby/jruby (kares-patch-joda+asm:42d7029 by Karol Bucek): The build failed. https://travis-ci.com/jruby/jruby/builds/216746689 [204 min 11 sec]
<travis-ci> jruby/jruby (master:83405f3 by Charles Oliver Nutter): The build is still failing. https://travis-ci.com/jruby/jruby/builds/216833971 [204 min 2 sec]
jswenson[m] has joined #jruby
<headius[m]> ok should have master back to green with latest commit
<travis-ci> jruby/jruby (master:1715b1f by Charles Oliver Nutter): The build is still failing. https://travis-ci.com/jruby/jruby/builds/216844213 [201 min 54 sec]
<jswenson[m]> headius: Let me know if there is a better place to start this conversation. I'm following up on JRuby #6020 https://github.com/jruby/jruby/issues/6020#issuecomment-777683937 regarding the LambdaForm class generation and was hoping to learn a little more about when and how these would be generated. I see them all over the place in our stack traces. I have made a few attempts to reproduce a "simple" case of some of the
<jswenson[m]> things we're doing, and can see LambdaForms generated in the stack traces, but thus far have been unable to see an increase in loaded classes from any of these experiments.
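One way to watch for that kind of class growth while experimenting is the stock JDK tooling; a minimal sketch, assuming a local JVM whose pid comes from jps:

    # print loaded/unloaded class counts for the target JVM once per second
    jstat -class <pid> 1000

Starting the process with -verbose:class additionally logs every class as it is loaded, which makes bursts of LambdaForm generation easier to spot.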
<headius[m]> jswenson: hey there
<headius[m]> jswenson: well there are two possibilities here
<headius[m]> the first is that we are somehow leaking, or causing to be leaked, lots of extra LF classes
<headius[m]> the second is that those LF classes are normal and expected and there's just a lot of them
<headius[m]> somewhere in the middle would be the possibility that we are not leaking them but also not reusing or caching them appropriately so they keep getting constructed again
<headius[m]> so in that case they would be expected, but we are putting unnecessary strain on classloading and metaspace GC
<headius[m]> if there is a real leak, or if we are creating more of these than necessary, we can fix that
<headius[m]> it seems from your most recent comments that turning up the metaspace size has fixed the OOM for you, but it uses a lot of (metaspace) memory?
<headius[m]> I want to narrow down which scenario we are looking at first
<jswenson[m]> It's also possible that some other dynamic JVM usage in our application is causing these as well; I found a few occurrences in stack traces involving nashorn
<headius[m]> oh no
<headius[m]> nashorn uses invokedynamic extensively, considerably more aggressively than JRuby
<headius[m]> this is not a knock against nashorn but without newer improvements to reuse and reduce LF creation nashorn is likely the primary source of these
<jswenson[m]> We were able to get the instance stable by simply increasing the metaspace to 2GB. But now the metaspace size is fluctuating between 300 and 900MB. This isn't particularly a problem for us right now, and we've only noticed it this one time, but I am interested in learning more to understand if this will become a problem for deployments.
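For reference, metaspace ceilings like the 2GB mentioned here are set with the standard HotSpot flags; the values below are illustrative, not necessarily what this deployment uses:

    java -XX:MaxMetaspaceSize=2g -XX:MetaspaceSize=512m ...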
<headius[m]> I would expect, anyway... they use indy all the time and are very aggressive about how they optimize
<jswenson[m]> That's good to know.
<headius[m]> jswenson: I wonder if you could see how much of these LF are from nashorn vs jruby
<headius[m]> for JRuby you can knock our LF load down by disabling invokedynamic usage, but are you even running JRuby with indy?
<jswenson[m]> I'd love to be able to do that, I'm not sure exactly how to track this down, especially because it looks like most of these are garbage rather than live used objects.
<headius[m]> enebo: exercise for the reader: conservative mode for nashorn
<headius[m]> they really messed up abandoning that project
<jswenson[m]> I'll verify, but I believe we're currently running with invokedynamic ON for everything except yield.
<enebo[m]> eh
<jswenson[m]> Ok it does look like the only reference we have to invokedynamic is disabling invokedynamic.yield, so assuming that it is ON by default then we should be using it for the other options.
<jswenson[m]> for reference we're still running 9.2.13.0
<headius[m]> jswenson: if that is the only option you have then it is off
<headius[m]> that posted weird
<headius[m]> ugh brb client is weird
<headius[m]> indy is off unless property jruby.compile.invokedynamic=true
<headius[m]> because of these memory and warmup effects largely
<headius[m]> if you have it off I would expect LFs to be almost entirely from Nashorn
<headius[m]> or at least mostly
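The switch headius is referring to can be set either as a JRuby option or as the underlying system property; a sketch (script name is illustrative):

    jruby -Xcompile.invokedynamic=true script.rb
    # or via the system property, e.g. in JAVA_OPTS for a warbler-deployed app
    jruby -J-Djruby.compile.invokedynamic=true script.rb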
<headius[m]> kalenp: I don't suppose your copy_stream issue might be related to the json issue
<jswenson[m]> Hm. Interesting. I'm trying to get up to speed on this from our side. I only see the single invokedynamic reference (setting a system property to disable the yield attribute)
<jswenson[m]> I seem to recall a previous discussion around it perhaps coming from our use of warbler? Or perhaps that is some other compilation step
<headius[m]> jswenson: it could be enabled for a warbler precompile but would still need to be enabled for the runtime JRuby too
<jswenson[m]> I do see LambdaForm all over the place in stack traces, but could be from something else.
<headius[m]> if you don't see it being enabled with either the JVM property or the JRuby option -Xcompile.invokedynamic it is not being used in JRuby for Ruby code (except a couple things we always use it for, like caching constants and literal values)
<headius[m]> ok so we still use method handles as the function pointer to Ruby methods
<headius[m]> so any Ruby method that has been compiled to bytecode will have a couple layers of LambdaForm around it, which should reduce to a single class per method
<jswenson[m]> an aside. if invokedynamic is disabled disabling invokedynamic.yield will have no effect, correct?
<headius[m]> the DMH there is the direct method handle pointing at the compiled Ruby code, and the MH there is a single layer wrapping it, probably to adjust varargs etc
<headius[m]> correct
<headius[m]> if I had your heap in hand I would try to put together a histogram of these LF and figure out whether they are from JRuby or Nashorn
<jswenson[m]> unfortunately I don't think I can send it to you
<headius[m]> the actual LF classes... the LF you see in stack traces will not map 1:1 to the generated classes necessarily because many levels will reduce to a single LF class
<jswenson[m]> Not sure exactly how I would do that, but I'm happy to do it with guidance
<headius[m]> yeah I figured but that is the next piece of info you want
<jswenson[m]> 😊
<headius[m]> let me look at a heap dump here and see if it is possible to get more info about the MH/LF and where it comes from
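A class histogram is one way to see how many LambdaForm classes are resident and whether their referenced types look like JRuby or Nashorn; a sketch against a live process, assuming the JDK tools are on the path:

    # count LambdaForm-related classes currently loaded in a running JVM
    jcmd <pid> GC.class_histogram | grep -ci lambdaform
    # or, restricted to strongly reachable instances
    jmap -histo:live <pid> | grep LambdaForm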
<headius[m]> jswenson: sorry sidetracked on another issue for a bit
<headius[m]> ok so not seeing much in the LFs that would identify where they came from, but still digging
<headius[m]> at the very least we should be able to get a reference to a method signature that would show nashorn classes instead of JRuby ones
<headius[m]> jswenson: there are some SO posts and articles about nashorn and metaspace btw
<headius[m]> they might have some tips for tuning for nashorn, possibly ways to reduce how much LF overhead it has
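Two knobs that come up in those posts are Nashorn's optimistic typing and its persistent code cache, passed through the nashorn.args system property when Nashorn is embedded; whether they help here is only an assumption to test:

    # illustrative: trade some peak JS speed for fewer generated classes
    java -Dnashorn.args="--optimistic-types=false --persistent-code-cache" ...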
<jswenson[m]> Yeah I've been digging a little into that
<headius[m]> jswenson: are you using nashorn pretty heavily in that app?
<jswenson[m]> We definitely CAN use it heavily, I don't know if the usage pattern on this particular instance is heavy though
<headius[m]> the problems with nashorn and indy are a big reason why JRuby lazily optimizes... so only the methods that get hit heavily ever turn into bytecode, and only the methods invoked from bytecode will use indy
<headius[m]> every source file is compiled to bytecode, and every method and every call in nashorn will use indy heavily even if only called once
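That laziness is tunable on the JRuby side; the JIT threshold controls how many calls a Ruby method must receive before it is compiled to bytecode at all. A sketch with an arbitrary value:

    # only compile a Ruby method to JVM bytecode after 200 calls
    jruby -Xjit.threshold=200 script.rb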
<jswenson[m]> Ok that is definitely good to know. I'm going to do some analysis to see if there is a lot of that going on for this instance
<headius[m]> I will update issue with what we have discussed and some additional info
<jswenson[m]> 👍️
<kalenp[m]> jswenson: I'm pretty sure we don't turn on the additional indy calls (-Xcompile.invokedynamic). we've discussed it previously but I don't think we ever tested it enough to turn it on
<kalenp[m]> headius: I don't believe the copy_stream and json issues are related, just happens that I hit the copy_stream bug around the same time I got tagged to help on the json bug
<jswenson[m]> Also looks like invokedynamic.yield is on by default while the others are not. https://github.com/jruby/jruby/blob/9.2.13.0/core/src/main/java/org/jruby/util/cli/Options.java#L105
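For reference, that option maps to a system property in the usual jruby.* form, so the disable jswenson describes presumably looks like one of these (an assumption based on how JRuby options are named; script name illustrative):

    jruby -Xinvokedynamic.yield=false script.rb
    java -Djruby.invokedynamic.yield=false ...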
<headius[m]> yeah but it should not be used if indy support in general is off
<headius[m]> we could probably make these more consistently off when indy support is off, but it might make it look like they could be turned on independently of compile.invokedynamic
<headius[m]> kalenp: well thanks for the copy_stream issue... clearly the available specs have gaps
<jswenson[m]> We're surprised, because the stack overflow issue that we were seeing seemed to go away when we disabled invokedynamic.yield.
<kalenp[m]> fwiw, we don't actually use the bytes_written, that was just a convenient way of demonstrating the discrepancy in the bug. obv, the actual output to the socket, on the other hand... that we want to be correct 😃
<kalenp[m]> your note about the trailing CRLF is also interesting. I had noticed that the script behaved differently if I included that, but wasn't sure of the significance of it
<headius[m]> yeah the chunking logic in protocol.rb tries to send on crlf boundaries and then there is a final write for whatever is left I think
<headius[m]> but this behavioral difference between real IO and fake IO on the output side of copy_stream might warrant some clarification from CRuby
<headius[m]> I can mimic it but it feels weird to have it report that it wrote N when it might have written something other than N
<headius[m]> on the other hand it is all that it can report
<kalenp[m]> well, we can use our workaround (manually reading/writing blocks) for the time being. it's actually the existing version of the code, I just stumbled on this trying to do a cleanup. "Oh, IO.copy_stream should do this for us. Whoops!"
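For completeness, the "manually reading/writing blocks" workaround kalenp mentions looks roughly like this in Ruby; buffer size and variable names are illustrative, not the actual application code:

    # copy src to dst in fixed-size chunks instead of IO.copy_stream,
    # so the byte count reflects exactly what was handed to the destination
    bytes_written = 0
    buf = String.new
    while src.read(16 * 1024, buf)
      bytes_written += dst.write(buf)
    end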