<olleolleolle[m]> @headius:matrix.org: Here's a Puma PR which does away with the extension which you helpfully patched. Just Ruby - perhaps it can become faster, even.
<olleolleolle[m]> Not yet tried on JRuby, but MRI looks good.
_whitelogger has joined #jruby
sagax has quit [Quit: Konversation terminated!]
sagax has joined #jruby
rusk has joined #jruby
shellac has joined #jruby
drbobbeaty has quit [Ping timeout: 245 seconds]
_whitelogger has joined #jruby
drbobbeaty has joined #jruby
drbobbeaty has quit [Quit: Textual IRC Client: www.textualapp.com]
drbobbeaty has joined #jruby
<rdubya[m]> Just released activerecord-jdbcsqlserver-adapter 51.0.0 with support for ActiveRecord 5.1 :) https://rubygems.org/gems/activerecord-jdbcsqlserver-adapter
<rdubya[m]> it didn't even require any changes to the arjdbc gem 🙂
shellac has quit [Quit: Computer has gone to sleep.]
<headius[m]> Very nice! I will tweet something out today
<headius[m]> 5.2 maybe more of the same?
lucasb has joined #jruby
shellac has joined #jruby
<rdubya[m]> I hope so but haven't started the process yet
<rdubya[m]> hoping I'll have some free time this week/weekend to get it out for rubyconf
<headius[m]> Well this is great progress already...I'm glad it was trivial
<rdubya[m]> lopex:
<rdubya[m]> * lopex: any word on the alpine images?
<rdubya[m]> the riot interface could use some work on the mentions...
<headius[m]> olleolleolle: hey if the performance is no problem I'm good with it
<headius[m]> I'm running some updated webapp numbers today so I'll give it a try
<headius[m]> olleolleolle: oh I see...it's just the IOBuffer...yeah I wouldn't expect this to affect perf too much
<enebo[m]> rdubya: I wonder if you should align next releases to match arjdbc point release number
<enebo[m]> I guess the issue then would be what if you rev sqlserver more than base gem
cbruckmayer[m] has joined #jruby
<cbruckmayer[m]> 👋
<enebo[m]> Actually, without getting into a discussion of using a third value, you should just do what you did
<enebo[m]> latest of each for the major is totally legit
<rdubya[m]> cool
<headius[m]> cbruckmayer: 👍
<cbruckmayer[m]> I'm flying in to Nashville on Sat evening btw. And I should have some space in my luggage if there are any beer requests from the south west of England 🍻
<headius[m]> Nice! If there's a special or favorite local you could bring some for the little beer share we do
<enebo[m]> cbruckmayer:or anything we are unlikely to have heard of :)
<headius[m]> olleolleolle: doesn't look like this has been tested on JRuby at all...the HTTP extension still references the Java IOBuffer
<headius[m]> easy fix, just the import
xardion has quit [Remote host closed the connection]
xardion has joined #jruby
<headius[m]> app store updating xcode just locked up my whole system 🙄
<enebo[m]> "class IOBuffer < String"
<headius[m]> yeah sort of, not really eliminating native
<headius[m]> olleolleolle: doesn't seem to have any negative perf impact on JRuby so far
<headius[m]> within a range I'd consider noise...like 1%
<enebo[m]> and eregon mentions that they could version some of this to do things like String.concat *args
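For reference, `String#concat` has accepted multiple arguments since Ruby 2.4, which is the kind of versioned call eregon is alluding to; a minimal illustration:

```ruby
# Since Ruby 2.4, String#concat takes multiple arguments, so several
# separate appends can collapse into a single call:
buf = +"HTTP/1.1 200 OK\r\n"  # unary + ensures an unfrozen string
buf.concat("Content-Length: ", "5", "\r\n\r\n", "hello")
buf  # "HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
```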
<enebo[m]> Probably the big perf gap of puma now is that it still uses rack
<headius[m]> could be
enebo has joined #jruby
<headius[m]> huh, some interesting sampled profile info on puma + roda
<headius[m]> 2.0% 0 + 11 java.lang.Throwable.fillInStackTrace
<headius[m]> oddly a lot of thread isAlive and join
<headius[m]> java.lang.Thread.join is the #1 sampled hit
<headius[m]> I guess that's main so must just be request routing
<headius[m]> enebo: interestingly I see no rack
<headius[m]> most of the samples are in socket select, really...so aren't they keeping busy?
elia_[m] has joined #jruby
<headius[m]> I wouldn't expect the select loop to dominate if I'm hitting it hard
<enebo[m]> yeah I would guess that means lots of waiting
<enebo[m]> you sort of want to eliminate the waitish crud in profiling unless we are abnormally waiting on stuff
<headius[m]> right
<headius[m]> maybe I should choke it down to like 5-8 processing threads instead of 20 to clear this up
<headius[m]> I am doing 20 threads with 100 connections in wrk but maybe it just can't keep the 20 server threads busy
<headius[m]> puma's handle_request comes up in a few places
<enebo[m]> at some point it probably overburdens the pool....but then I would expect your results to be sub-optimal
<headius[m]> I'll give it another go and let it chew for a while so everything jits
<headius[m]> fwiw I do see the hot Ruby methods get jitted, but once they do they aren't in top ten sampled hits
shellac has quit [Ping timeout: 250 seconds]
<headius[m]> heh 0.0% 0 + 12 json.ext.Generator$Handler.generateNew
<headius[m]> enebo: so that json date thing does show up a little
rusk has quit [Remote host closed the connection]
<headius[m]> there's a lot of ExceptionBlob too so probably more non-local return or break
<headius[m]> gotta get those blocks inlining
<headius[m]> fwiw removing the to_json only bumps it up another 1000 rps
<headius[m]> still not up to 50k 😡
<headius[m]> oooo 49.8k
<enebo[m]> so oj likely will end up giving 700 rps or so but that is not nothing
<headius[m]> I think my 30s runs are getting throttled
<headius[m]> CPU temp shoots up to 100C
<enebo[m]> yeah could be...although you should see longer std dev on requests
<headius[m]> 51.1 👍
<headius[m]> well if I run 30s chunks back to back in a loop I get occasional big drops
<headius[m]> this isn't the env I'll benchmark for rubyconf anyway
<headius[m]> ok so I am consistently around or above 50k rps with or without json
<headius[m]> json does seem to eat about 1000rps though
<headius[m]> my 30s runs were obviously getting throttled
<headius[m]> I guess it's time to go to the CLOUD
<headius[m]> I suppose I should be using IBM cloud now
<headius[m]> 🔷☁️
<headius[m]> kares: enebo: I wonder if we should repurpose the setAccessible flag we have to be a "force" mode that ignores module openness: https://github.com/jruby/jruby/issues/5969
<headius[m]> by my estimation we're doing the "right" thing now by avoiding setAccessible but there will be legacy code out there that needs to make a call and doesn't care about warnings...a single flag escape hatch might be nice
<headius[m]> on the other hand it might mean people never fix those libraries
<headius[m]> we also need to figure out a way to allow Ruby libraries to request that a module be opened before accessing it...if that's even possible
<headius[m]> also there's clearly something else wrong here because we go ahead and try to call the method that's not accessible
sagax has quit [Read error: Connection reset by peer]
drbobbeaty has quit [Read error: Connection reset by peer]
drbobbeaty has joined #jruby
sagax has joined #jruby
victori has quit [Ping timeout: 240 seconds]
<lopex> rdubya[m]: pinged cpuguy on that github issue
<enebo[m]> headius: yeah Java modules are difficult for us. Global option to ignore would be nice to tell people who run into the issue but leave it off by default
<enebo[m]> Overall I guess we want something where a library could say which opens they need but I have no idea how we would automatically add them
<enebo[m]> gem install posthook?
<rdubya[m]> lopex: great, thanks!
<lopex> rdubya[m]: I think we should find another non-musl minimal distro too
<rdubya[m]> are there others out there? Guess I haven't looked much past alpine
<rdubya[m]> I did run into https://github.com/GoogleContainerTools/distroless/tree/master/base the other day, but it doesn't even have a shell
<lopex> rdubya[m]: I've run into some ffi/ldap issues
<lopex> not ldap related directly though
<lopex> it might even be some socket issue on musl
<fidothe> enebo[m]: I finished up on my PR for https://github.com/jruby/jruby/issues/5905. Benchmark results are at: https://gist.github.com/fidothe/7d76f60cf2adbb8a2aab8adf21f40192
<fidothe> headline, with new code the string/regex paths are about equal, whereas the regex path was about 50% faster before.
<fidothe> Not sure what scope for further improvement there is but I imagine there is plenty.
<headius[m]> looks great!
<headius[m]> first line of attack for further optimization is always allocation
<headius[m]> on JDK 8 you can pass -Xrunhprof:depth=<stack depth> and it will trace all allocations with unique stack traces of that depth
<headius[m]> it's very slow
<headius[m]> there's also the async-profiler JVM extension that's faster but less accurate
<headius[m]> hprof dumps out a java.hprof.txt file with a list of top allocations at the bottom and a bunch of stack traces up top
<rdubya[m]> lopex: alpine still doesn't support anything higher than Java 8 at this point, does it?
<lopex> rdubya[m]: no idea
<fidothe> headius[m]: oooh. How do you know what stack depth to put in?
<headius[m]> depends how much info you need
<lopex> MatchData is thread local isnt it ?
<headius[m]> usually needs to be more than 5 if you want to get up to the Ruby level, since there's dyncall dispatch frames in there
<lopex> does it need to be alloced every time ?
<headius[m]> I think 5 is default but will usually only show JRuby internals...but that might be plenty in this case
<headius[m]> lopex: yes
<headius[m]> well, frame local
<headius[m]> hmm
<headius[m]> well we could use a copy-on-capture sort of thing and pre-allocate them along with the frame
<headius[m]> but right now I think they usually get allocated anew
<lopex> but the non-block versions could get rid of it right ?
<headius[m]> like if you only accessed $1, $2 etc, or never accessed $~ directly at all, we could possibly leave the object in the frame and reuse it
<lopex> I'm lost
<headius[m]> frame is method level
<headius[m]> blocks use the same frame as containing method
<headius[m]> matchdata is $~ and lives on frame
<headius[m]> we flag methods that might set or get $~ and always ensure callers have a frame
<headius[m]> so that means everywhere there's a call to "sub" or "gsub" we always prep a frame pessimistically
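The frame-locality headius describes is observable from plain Ruby (method name here is illustrative):

```ruby
# $~ lives in the calling method's frame: a match inside a method
# sets that method's backref, not its caller's.
def first_word(str)
  str =~ /(\w+)/  # sets this frame's $~
  $1              # $1 reads through the frame-local $~
end

first_word("hello world")  # returns "hello"
$~                         # still nil out here: the callee's match didn't leak
```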
<lopex> but there was a difference between the iter and non-iter versions
<headius[m]> that could be done lazily but we've never figured out a good clean way to do it without having deoptimization
<headius[m]> hmm
<lopex> since in non-iter versions we could reuse that instance
<lopex> and only set it once
<lopex> since it doesn't escape to the block
<headius[m]> well blocks always force a frame
<headius[m]> oh perhaps because of recursion?
<lopex> no
<lopex> we know statically whether there are blocks or not
<headius[m]> if we know it's a leaf method and there's no dispatch back to Ruby then yes it could be a single threadlocal
<headius[m]> yes
<fidothe> hey, lopex: i have a load of Docker images generating in a not-quite-automated-enough way if you're interested for that Docker github issue: https://github.com/fidothe/circleci-jruby-docker
<headius[m]> so if we don't have a block we could reuse the matchdata because we know it won't be read during the scope of that call
<lopex> headius[m]: so same applies for split and scan too
<headius[m]> fidothe: omg yeah if we could get this docker thing settled it would be a load off my mind
<lopex> fidothe: oh, cool
<headius[m]> we'd just add a docker push to the release cycle
<fidothe> lopex: i need multiple JVM versions in CircleCI, so I had to roll my own
<headius[m]> lopex: yeah I suppose so
<headius[m]> we did make an improvement this spring in that $~ only sets up the frame for $~ and not for $_ or visibility or other garbage
<headius[m]> so that reduced it to one field clear on the way out plus frame stack math
<headius[m]> partial frame basically
<headius[m]> bbiab lunch
<lopex> especially since MRI has that new split taking a block
<lopex> so same problems here
<lopex> since 2.6
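For context on lopex's point: since 2.6, `String#split` accepts a block and yields each field instead of building an array, so it raises the same frame/backref questions as gsub. A minimal sketch:

```ruby
# Ruby 2.6+ block form of String#split: fields are yielded one at a
# time rather than collected into a result array.
fields = []
"a,b,c".split(",") { |f| fields << f }
fields  # ["a", "b", "c"]
```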
<headius[m]> if not for $~ and $_ we'd probably only need frame for eval/binding and blocks
<headius[m]> stupid implicit variables
<fidothe> lopex: yeah, i saw that one. headius[m]: I don't think we even need to do docker push for the official images. We need an official image repo, much like cpuguy's, that gets updated on release (that could be a build step, sure), and then Docker themselves build the images.
<lopex> fidothe: so, like from travis ?
<fidothe> lopex: yeah
<fidothe> exactly
<fidothe> need to finish reading the official image contributor docs to be sure what the exact process is
<fidothe> but you need a lot of directories with hardcoded Dockerfiles in
<fidothe> We can have a fully automated version of the status quo pretty easily, i think
<lopex> fidothe: you mean, for different jdk base images ?
<fidothe> the larger, longer term, question is what architectures, JVM versions, JDK distributions, etc are wanted
<lopex> yeah
<lopex> well, we can generate dockerfiles
<lopex> fidothe: but it always gets exponential
<lopex> whatever the distro
<fidothe> lopex: for every version of the JRuby official docker image - every tag - you have to have a separate named dir with a Dockerfile in
<fidothe> so it gets nasty for us very fast. Current images are locked to OpenJDK 8
<lopex> fidothe: but everyone who depends on jdk has the same problem right ?
<fidothe> which was the problem I had - I'm testing multiple JDK versions on CircleCI, which uses Docker, so I need multiple images...
<fidothe> lopex: everyone who is making an image that depends on JDK that other people depend on for their images does
<lopex> yeah
<lopex> since it's just a tree
<fidothe> if you're building an app, then you just pull whatever you want in.
<fidothe> It's the being a dependency that's the killer
<lopex> fidothe: same problem if you depend on a maven image for example
<lopex> yeah
<fidothe> yeah
<fidothe> luckily, generating a shit ton of Dockerfiles is cheap
<lopex> sometimes it's easier to pull the dep during build than naming an image
<fidothe> and I don't believe that we have to build the images - Docker does that
<fidothe> so that's Not Our Problem
<lopex> afaik we just have to wget/curl jruby in the dockerfile, set the vars, and that's all
<lopex> or well, if docker allows copying from URLs now
<fidothe> nah, wget still
<lopex> so script is needed
<fidothe> yeah
<fidothe> we could improve some things with PGP-signed tarballs, since Docker likes that best, and then we don't have to fetch and insert SHAsums for the tarballs
<fidothe> the existing 'official' repo uses a simple bash script with sed to stick the version and SHA in. My multi-JDK repo has a Rakefile
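A rough Ruby equivalent of that bash+sed generator: render one Dockerfile per (JRuby version, JDK) pair from a template. The base image, download URL, and version strings below are illustrative, not the official repo's actual values.

```ruby
require "erb"

# One Dockerfile template; each generated directory gets a rendered copy.
TEMPLATE = ERB.new(<<~DOCKERFILE)
  FROM openjdk:<%= jdk %>-jre
  ENV JRUBY_VERSION <%= version %>
  RUN wget -O /tmp/jruby.tar.gz \
    https://repo1.maven.org/maven2/org/jruby/jruby-dist/<%= version %>/jruby-dist-<%= version %>-bin.tar.gz
DOCKERFILE

def render_dockerfile(version:, jdk:)
  TEMPLATE.result_with_hash(version: version, jdk: jdk)
end

puts render_dockerfile(version: "9.2.9.0", jdk: "8")
```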
<lopex> do they still forbid copying into an image for security reasons ?
<lopex> so only a local dir is allowed ?
<fidothe> how do you mean?
<lopex> like COPY local_foo image_foo
<fidothe> it's at image build time not runtime tho
<fidothe> if that's what you were wondering
<enebo[m]> fidothe: just reviewing, but can you verify that gsubCommon has two setBackRef(matcher) calls, one in the loop and one after the loop
<enebo[m]> I think we need the one in the loop as it is needed for each yield but do we need the one after the loop?
<lopex> fidothe: well we want to build images right ?
<enebo[m]> I believe it just sets the last match a second time and is not needed
<enebo[m]> lopex: yes
<fidothe> enebo[m]: should be - one in the loop, and one after. copied from the way the regexp path does it
<lopex> I think some sets should go away if no block is given
<enebo[m]> fidothe: perhaps it is not needed in that path either
<fidothe> IIRC it is needed, but could probably be nicer - the loop body one applies only when a block is given
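That in-loop set is observable from Ruby: inside a gsub block the frame-local `$~` is refreshed before each yield. A small sketch:

```ruby
seen = []
result = "foo bar".gsub(/\w+/) do |word|
  seen << $~[0]  # $~ points at the *current* match on every yield
  word.upcase
end
result  # "FOO BAR"
seen    # ["foo", "bar"]
```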
<lopex> so some of that should be split
<lopex> enebo[m]: we'll have the same problem for split for 2.6
<enebo[m]> lopex: this match is only created if a block is provided
<enebo[m]> and if not then it is null
<lopex> which line ?
<enebo[m]> 3345
<enebo[m]> I am pretty convinced now that I have looked. The else is still needed though
<fidothe> enebo[m]: i think probably not needed in either path. I didn't want to behave differently for that stuff since i didn't really understand enough beyond the immediate context to understand why it might be setting it again
<enebo[m]> fidothe: yeah I think so too. I will remove those
<fidothe> enebo[m]: within the loop, the MatchData gets created even if useBackref is false, so that if statement should probably be restructured too at some point
<enebo[m]> fidothe: good point
<enebo[m]> I doubt there is a side-effect there
lucasb has quit [Quit: Connection closed for inactivity]
<enebo[m]> After landing this I wondered if we could use java closure to pass in proper function for string/hash/block variants
<fidothe> it's my first time writing Java, so I didn't feel super confident changing structural stuff that wasn't immediately the problem
<enebo[m]> fidothe: yeah that's cool.
<lopex> fidothe: thumbs up
<enebo[m]> I generally look at stuff I merge and if I find a mystery I dig in a little bit
<enebo[m]> the meta question is whether matchData is ever actually false for a block passed in
<lopex> when in doubt ask MRI
<enebo[m]> lopex: yeah I think part of the problem here is this code and paths came from MRI
<enebo[m]> LOL
<lopex> enebo[m]: also some AI fuzzing should be used to guess what mri actually does
<lopex> and why
<fidothe> enebo[m]: excellent, thanks
<enebo[m]> lopex: trace back useBackref on gsubCommon
<fidothe> happy to have been able to get something done 🙂
<enebo[m]> useBackref is true in the only caller :)
<enebo[m]> lopex: so we could ask MRI why that field exists?
<lopex> enebo[m]: where, in mri or on that branch ?
<enebo[m]> Or whether perhaps we added it thinking we could eliminate it
<enebo[m]> on our master, as MRI has no gsubCommon method
<lopex> well, it should mean no block is passed
<enebo[m]> fidothe: I think you have tackled a great first PR since you increased perf on something you can observe
<lopex> but then you realize it becomes an api
<enebo[m]> lopex: it is 100% true though
<enebo[m]> aha str_gsub has no such param
<lopex> yeah, I looked
<enebo[m]> It does have a local param though need_backref
<lopex> enebo[m]: but only used in string.c
<lopex> so
<enebo[m]> and only within a single method
<lopex> maybe preparation of some sort :P
<enebo[m]> so a zero sum patch is to just unconditionally remove it
<lopex> MRI_NEED_BACKREF
<enebo[m]> maybe it can be a boolean for adding it in the case of a block, since no other form should use it
<fidothe> I thought gsubFast set useBackref to false
<enebo[m]> but it will be local to both gsubCommon
<enebo[m]> fidothe: I am looking at a different version
<enebo[m]> you are correct though, that version does seem to have 3 false callers and 1 true caller
<enebo[m]> Although I question whether gsubFast needs to even be special since we pass in a null block but I guess you can see how twisty this all gets :)
<enebo[m]> All three false callers are ones which are not providing a block
<enebo[m]> the fourth caller might be providing a block
drbobbeaty has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]