victori has joined #jruby
rusk has joined #jruby
shellac has joined #jruby
drbobbeaty has joined #jruby
drbobbeaty has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
shellac has quit [Quit: Computer has gone to sleep.]
_whitelogger has joined #jruby
shellac has joined #jruby
drbobbeaty has joined #jruby
ezzeddine[m] has quit [Remote host closed the connection]
TimGitter[m] has quit [Remote host closed the connection]
lopex[m] has quit [Remote host closed the connection]
UweKuboschGitter has quit [Remote host closed the connection]
headius[m] has quit [Remote host closed the connection]
elia_[m] has quit [Remote host closed the connection]
TimGitter[m]1 has quit [Remote host closed the connection]
FlorianDoubletGi has quit [Remote host closed the connection]
ChrisSeatonGitte has quit [Read error: Connection reset by peer]
enebo[m] has quit [Remote host closed the connection]
ThomasEEneboGitt has quit [Remote host closed the connection]
JulesIvanicGitte has quit [Remote host closed the connection]
olleolleolle[m] has quit [Read error: Connection reset by peer]
donv[m] has quit [Remote host closed the connection]
anubhav8421[m] has quit [Read error: Connection reset by peer]
OlleJonssonGitte has quit [Read error: Connection reset by peer]
BlaneDabneyGitte has quit [Remote host closed the connection]
cshupp[m] has quit [Remote host closed the connection]
brodock[m] has quit [Read error: Connection reset by peer]
asp_ has quit [Write error: Connection reset by peer]
vs-de[m] has quit [Write error: Connection reset by peer]
metafr[m] has quit [Read error: Connection reset by peer]
RomainManni-Buca has quit [Remote host closed the connection]
rdubya[m] has quit [Remote host closed the connection]
cbruckmayer[m] has quit [Remote host closed the connection]
liamwhiteGitter[ has quit [Remote host closed the connection]
CharlesOliverNut has quit [Read error: Connection reset by peer]
MarcinMielyskiGi has quit [Read error: Connection reset by peer]
voloyev[m] has quit [Read error: Connection reset by peer]
kares[m] has quit [Remote host closed the connection]
mattpatt[m] has quit [Remote host closed the connection]
XavierNoriaGitte has quit [Remote host closed the connection]
MattPattersonGit has quit [Remote host closed the connection]
KarolBucekGitter has quit [Remote host closed the connection]
JesseChavezGitte has quit [Remote host closed the connection]
dsisnero[m] has quit [Remote host closed the connection]
dsisnero[m] has joined #jruby
asp_ has joined #jruby
lopex[m] has joined #jruby
sandio[m] has joined #jruby
cbruckmayer[m] has joined #jruby
JesseChavezGitte has joined #jruby
vs-de[m] has joined #jruby
RomainManni-Buca has joined #jruby
FlorianDoubletGi has joined #jruby
TimGitter[m]1 has joined #jruby
ChrisSeatonGitte has joined #jruby
TimGitter[m] has joined #jruby
kares[m] has joined #jruby
elia_[m] has joined #jruby
headius[m] has joined #jruby
ezzeddine[m] has joined #jruby
metafr[m] has joined #jruby
cshupp[m] has joined #jruby
KarolBucekGitter has joined #jruby
XavierNoriaGitte has joined #jruby
mattpatt[m] has joined #jruby
brodock[m] has joined #jruby
OlleJonssonGitte has joined #jruby
olleolleolle[m] has joined #jruby
liamwhiteGitter[ has joined #jruby
JulesIvanicGitte has joined #jruby
anubhav8421[m] has joined #jruby
voloyev[m] has joined #jruby
enebo[m] has joined #jruby
CharlesOliverNut has joined #jruby
BlaneDabneyGitte has joined #jruby
MattPattersonGit has joined #jruby
UweKuboschGitter has joined #jruby
MarcinMielyskiGi has joined #jruby
donv[m] has joined #jruby
rdubya[m] has joined #jruby
ThomasEEneboGitt has joined #jruby
lucasb has joined #jruby
hosiawak has joined #jruby
<hosiawak> What's the recommended way to run JRuby on Rails in production ? I looked at Warbler but it creates a .war file that exits with a cryptic Rack error when deployed to Tomcat. I looked at Torquebox but turns out the project is no longer maintained. Any other options ?
<hosiawak> or is there a way to create the working .war file (Rails 6.0) myself ?
<headius[m]> Warbler is not working?
<hosiawak> headius[m]: it creates the .war file but it's not working when deployed to Tomcat, similar issue to this one: https://github.com/jruby/warbler/issues/460
<hosiawak> headius[m]: just wondering if there are any other methods to create a working .war file? Warbler doesn't seem like it's maintained anymore either
<hosiawak> headius[m]: looks like it's failing at Bundler.setup but no idea why https://gist.github.com/hosiawak/4e19df60fc522eeb005cc3bd2f6d0ed5
<headius[m]> There are no others that I know of...warbler needs some updating to be sure.
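(For reference, the usual Warbler flow, as a minimal sketch: the attribute names come from Warbler's documented config and may need adjusting for the Warbler and Rails versions in use. This does not address the Bundler.setup failure above; it just shows the baseline setup being discussed.)

    # Gemfile: gem 'warbler' (typically in a :development group)
    # config/warble.rb, generated by `warble config` and then trimmed down:
    Warbler::Config.new do |config|
      config.jar_name = "myapp"                      # produces myapp.war
      config.dirs = %w(app config db lib log public vendor)
      config.bundle_without = %w(development test)   # keep production gems only
    end
    # Then `warble war` (or `bundle exec warble war`) and drop myapp.war
    # into Tomcat's webapps/ directory.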
<headius[m]> Hmm it's literally exiting?
dopplergange has quit [Ping timeout: 240 seconds]
dopplergange has joined #jruby
xardion has quit [Remote host closed the connection]
xardion has joined #jruby
<headius[m]> Ok I'm at my machine now
<headius[m]> the error in that report is a bit different
<headius[m]> no such file to load -- rails/railtie
<headius[m]> seems a problem finding the rails libs
shellac has quit [Ping timeout: 245 seconds]
<kares[m]> oh setAccessible keeps biting ... :(
<headius[m]> endless fun
<kares[m]> fyi: started helping out LogStash
<kares[m]> first thing I got is a JRuby thingy but I think this will need to be dealt with on their end
<headius[m]> I haven't committed the tiny change yet but running tests locally now
<headius[m]> oooo nice, good for you
<kares[m]> basically Timeout.timeout having an overhead ... locking/contention
<kares[m]> but I do not see any good options for improving it on JRuby's side
<headius[m]> hmm two failures with my patch
<kares[m]> ScheduledThreadPoolExecutor is the best there is
<headius[m]> timeout is ugly no matter how you slice it
<kares[m]> yy
<headius[m]> what we have should be lightweight unless the timeouts fire...may be possible to reduce some contention somewhere
<kares[m]> timeouts do not fire ... but there's contention adding new tasks
rusk has quit [Remote host closed the connection]
<kares[m]> there's a blocking queue we can not replace
<headius[m]> hmm
<headius[m]> yeah
<headius[m]> wait how many tasks are getting tossed into this thing?
<headius[m]> contention on timeout would have to mean hundreds or thousands
<kares[m]> yeah 100s or 1000s
<kares[m]> basically wrapping every regexp with a timeout
<kares[m]> looking into reducing that - so that groups are wrapped
<kares[m]> still, was interested if you have thoughts ...
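(For context, the pattern being described is roughly the following. This is a sketch of the Logstash-style usage, not its actual code; the regexp and timeout value are made up. The relevant point is that every call goes through Ruby's Timeout, which on JRuby schedules a task on the shared scheduled executor, so thousands of matches per second become thousands of operations on its internal queue.)

    require 'timeout'

    MATCH_BUDGET = 0.5  # seconds; illustrative

    def match_with_timeout(regexp, str)
      # Each call enqueues a timeout task on the shared scheduled executor
      # backing Timeout on JRuby; that queue traffic is where the
      # contention shows up under heavy regexp load.
      Timeout.timeout(MATCH_BUDGET) { regexp.match(str) }
    rescue Timeout::Error
      nil
    end

    # A classically backtracking-heavy shape, for illustration:
    match_with_timeout(/(a+)+$/, "a" * 40 + "!")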
<headius[m]> yeah that would be my first thought
<kares[m]> did some minor allocation improvements already that helped a bit
<kares[m]> but all in all seems like I won't do much in JRuby itself here ...
<headius[m]> it might be possible to configure our own scheduled thread pool executor that uses a lighter queue, maybe?
<kares[m]> unless someone else starts complaining about timeout contention :)
<headius[m]> yeah throwing thousands of jobs into an executor is heavy
<kares[m]> was actually looking into that
<kares[m]> using a ScheduledThreadPool with a custom concurrent queue
<headius[m]> ok
<kares[m]> but the Scheduled pool impl relies quite a lot on its special queue
<kares[m]> so no easy way to go there ... except for writing our own pool + queue
<kares[m]> poked around Java impls but even old Android versions use Doug Lea's HotSpot ScheduledThreadPool impl
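(A sketch, via JRuby's Java integration, of the "schedule a timeout task, cancel it when the match finishes" pattern on the stock executor. The queue itself cannot be swapped, as noted above, but setRemoveOnCancelPolicy(true) is a real Java 7+ knob that at least keeps cancelled timeout tasks from lingering in the delay queue until they would have fired; whether it helps the contention seen here is an open question.)

    java_import java.util.concurrent.ScheduledThreadPoolExecutor
    java_import java.util.concurrent.TimeUnit

    pool = ScheduledThreadPoolExecutor.new(1)
    # Remove cancelled tasks from the delay queue right away instead of
    # letting them sit there until their delay expires:
    pool.set_remove_on_cancel_policy(true)

    timeout_task = pool.schedule(
      java.lang.Runnable.impl { puts "regexp took too long" },
      500, TimeUnit::MILLISECONDS
    )
    # ... run the regexp match here ...
    timeout_task.cancel(false)   # the common case: the match finished in time
    pool.shutdown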
<headius[m]> hmm
<headius[m]> saw a SO post about this
<kares[m]> interesting ... actually do not need to have exact timeouts
<headius[m]> kares: may be better in a more recent JDK or JSR166 impl?
<kares[m]> but I also do not care about io for regexp matches
<kares[m]> not really
<kares[m]> actually felt like 11 is a bit worse than 8
<headius[m]> yeah might be something else out there but needs more research I guess
<headius[m]> maybe with that much timeout they need a regexp service
ezzeddine[m] has quit [Quit: User has been idle for 30+ days.]
cpuguy83 has joined #jruby
cpuguy83[m] has joined #jruby
<enebo[m]> kares: you say only to cancel long regexps?
<headius[m]> for anyone trying to benchmark on Linux this seems to be the source of massive default latency
<headius[m]> Java 10+ adds a socket option to enable QUICKACK
<headius[m]> otherwise I'm not sure how to work around this
<headius[m]> Red Hat apparently provides a system-wide flag to disable delayed ACK
<headius[m]> so I'm gonna spin up a RHEL instance for now
<headius[m]> or Fedora I suppose
<headius[m]> bleh why aren't there any default fedora AMIs on AWS
<headius[m]> whatever, I can download what I need on RHEL
<enebo[m]> headius: how do you delay it?
<enebo[m]> haha disable
<enebo[m]> I want EXTRASLOWACK
<headius[m]> apparently most linuxes set this default 40ms delay for ACK as some ancient premature optimization to batch ACKs together
<headius[m]> but in this case it effectively forces latency on every request to at least 40ms
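(A sketch of the Java 10+ option mentioned above, via JRuby's Java integration. The class and option names come from the JDK, jdk.net.ExtendedSocketOptions.TCP_QUICKACK; how to get a hook onto the sockets Puma actually accepts is a separate problem, so treat this as illustrating the API rather than a drop-in fix.)

    # Java 10+ only; on older JDKs the constant or the option is missing.
    java_import 'jdk.net.ExtendedSocketOptions'

    # java_socket is a java.net.Socket, e.g. one returned by ServerSocket#accept.
    def enable_quickack(java_socket)
      java_socket.set_option(ExtendedSocketOptions::TCP_QUICKACK, true)
    rescue NameError, Java::JavaLang::UnsupportedOperationException
      # JDK or platform without TCP_QUICKACK: leave the delayed-ACK default alone.
    end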
<enebo[m]> yeah sounds like it was written about telnet and human response time
<enebo[m]> which is pretty bad if you are not a human
<headius[m]> so I get 200 requests/s instead of 50k+
<enebo[m]> if you read lower in that SO someone said MacOS did something else to alleviate the issue
<headius[m]> yeah
<headius[m]> wtf linux
<enebo[m]> but it really begs the question...Do all servers do this or are apps so slow most people do not notice
<enebo[m]> Let's face it returning a date as json as the payload is not much work
<headius[m]> oh man, this has to have affected people's impressions of JRuby vs MRI
<headius[m]> MRI does not seem to have the same problem but I can't confirm it sets QUICKACK
<enebo[m]> wait wot
<enebo[m]> you are saying MRI gets more than 200 r/s then?
<headius[m]> actually I didn't confirm on my instance but Noah had that result last spring
<headius[m]> and he also saw this exactly 40ms delay on JRuby
<enebo[m]> ah yeah so one mystery solved
<headius[m]> enebo: you are on fedora
<headius[m]> test this roda app
<headius[m]> I pushed a jruby repo version
<headius[m]> jruby/ruby_benchmarks in the roda_version dir, should bundle and startup fine with -e production
<headius[m]> damn centos doesn't seem to have this redhat global ack thing
<headius[m]> enebo: this could be reducing other numbers you've run too
<enebo[m]> puma and what wrk params?
<enebo[m]> and are you using indy?
<headius[m]> I was doing wrk -t8 -c8 -d10
<enebo[m]> so roda is not calling through https either then
<headius[m]> well no, there's no config for https in this
<headius[m]> indy won't matter for a two-orders-of-magnitude drop in perf
<headius[m]> Oliver tried on his Linux/Ryzen system and he can't get it to exceed half of one CPU usage
<enebo[m]> 1820 r/s atm
<enebo[m]> yeah weirdly noisy
<headius[m]> should be way higher than that
<headius[m]> you could try MRI, I think it will bundle ok
<enebo[m]> sure but that is 6x what you were getting so that is also weird
<headius[m]> well I was on EC2
<headius[m]> dunno how VCPU is translating
<enebo[m]> yeah I was just wondering that
<headius[m]> you have 4 real and 4 virt so that could easily make it up
<headius[m]> I think mine was 4 VCPU
<enebo[m]> Puma starting in single mode...
<enebo[m]> * Version 4.3.0 (jruby 9.2.10.0-SNAPSHOT - ruby 2.5.7), codename: Mysterious Tra
<enebo[m]> 8min/8max threads and 8 theoretical cores
<enebo[m]> although this is a laptop with crap running on it too
<headius[m]> blast this rhel instance doesn't have the global ack either
<enebo[m]> I don't recall but the rg.org bench was nowhere near this fast so it probably was not a factor for that
<headius[m]> enebo: seems like you are having same problem
<headius[m]> our CPUs should be roughly the same and I get 50k/s on MacOS
<headius[m]> and 40ms for this request would be absurdly long
<headius[m]> pass --latency to wrk and you'll see they're all 40ms
<enebo[m]> Latency 12.94ms 14.67ms 48.10ms 74.93%
<enebo[m]> must be average though
<headius[m]> --latency didn't give you a histogram?
<enebo[m]> well I have yet to run it
* enebo[m] sent a long message: < >
<enebo[m]> That is when it was 1378 r/s
<enebo[m]> 50% is less than 1ms now
<enebo[m]> I am also using Java 8
* headius[m] sent a long message: < >
<headius[m]> 800 connections was me trying to get around it but same result with 8
<enebo[m]> I am definitely not getting 40ms
<enebo[m]> at least not according to wrk
<headius[m]> your jruby isn't running dev mode or something is it?
<headius[m]> maybe this is something in cloud
<headius[m]> I challenge you to get 100% of all CPU
<enebo[m]> no but I am starting with indy
<headius[m]> something's not working on your system either
<enebo[m]> haha where the hell are the config files for wrk? I strongly suspect I am not using keep-alive which would explain the difference
<headius[m]> grep your dotfiles for "Close" maybe
<headius[m]> if I remember right it was Http-Connection: Close or something
<headius[m]> unsure about caps
<headius[m]> wrk has garbage for doco
<enebo[m]> Is it possible I am thinking of ab?
<headius[m]> maybe
<headius[m]> so there's a flag for wrk
<headius[m]> I think we used that to get wrk to act like ab
<headius[m]> trying to think now...I think we set a config in ab for http version?
subbu is now known as subbu|lunch
<enebo[m]> with connection close this jumps to 14k/s
<headius[m]> and now you'll hit socket starvation like AB
<headius[m]> I'm trying to hack JRuby to set quickack
<enebo[m]> well I wonder when though
<headius[m]> I mean it certainly might not be that but the 40ms thing is really suspicious
<enebo[m]> I have made about 3 million connections
<headius[m]> I would hit it with ab around every 20-30k connections
<headius[m]> it would stall
<headius[m]> it doesn't stop going but it stalls
<enebo[m]> It may be happening now...I saw a single read error in that 30s run
<headius[m]> yeah
<headius[m]> that could be it
<enebo[m]> but I am hitting 400k-500k per 30s run
<headius[m]> I'm sure it varies system to system
<headius[m]> my 20-30k was on MacOS
<enebo[m]> linux seems more resilient
<headius[m]> there's a reason XServe never went anywhere
<enebo[m]> still 15k/s for way over 10 million requests
<headius[m]> still low
<headius[m]> but closer
<enebo[m]> with connection: close though
<headius[m]> what's your CPU utilization?
<headius[m]> true
<headius[m]> closing the connection may just ignore the ack delay
<enebo[m]> about 76%
<enebo[m]> I would expect a keep-alive to do much better
<headius[m]> yeah
<headius[m]> it would
<enebo[m]> non-keep alive at 15k does not seem bad in comparison
<headius[m]> 50k locally
<headius[m]> maybe more, I simplified that bench a bit since I got a high of 51k
<enebo[m]> but 45k could easily be just coping with creating new connections every time
<enebo[m]> I wonder if I am raising on puma with keep-alive on
<headius[m]> I did not disable keepalive though
<enebo[m]> yeah I mean I would expect you to get a much better result with keep alive
<enebo[m]> and you do
<headius[m]> oh ok
<enebo[m]> so we have even more questions :)
<headius[m]> yeah like 5x improvement over your results could easily be connection establishment
<enebo[m]> I am not seeing 40ms at all on FC29
<headius[m]> but you see how messed up your keepalive numbers were now
<enebo[m]> So I suspect this does not have that particular problem
<headius[m]> 10x slower
<headius[m]> than no keepalive
<headius[m]> nonsense
<enebo[m]> yeah but remember Puma was broken with keep alive
<headius[m]> not on MacOS
<headius[m]> seems weird
<enebo[m]> well but always on linux
<enebo[m]> it was one reason why we did non-keep alive plus it was way more realistic for benching
<headius[m]> that's not clear to me
<enebo[m]> it is much more likely to have n users than a single one
<headius[m]> well, realistic in some ways
<enebo[m]> neither are really ideal
<enebo[m]> you obviously would want a mix
<headius[m]> a given web hit usually will send a couple dozen requests over same socket
<enebo[m]> like 200 keep alive sessions
<headius[m]> no browser reconnects for every request on a given page
<enebo[m]> but age those and make new sessions occasionally
<headius[m]> yeah that would make sense
<headius[m]> dunno how to do that with these tools
<enebo[m]> I am just saying very few apps have one user banging away
<headius[m]> so simulate like 50-200 requests before connection closing
<headius[m]> yeah and I'm saying very few apps reestablish a new connection every time
<enebo[m]> It is a bit depressing how bad benching web apps is in the public realm
<enebo[m]> I have no doubts there are people internally at companies who script out sessions and realistically load their apps
<enebo[m]> but I mean where is the docs on how to do that
<enebo[m]> It is clear to me that it is a highly specialized domain and most people do not do that
<enebo[m]> but I do agree with you as well
<headius[m]> yeah
<enebo[m]> neither is realistic
<headius[m]> for purposes of benchmarking the web framework, though, keepalive makes more sense
<headius[m]> we're not benchmarking socket establishment
subbu|lunch is now known as subbu
cpuguy83 has quit [Remote host closed the connection]
cpuguy83 has joined #jruby
<headius[m]> hmm that's not helping me on my EC2 rig
cpuguy83 has quit [Ping timeout: 240 seconds]
subbu is now known as subbu|away
<kares[m]> enebo: yes pretty much interrupting long regexps
<headius[m]> maybe we need to figure out a way to timeout regexp based on some instruction limit 😟
<enebo> or we build in timeout with interruptible as a feature
<enebo> to joni
<enebo> Assume with that feature enabled there is some price: each regexp costs a bit more, but much less than using timeout + interrupting
cpuguy83 has joined #jruby
<headius[m]> there's not a lot of good options here
<headius[m]> MRI's timeout is dirt stupid...if they could interrupt regexp it would be a million times more overhead than us
<enebo> I am suggesting an extended API for this which would bake timeouts into joni
<headius[m]> oh you know what
<headius[m]> stupid idea: timeout sets an end time when it starts and joni checks that periodically
<headius[m]> no threads or scheduling involved
<headius[m]> joni kills itself
<enebo> yeah
<headius[m]> my first thought when you suggested joni do it was "wouldn't it be just as bad if joni used a scheduled pool"
<headius[m]> that's not a bad idea
<headius[m]> it would mean more API expansion, to support both thread timeout and "finish before" timeout
<enebo> I was thinking, in comparison to timeout, of checking nanotime every so often (probably at the occasional backedge instr check)
<headius[m]> this lib will have everything soon
<enebo> we can also completely separate the engine for this if it somehow complicates the common path one too
<enebo> lopex already has multiple instr engines in it
<headius[m]> that's true
<headius[m]> lopex: ping!
subbu|away is now known as subbu
<headius[m]> kares: this might be a path
<headius[m]> no timeout at all
<headius[m]> I mean no timeout lib...you'd have to do .match(str, millis) or something
<enebo> yeah
<headius[m]> enebo: that's another wrinkle since it would necessitate a custom API
<enebo> yeah but I am ok with providing a JRuby-specific API and then submitting a API req to MRI to add it
<enebo> I am sure logstash will do it if it massively improves perf
<enebo> which, if you are submitting 1000s of threads for this, should be mighty
<kares[m]> interesting
<headius[m]> kares: sort of like an instruction limit but not as wonky
<kares[m]> do not want to hard push this ... unless of course there's a use-case (benefit) for others as well
<kares[m]> and it does not affect performance
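(To make the shape of that concrete: something like the following, where the keyword name and the behaviour on expiry are entirely hypothetical; nothing like it exists in JRuby today. The point is just that the deadline rides along with the match call and ends up inside joni, instead of a separate thread arming a Timeout around it.)

    # Hypothetical JRuby-specific API; not implemented anywhere yet.
    pattern = /(a+)+$/
    md = pattern.match("a" * 64 + "!", timeout: 0.25)
    # => nil, a raised error, or something else entirely; the semantics on
    #    expiry would be part of designing the API (and of any RFE to MRI).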
<headius[m]> enebo: why did interrupt checking slow things down? Because we're pinging it periodically?
<kares[m]> yeah there's checking through the loop
<kares[m]> for interrupts
<headius[m]> I'm trying to remember why we didn't just add an interrupt-check instruction and only compile that in when you ask for interruptibility
<headius[m]> maybe we couldn't figure out where the instruction should go
<kares[m]> that would be interesting
<enebo> It was more than that
<enebo> it was not the joni loop which was the issue as much as it was us wrapping that joni run in something to catch the interrupt
<enebo> so we fast pathed around the stuff setting up the try catch but then also made a more obvious fast and slow path into joni
<enebo> It is possible we could reduce the interrupt cost with more analysis of where back-edge-ish crap exists and check less
<enebo> but it is a natural place to do a time comparison as well
<enebo> I don't fully remember all the details though
<headius[m]> yeah back edge would be great if that's something we can determine
<headius[m]> so we add a timeout or interrupt ping on backedge only if requested and then non-timeout is unaffected
<enebo> This is not really a priority for me next couple of weeks but lopex definitely knows how to add this
<enebo> Optimizing both features to be better than interruptible is today may be in the cards, but we can get a quick win by just adding the timing check where we have interrupt code and adding a new path into joni
<headius[m]> yeah indeed
<headius[m]> kares: like enebo says not a high priority for us (maybe no priority) but it's a possible path forward if other options fall apart
<headius[m]> driven by Elastic obviously
<headius[m]> Jordan added the interrupt stuff initially I believe
<lopex> add what ?
<enebo> lopex: explicit timeout: time option to joni
<enebo> lopex: nanotime or something at same place as interupt check to exit if too much time has passed
<lopex> ah, yeah
<enebo> lopex: logstash is making potentially 1000s of timeouts around regexps and that is obviously really expensive
<enebo> kares[m]: lopex: I think if you guys want to add this it is a great idea. I am just busy for rubyconf prep atm. Bonus for considering adding an RFE to ruby-lang as well.
<headius[m]> yeah this might be motivation for them to add some sort of timeout too
cpuguy83 has quit [Ping timeout: 265 seconds]
<headius[m]> the way we have it now I'm not sure they can do it because it would require onigmo to know about MRI internals more than it does now, but just adding an end time to onigmo might be easier
<lopex> how expensive is nano though ?
<lopex> it would have to be read at every threshold we have for interrupt, right?
<enebo> lopex: much less expensive than setting up a thread and interrupting
<enebo> lopex: I do not understand that question
<enebo> lopex: I do not remember how often we check interrupt but it need not be once per ip in the interpreter of joni...
<lopex> so, yeah, at that same interval we have for interrupts, right?
<lopex> or more often ?
<enebo> lopex: no I do not think this needs to be all that precise
<enebo> lopex: I would be inclined to do it in same place as interrupt check for starters and we can maybe analyze if we can do something more crafty later
<enebo> lopex: if you add it there and nanotime is obnoxiously expensive then I guess it will make us consider how good a feature it is OR look into a way of checking less often
<lopex> not sure how up to date it is
<enebo> lopex: bah who cares...just use it in same place we check and see if you can even tell
<enebo> our default is more or less every 30k joni instrs
<enebo> I do wonder, though I have not thought about this before, if we could identify the back edges of joni instrs
<enebo> and I am not sure what backedges are in joni instr but places where it leads to backtracking would be an obvious place
<lopex> so pushes...
<enebo> An infinite string with a simple Kleene match would never be checked but I doubt something like that is realistically ever an interruptible case
<enebo> could be...I am not familiar with the instr set beyond these once-a-year conversations we have :)
<lopex> pushAlt to be more specific
<enebo> lopex: but for now I am curious what impact can be seen from even checking nanotime every 30k instrs
cpuguy83 has joined #jruby
<headius[m]> I thought there had been some improvements to this nanotime thing but I don't recall
<headius[m]> I remember discussions years ago about what that article discusses, specifically monotonicity across threads (or lack of)
<headius[m]> if I remember right nanotime was really slow at some point because it actually tried to sync threads
<headius[m]> pretty sure Cliff Click did a talk about it
<headius[m]> Cliff or Gil
cpuguy83 has quit [Ping timeout: 240 seconds]
<headius[m]> in any case this is a cost Elastic would opt-into if we could make it an instruction
<headius[m]> I was just trying to remember why we didn't do that for the thread ping
<enebo> we did not use it once because of VMs pausing
<enebo> but an interupt on a regexp paused while a VM is paused may just be the tradeoff you accept
<headius[m]> yeah we're not talking about timeouts in the minutes or hours
<headius[m]> if someone closes the lid while your 500ms regexp runs I think you accept that it's going to time out
<headius[m]> or not
<headius[m]> but won't be predictable
<enebo> yeah it may never interrupt but that's life
<headius[m]> `QueryPerformanceCounter(¤t_count);`
<headius[m]> what syntax is that 😳
<headius[m]> from that article
<enebo> heh we could add a second conditional looking to see if nanotime < savedchecktime
cpuguy83 has joined #jruby
<enebo> and interrupt because it paused and then came back with something completely different
<headius[m]> yeah that would be the simple way
<headius[m]> so it starts out with (string, timeout) and interrupts if nanoTime exceeds startNanoTime + timeout
<enebo> yeah or somehow ends up less than start
<enebo> OR we reset it to start
<enebo> in any case that is way too corner case to worry about for this
<headius[m]> yeah nanoTime must be within [startTime, startTime + timeout]
<headius[m]> heh yeah
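(Putting the pieces together as a runnable sketch, in Ruby rather than in joni's actual interpreter loop: take the deadline once, only read nanoTime every N steps, mirroring the existing ~30k-instruction interrupt-check cadence, and re-anchor the window if the clock ever reads earlier than the recorded start, so a paused VM produces a late timeout rather than a nonsensical one. The error type and constants are placeholders.)

    require 'timeout'   # only for the Timeout::Error constant in this sketch

    CHECK_EVERY = 30_000   # roughly the existing interrupt-check cadence

    # `total_steps.times { ... }` stands in for joni executing instructions.
    def run_with_deadline(budget_nanos, total_steps)
      start    = java.lang.System.nano_time
      deadline = start + budget_nanos
      total_steps.times do |i|
        yield i                                     # one unit of matching work
        next unless ((i + 1) % CHECK_EVERY).zero?   # amortize the nanoTime call
        now = java.lang.System.nano_time
        if now < start
          # Clock appears to have gone backwards (paused/migrated VM):
          # re-anchor instead of firing a bogus timeout or never firing.
          start    = now
          deadline = now + budget_nanos
        elsif now > deadline
          raise Timeout::Error, "regexp budget exceeded"
        end
      end
    end

    # 100ms budget over a lot of trivial steps:
    begin
      run_with_deadline(100_000_000, 50_000_000) { |_| }
    rescue Timeout::Error => e
      puts e.message
    end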
<enebo> in the use of JIT threshold it mainly just means nothing will jit from that point forward
<headius[m]> anyway I assume kares will communicate this back as another path forward
<headius[m]> I just want to figure out this blasted 40ms thing
<enebo> and I am not positive we can see it be smaller or not.
<enebo> yeah me too
<headius[m]> ugh, search for jvm and latency and every article is about GC
drbobbeaty has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
hosiawak has quit [Ping timeout: 240 seconds]
hosiawak has joined #jruby
hosiawak has quit [Ping timeout: 276 seconds]
hosiawak has joined #jruby
hosiawak has quit [Ping timeout: 265 seconds]
hosiawak has joined #jruby
hosiawak has quit [Ping timeout: 240 seconds]
hosiawak has joined #jruby
lucasb has quit [Quit: Connection closed for inactivity]