enebo[m] has joined #jruby
lopex[m] has joined #jruby
kai[m]1 has joined #jruby
RomainManni-Buca has joined #jruby
kroth_lookout[m] has joined #jruby
TimGitter[m] has joined #jruby
kares[m] has joined #jruby
KarolBucekGitter has joined #jruby
liamwhiteGitter[ has joined #jruby
hopewise[m] has joined #jruby
byteit101[m] has joined #jruby
nhh[m] has joined #jruby
rdubya[m] has joined #jruby
BlaneDabneyGitte has joined #jruby
ahorek[m] has joined #jruby
JesseChavezGitte has joined #jruby
UweKuboschGitter has joined #jruby
FlorianDoubletGi has joined #jruby
XavierNoriaGitte has joined #jruby
dentarg[m] has joined #jruby
OlleJonssonGitte has joined #jruby
ravicious[m] has joined #jruby
codeponpon[m] has joined #jruby
headius[m] has joined #jruby
daveg_lookout[m] has joined #jruby
slonopotamus[m] has joined #jruby
TimGitter[m]1 has joined #jruby
JulesIvanicGitte has joined #jruby
jpsikorra[m] has joined #jruby
boc_tothefuture[ has joined #jruby
MattPattersonGit has joined #jruby
CharlesOliverNut has joined #jruby
GGibson[m] has joined #jruby
MarcinMielyskiGi has joined #jruby
sagax has quit [Ping timeout: 256 seconds]
subbu has quit [Excess Flood]
subbu_ has joined #jruby
subbu_ is now known as subbu
subbu is now known as Guest91473
Guest91473 is now known as subbu
den_d_ has joined #jruby
den_d has quit [Ping timeout: 256 seconds]
den_d_ is now known as den_d
lopex[m] has quit [Quit: Idle for 30+ days]
lopex[m] has joined #jruby
sagax has joined #jruby
ruurd has quit [Quit: ZZZzzz…]
souravgoswami[m] has joined #jruby
<souravgoswami[m]> Hello!
<souravgoswami[m]> Hope you guys are doing good...
<souravgoswami[m]> I have a problem... I have a rubygem called LinuxStat, which has a lot of C extensions
<souravgoswami[m]> While trying to install it on JRuby for the first time, I am getting a tonne of errors
<souravgoswami[m]> I don't know what the JRuby development package is, apart from the ruby.h header file, which I already have.
ruurd has joined #jruby
<travis-ci> jruby/jruby (master:9bbd59f by Benoit Daloze): The build was broken. https://travis-ci.com/jruby/jruby/builds/215117480 [202 min 49 sec]
travis-ci has left #jruby [#jruby]
travis-ci has joined #jruby
<headius[m]> souravgoswami: hello!
<headius[m]> Unfortunately JRuby does not support C extensions because they are fundamentally unsafe
<souravgoswami[m]> Oh got it
<souravgoswami[m]> This is my gem, it calls Linux API
<souravgoswami[m]> So I don't think there's any way to do that in pure Ruby
<headius[m]> have you considered using FFI?
<headius[m]> you can call almost any C function just using Ruby
<souravgoswami[m]> Actually no, it directly uses C extension...
<headius[m]> You should be able to map these functions into Ruby using FFI and then it would work anywhere without an extension
<souravgoswami[m]> Got it... Thanks a lot for the suggestion!
<headius[m]> Sure! Here is the FFI wiki to get you started: https://github.com/ffi/ffi/wiki
<souravgoswami[m]> Got it... thanks ;)
<souravgoswami[m]> Doesn't it slow down the code if you use FFI? For example, if I call some X function in C from FFI, I call a Ruby method, which in C is like rb_funcallv_public(VALUE recv, ID mid, int argc, VALUE *argv); so am I not sacrificing a little bit of performance? Can't things like while loops get a little slowdown if I use FFI?
<headius[m]> if you wrap a C function call in a C extension, and someone calls that extension from Ruby, you still have the cost of making that Ruby call anyway
<souravgoswami[m]> Yes, but I am not doing that
<headius[m]> there is overhead for using FFI, yes, but depending on what you are calling it may not be an issue
<souravgoswami[m]> What I generally do is calculate stuff in C, and then cast it to VALUE
<souravgoswami[m]> whatever it returns as VALUE...
<headius[m]> if you are using C extensions to get faster performance than writing in Ruby, you would lose that using FFI (though Ruby gets faster and faster and it will do very well on JRuby)
<souravgoswami[m]> That's true
<headius[m]> a halfway approach would be to just write your C extension as a normal C library with C types and then call that from FFI
<headius[m]> so you can keep C performance for the operations the library does, but use FFI to connect it to Ruby rather than a C extension
<souravgoswami[m]> I actually think libraries should care about performance, because your user might do a billion loops, and you should waste as minimum resources as possible...
<souravgoswami[m]> FFI adds a lot of possibilities, but it's not for building some libraries I think
<souravgoswami[m]> Because what I have seen is that calling each method takes some time in Ruby. But in C, you have freedom: you can also use -O3 optimization, -march=native and -mtune=native, etc. With FFI I am not sure if you can do such stuff to build your code...
<headius[m]> yes you do get those benefits but you lose compatibility across Ruby implementations
<souravgoswami[m]> 😅😅
<headius[m]> if your goal is to be able to support JRuby with your library then FFI is really the only way
<headius[m]> that halfway approach might be the best option
<souravgoswami[m]> Yes... I understand... It adds possibility...
<headius[m]> see the sassc gem for example which ships as C sources... it builds the library on install and uses FFI to drive it
<headius[m]> works great on JRuby
<headius[m]> it does not use Ruby C ext API at all
<souravgoswami[m]> Yes, sure... But some of my gems (big_pie, string_dot_levenshtein) are written for speed; I even cared about microsecond delays... Went with lots of pointers to optimize! It is reliable though, and very fast
<souravgoswami[m]> So FFI will slow it down...
<souravgoswami[m]> In sassc it doesn't matter a lot I think...
<headius[m]> well in the halfway approach FFI just replaces the use of Ruby C ext API
<souravgoswami[m]> haha yeah...
<headius[m]> C isn't the problem... the problem is the C ext API which is very specific to CRuby
<souravgoswami[m]> Yup, I knew that, but I gave JRuby a try anyway... Because JRuby is written in Java, it probably makes no sense for it to compile native extensions and run them without creating subshells or new processes...
<souravgoswami[m]> Anyway, very nice talking to you... Have a good day!...
<headius[m]> sure, you too!
<Freaky> headius[m]: there it goes again... on the one instance I have that *doesn't* have full backtraces enabled ;)
<Freaky> maybe they fix it ;)
<Freaky> they're a pain to develop with; it'd be nice to have an abridged and a full backtrace in the same error
<headius[m]> yeah when you see one you'll understand why we don't have it on normally
<headius[m]> there are dozens of Java stack frames for every one Ruby frame
<Freaky> hmm
<Freaky> and again
<Freaky> it seems to happen early or not at all
<Freaky> which suggests JIT weirdness to me
<headius[m]> Yeah that could be
<headius[m]> For what it's worth that backtrace setting affects nothing but how we construct the exception
<headius[m]> Hmm
<headius[m]> If it is a jit thing we might be able to trigger it by repeatedly parsing
<headius[m]> Freaky: point me at the json that failed again please?
<Freaky> https://gist.github.com/Freaky/5b5a9af086fbbc2395e1a0f1d602a735 shorter more convenient one that I just had
<Freaky> bam
<Freaky> and again
<Freaky> why is it easier to reproduce today
<headius[m]> Thursdays amiright
<Freaky> and tmux just crashed
<Freaky> this machine has ECC memory, I swear ;)
ur5us has joined #jruby
<headius[m]> hmmm
<fidothe> Hey all. I have a stupid threading question. Given a method like `def m(arg); thing.x(arg); end` is it possible that calls to that method from different threads could result in return values going to the wrong (or two) threads?
<Freaky> headius[m]: heh
<Freaky> JSON.parse('{"took":37,"timed_out":false}')
<Freaky> I threw that into it to see if it did anything
<Freaky> unexpected token at '{"took":37,"timed_out":false}'
<fidothe> I'm seeing something like that happening in some production code and I'm just not sure if my understanding of what is vulnerable to concurrency leakage is correct
<Freaky> fidothe: depends what thing.x does
<fidothe> it shouldn't do anything with side effects
<headius[m]> the json lib should be fine but the strings passed in could be subject to concurrency issues
<headius[m]> if the string isn't being concurrently modified though I would not expect there to be any json concurrency issue
<Freaky> headius[m]: it literally errored on a frozen string literal
<headius[m]> fidothe: oh you weren't asking about json
<headius[m]> no that should not be possible
<fidothe> I'm pretty good at screwing up in weird ways
<headius[m]> fidothe: the return value should be stored on the call stack, somewhere local to the thread
<Freaky> fidothe: this is a requirement for being a programmer
<headius[m]> no chance of concurrency messing with it
<fidothe> the code is actually calling out to a java library
<headius[m]> if you are seeing something changing due to concurrency it would be outside that code
<fidothe> where I guess it's possible there's some synchronisation problems
<headius[m]> Freaky: that is helpful
<fidothe> need to isolate exactly what is being changed where
<headius[m]> Freaky: what version of json lib are you using
<fidothe> thanks anyway, good to have my basic assumption validated
<headius[m]> I tried locally on a HEAD build of 9.2.x and it was ok with the available json
<Freaky> 2.5.1-java
<headius[m]> fidothe: yeah good luck!
<headius[m]> feel free to ask again if you narrow down the issue
<headius[m]> Freaky: hmm that is what I have locally
<Freaky> I haven't been able to reproduce it on a minimal test case yet, only on this web app backed with postgres, redis and elasticsearch, which is a bit unwieldy ;)
<headius[m]> show me jruby -v
<headius[m]> oh you didn't repro this on a standalone JRuby run
<Freaky> jruby 9.2.14.0 (2.5.7) 2020-12-08 ebe64bafb9 OpenJDK 64-Bit Server VM 15.0.2+7-1 on 15.0.2+7-1 +jit [freebsd-x86_64]
<fidothe> headius[m]: I will. I think I can do a lot to figure out exactly which thing is wrong and then I imagine the wailing and gnashing of teeth will begin and I'll be back...
<headius[m]> Freaky: ok
<headius[m]> looking at the java code it looks like that full trace error is raised because it gets to the end of the parse and there's still characters
<Freaky> right
<headius[m]> given that it complains about the token being at the beginning I suspect encoding issue
<headius[m]> it is parsing it based on some encoding, perhaps multibyte, and not seeing the characters it expects to parse
<headius[m]> so it walks the whole thing and sees no json and blows up
<headius[m]> next thing to log would be the encoding of the string right before the parse call
<headius[m]> does your system deal with strings in any other odd encodings?
<Freaky> no
<Freaky> like I said, static frozen string
<Freaky> it's parsed before
<Freaky> it parses several times before blowing up
<headius[m]> ok hmmm
<Freaky> grr
<Freaky> got a test case with PrintCompilation but having problem copying the text ;)
<headius[m]> hah
<headius[m]> what JVM version?
<Freaky> 26 kilobyte long line
<Freaky> as it says in the version above
<Freaky> openjdk15-15.0.2+7.1
<Freaky> compiled from FreeBSD ports
<Freaky> and another tmux crash
<headius[m]> hmmmm
<headius[m]> oh no
<headius[m]> I see a bad thing
<headius[m]> potentially
<headius[m]> the Java Parser instance holds a reference to the source string
<headius[m]> if that instance is being reused with different strings it would not be threadsafe
<headius[m]> hmmm JSON.parse does not reuse it though
<headius[m]> new instance for each parse
<Freaky> 26634 14095 4 json.ext.Parser$ParserSession::parseObject (962 bytes) made not entrant
<Freaky> just before the exception
<Freaky> multiple cases of that in the log, are they bouncing in and out of the JIT cache or something?
<headius[m]> deopt
<headius[m]> not sure if that is a clue or not but interesting that it happened right before an error
<headius[m]> it could also be the JIT deoptimizing because of the error being raised though
<headius[m]> I am looking into this RuntimeInfo class in the json Java ext... it is caching objects from the JRuby runtime statically and the code scares me
<Freaky> http://voi.aagh.net/tmp/freaky-jruby-json-error.txt full log of several requests
<headius[m]> it tries to separate the cache by JRuby runtime but I'm not sure it is doing it safely
<headius[m]> separate per runtime I mean
<headius[m]> Freaky: how is this deployed?
<headius[m]> if it is just jruby running at command line the runtime thing is not going to be the problem
<Freaky> installed via rbenv, running ruby -GS puma
<headius[m]> ok so there will only be one JRuby instance
<Freaky> yeah, nothing fancy
<headius[m]> -G wow you are the first person I have ever seen use it
<Freaky> what, I'm going to run bundle exec like some kind of savage? ;)
<headius[m]> I know right
<headius[m]> hmmm
<headius[m]> `parseImplemetation`
<headius[m]> nice spelling
<headius[m]> Freaky: let's open a bug on json and continue exploration there
<headius[m]> it could still be a concurrency issue on your end or a bug in JRuby but I am leaning toward it being something odd in the json ext
<headius[m]> I am trying to understand the parser to see how it could reject a whole string
<Freaky> it's interesting that since I added that static JSON.parse, every time I trigger it, it's on that and not on the dynamic parse in the elasticsearch query after it
<headius[m]> JSON.parse('') produces the same error
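(A quick sanity sketch of what the parser does with empty input — the point being that Freaky's valid, frozen literal was getting the same rejection an empty string gets:)

```ruby
require 'json'

begin
  JSON.parse('')
rescue JSON::ParserError => e
  # Empty input is rejected with the same error class that was being
  # raised spuriously for the valid document above.
  puts e.class
end
```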
<headius[m]> Freaky: that is interesting
<headius[m]> er wait... it raises an error though and doesn't run the query right?
<Freaky> right
<Freaky> takes a few requests to trigger it, so it parses successfully a few times, then blows up and stops before it hits elasticsearch
<headius[m]> ah but you are saying it never fails on the query now
<Freaky> could just be a fluke
<headius[m]> ok yeah I see in the ext that it is basically rejecting the whole string
<headius[m]> damn, if we could reproduce this I could just set a breakpoint and inspect the data
<headius[m]> you can turn on JRuby JIT logs with -Xjit.logging btw but I have no evidence that our jit is at fault either
<Freaky> there we go, error from elasticsearch
<headius[m]> ok so just lucky that the others failed only on the short parse
<headius[m]> I will say that if that static string is failing that pretty much guarantees this is something on the json side
<Freaky> I'll try to reproduce it in openjdk 8
<headius[m]> yeah good call
<headius[m]> I am taking that simple json parse example that fails and running it a bunch
<Freaky> I'm loading the server and bouncing from the first and last page of a small repository a few times, then restarting
<headius[m]> ok I have a script running that small parse across 10 threads, 20k times each, and then I loop executing that script from scratch in a new process
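(The stress script itself isn't shown in the log; a sketch along the lines headius describes, using the thread and iteration counts he mentions, might look like:)

```ruby
require 'json'

# The short document from the failing case, as a frozen literal.
DOC = '{"took":37,"timed_out":false}'.freeze

# 10 threads, 20k parses each, trying to shake out the spurious
# ParserError under concurrency and JIT warmup.
threads = 10.times.map do
  Thread.new do
    last = nil
    20_000.times { last = JSON.parse(DOC) }
    last
  end
end

results = threads.map(&:value)
puts "all parses ok: #{results.all? { |h| h['took'] == 37 }}"
```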
<headius[m]> I will let this chew a bit and brb
<Freaky> while this is threaded, it's just a dev server, it's only me making sequential requests
<headius[m]> That's interesting
<headius[m]> Number of requests before it fails is unpredictable?
<headius[m]> At least we are narrowing it down
<Freaky> seems to happen early or never
<Freaky> first dozen or so
<Freaky> couldn't reproduce it in a few minutes on 8, but now I can't reproduce it on 15 either, having spent longer back there ;)
<Freaky> there
<Freaky> 7 requests
<Freaky> static string
<headius[m]> I should be running this on 15
<Freaky> I wonder if timing is a factor, since every time I try to script a test it doesn't find anything
<headius[m]> Turn on gc log
<headius[m]> -XX:+PrintGCDetails
<headius[m]> I doubt this is GC related but it's one of those things that happens on its own schedule
<Freaky> heh
<Freaky> that makes string dedup really slow
<headius[m]> I suppose an alternative would be configuring a different GC
<headius[m]> Assuming you are not changing the defaults it will be running the g1 collector... The parallel collector is much faster for small heaps
<headius[m]> It used to be... They have been improving g1
<Freaky> -J-Xmx16G -J-XX:ReservedCodeCacheSize=128m -J-XX:+UseG1GC -J-XX:+UseStringDeduplication
<headius[m]> Pretty big
<headius[m]> Parallel would be worth a shot to see if the problem remains
<Freaky> G1 seems fairly good at keeping its usage under control
<headius[m]> I doubt string dedup is related but would be good to confirm that too
<headius[m]> Yeah I just mean in case there's some GC oddity affecting this
<headius[m]> At that size heap g1 is probably the better choice in production
<Freaky> yeah
<Freaky> see if I can reproduce it with the log, and if it's associated with an eviction
<Freaky> difficult to rule things out because I keep having big gaps where I can't trigger it
<headius[m]> Hmmm
<headius[m]> Could have you attach a debugger or open up a port for me
<headius[m]> We can break right on that error line and see what's going on with the string
<Freaky> hmm, happened there, second request after a GC
<Freaky> never used a java debugger, but happy to poke
<headius[m]> Simplest would be to use jdb, I think we can set up a single command line that will break at the right point
<Freaky> yeah, trying to work out how to spell that, got a debugger attached
<headius[m]> I have to run for a while, but I can help with debugging a bit later or tomorrow
<headius[m]> Important thing to look at would be the state of the bytelist field at the point of error
<headius[m]> In the Parser session object, json.ext.Parser.ParserSession.parserImplementation() method I think
<headius[m]> bbl
<Freaky> thanks
<byteit101[m]> headius: A gentle reminder that I'm waiting for responses before I can finish the reified constructors for concrete java classes PR
<headius[m]> byteit101: ah right it is on my todo list for tomorrow then
<byteit101[m]> No worries if you are busy, I just realized it's been over a month :-)
<byteit101[m]> Figured I'd give a ping
<headius[m]> byteit101: so last two comments on the PR?
<headius[m]> the list of questions and the diagram
<byteit101[m]> Yes, Dec 12th, and 18th posts
michael_mbp has quit [Ping timeout: 240 seconds]
michael_mbp has joined #jruby
ur5us_ has joined #jruby
ur5us has quit [Remote host closed the connection]
sagax has quit [Remote host closed the connection]