chli has joined #jruby
chli has quit [Ping timeout: 240 seconds]
GitHub74 has joined #jruby
<GitHub74> [jcodings] lopex pushed 1 new commit to master: https://git.io/vbwzE
<GitHub74> jcodings/master fdcca4b Marcin Mielzynski: add Windows 1253/1254/1257 encodings
GitHub74 has left #jruby [#jruby]
GitHub84 has joined #jruby
<GitHub84> [jcodings] lopex pushed 1 new commit to master: https://git.io/vbwzz
<GitHub84> jcodings/master 1cb4a0e Marcin Mielzynski: update generation script
GitHub84 has left #jruby [#jruby]
GitHub142 has joined #jruby
<GitHub142> [jcodings] lopex pushed 1 new commit to master: https://git.io/vbwzo
<GitHub142> jcodings/master 11e3f1a Marcin Mielzynski: transcoder template no longer needed
GitHub142 has left #jruby [#jruby]
drbobbeaty has joined #jruby
Puffball has quit [Ping timeout: 248 seconds]
Puffball has joined #jruby
dave____ has joined #jruby
dave____ has quit [Ping timeout: 265 seconds]
GitHub140 has joined #jruby
<GitHub140> jcodings/master 518b28f Marcin Mielzynski: update transcoder configuration list
<GitHub140> [jcodings] lopex pushed 1 new commit to master: https://git.io/vbwVS
GitHub140 has left #jruby [#jruby]
GitHub169 has joined #jruby
<GitHub169> [jcodings] lopex pushed 1 new commit to master: https://git.io/vbwV7
<GitHub169> jcodings/master a45e233 Marcin Mielzynski: update generation script
GitHub169 has left #jruby [#jruby]
GitHub199 has joined #jruby
<GitHub199> [jcodings] lopex pushed 1 new commit to master: https://git.io/vbwVj
GitHub199 has left #jruby [#jruby]
<GitHub199> jcodings/master a94c7b0 Marcin Mielzynski: remove old scripts
GitHub16 has joined #jruby
<GitHub16> jcodings/master d721b1f Marcin Mielzynski: add Encoding.caseMap stub
<GitHub16> [jcodings] lopex pushed 1 new commit to master: https://git.io/vbwoL
GitHub16 has left #jruby [#jruby]
rrutkowski has quit [Ping timeout: 248 seconds]
chli has joined #jruby
rrutkowski has joined #jruby
rrutkowski has quit [Quit: rrutkowski]
chli has quit [Ping timeout: 272 seconds]
dave____ has joined #jruby
dave____ has quit [Ping timeout: 272 seconds]
GitHub92 has joined #jruby
<GitHub92> [jcodings] lopex pushed 1 new commit to master: https://git.io/vbwFB
<GitHub92> jcodings/master 9cb42f5 Marcin Mielzynski: Basic properties should have low indices (ctypes)
GitHub92 has left #jruby [#jruby]
shellac has joined #jruby
olle has joined #jruby
GitHub96 has joined #jruby
<GitHub96> jcodings/master 0af3c4d Marcin Mielzynski: add test for unicode property
<GitHub96> [jcodings] lopex pushed 1 new commit to master: https://git.io/vbwb5
GitHub96 has left #jruby [#jruby]
shellac has quit [Quit: Computer has gone to sleep.]
claudiuinberlin has joined #jruby
jeremyevans has quit [Ping timeout: 248 seconds]
jeremyevans has joined #jruby
shellac has joined #jruby
shellac has quit [Client Quit]
shellac has joined #jruby
vtunka has joined #jruby
olle has quit [Quit: olle]
shellac has quit [Quit: Computer has gone to sleep.]
vtunka has quit [Read error: Connection reset by peer]
vtunka has joined #jruby
shellac has joined #jruby
shellac has quit [Quit: Computer has gone to sleep.]
dave____ has joined #jruby
<reto_> sorry for the repost, but I think headius missed it yesterday :)
<reto_> headius / enebo: sorry, I didn't see your messages yesterday, I pushed the script to git@github.com:retoo/adventofcode-2017.git
<reto_> ruby advent13-2.rb < advent13-quiz-input # cruby takes about 12s overall
<reto_> /opt/jruby-9.1.15.0/bin/jruby --server advent13-2.rb < advent13-quiz-input # takes about 28s
<reto_> MRI: ruby 2.1.3p242 (2014-09-19 revision 47630) [x86_64-darwin13.0]
<reto_> I remember that JRuby outperformed MRI within a few seconds after startup
dave____ has quit [Ping timeout: 240 seconds]
Antiarc has quit [Ping timeout: 250 seconds]
Antiarc has joined #jruby
clayton has quit [Ping timeout: 264 seconds]
dave____ has joined #jruby
dave____ has quit [Ping timeout: 265 seconds]
clayton has joined #jruby
Cu5tosLimen has joined #jruby
<Cu5tosLimen> hi
<Cu5tosLimen> I am having some issue with asciidoctorj
<Cu5tosLimen> for some reason some module it requires seems to just be missing
<Cu5tosLimen> but I'm not sure why
<Cu5tosLimen> can I somehow list all loaded ruby modules?
shellac has joined #jruby
olle has joined #jruby
<olle> if you want to poke around, debugging, perhaps $LOADED_FEATURES tells you something.
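A quick sketch of what olle is suggesting: $LOADED_FEATURES is a global array listing every file that has been required so far, so printing or filtering it shows whether a given extension ever made it into the runtime. The asciidoctor filter below is only an assumed example.

    # Dump everything required so far, then narrow it to the library in question.
    # The /asciidoctor/ pattern is just an illustrative guess.
    puts $LOADED_FEATURES.size
    puts $LOADED_FEATURES.grep(/asciidoctor/)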
vtunka has quit [Quit: vtunka]
dave____ has joined #jruby
bbrowning is now known as bbrowning_away
dave____ has quit [Ping timeout: 240 seconds]
shellac has quit [Quit: Computer has gone to sleep.]
<Cu5tosLimen> olle, thanks
<headius> reto_: oops, indeed I did...but it was still in my buffer
<headius> looking now
<headius> Cu5tosLimen: I saw some of your message...maybe there's an asciidoctor gitter?
<Cu5tosLimen> yeah there is
<Cu5tosLimen> but they won't debug it for me :)
<headius> ok, well we will try to help
<Cu5tosLimen> anyway is ok - I'm still digging
<Cu5tosLimen> thanks
<Cu5tosLimen> I did print $LOADED_FEATURES and that is not the issue
<headius> I saw in #ruby you said it works the first time?
<Cu5tosLimen> headius, yeah it is rather complicated - I'm using asciidoctorj in gradle via the asciidoctor-gradle-plugin - and gradle runs as a daemon - so the first time I run it after the daemon starts it finds the asciidoctor-diagram extensions - the next time it does not
<Cu5tosLimen> I think its related to how the caching is done for asciidoctor objects
<Cu5tosLimen> but I'm not sure
<Cu5tosLimen> anyway I think I'm getting closer :)
<headius> ok
<headius> I really know nothing about asciidoctor internals
<Cu5tosLimen> yeah sure - I'm just a complete noob with Ruby and JRuby, so I'm just here to check whether I'm missing something
<headius> ok
<Cu5tosLimen> I think I got it actually
<Cu5tosLimen> thanks for help
<Cu5tosLimen> will take it up with asciidoctor people further
<Cu5tosLimen> they are wiping out the extensions for some reason after running asciidoctor
<Cu5tosLimen> actually explicitly
<reto_> headius: no worries, it isn't urgent, just kinda confusing
<reto_> jruby was always faster :(
<headius> yeah it looks like it's the break in the each based on a quick profile
<headius> that appears to be where the JVM spends a lot of time
<reto_> hmmmm
<reto_> but why?? )
<headius> we must use an exception to unroll the block and the call to each, and that's more costly than simply returning
<headius> this comes up from time to time but usually break isn't as hot as it appears to be here
<headius> note that a break from a non-block loop (while, etc) is just a branch and doesn't use an exception
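A minimal sketch (not reto_'s actual script) of the distinction headius is drawing: a break out of a block passed to #each has to unwind through the #each call, which JRuby implements with an exception, while a break inside a plain while loop is just a branch.

    # break out of a block: non-local, must unroll the block and the call to #each
    def find_with_each(items, target)
      found = nil
      items.each do |item|
        if item == target
          found = item
          break
        end
      end
      found
    end

    # break out of a while loop: a simple branch, no exception machinery involved
    def find_with_while(items, target)
      found = nil
      i = 0
      while i < items.size
        if items[i] == target
          found = items[i]
          break
        end
        i += 1
      end
      found
    end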
<reto_> mhm
<headius> a quick change of the script to do that makes it almost 10x faster on jruby
<headius> but that's not ideal
<reto_> I tried it with a method call, but that's not better
<reto_> (a method call that returns instead of iterating with break)
<headius> you can use find
<headius> that helps a lot
<headius> almost as fast as no block at all
<headius> all_slots.find ..... and then "next true" instead of break
<headius> perhaps there's a cleaner way to return true from the block, but that works, and we're about 2x MRI then
<headius> I mean 2x fast
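A self-contained sketch (stand-in data and predicate, not the advent-of-code script) of the Enumerable#find rewrite headius describes: the block signals a match with "next true", so nothing has to break out of the iteration and no exception is needed.

    all_slots = 1..200_000                      # stand-in data

    hit = all_slots.find do |slot|
      next true if slot % 99_991 == 0           # stand-in predicate
      false
    end
    # hit => 99991, found without any non-local exit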
<reto_> :)
<reto_> :)
<headius> this will be improved once we finally land some inlining...the each and the break will all be in the same method and no exception needed
<headius> if people would stop reporting bugs :-D
<reto_> down to 814ms :) nice
<reto_> find/any both work
shellac has joined #jruby
<headius> ah yes
<reto_> okay, thanks for the explanation
<headius> thanks for the script!
<reto_> no worries, advent of code, quite fun little puzzles
<enebo> oh hmm
<enebo> this is a proc or a lambda?
<headius> just an each with a block
<headius> I'll post the script
<enebo> headius: reto_: I believe that alongside inlining (which will help a lot in its own right), the issue is that we emit all the logic in blocks for both procs and lambdas, and that creates quite a bit of overhead in this non-local stuff
<enebo> headius: I have it
<headius> that's the modified version
<enebo> I guess I could look at it though
<headius> the JVM sampled profile is 96% ExceptionBlob
olle has quit [Quit: olle]
<enebo> headius: ok
<headius> which is an opaque catch-all they use for JVM-level work unrolling
<headius> this relates to my tweet :-)
<headius> I looked it up and found posts from years ago where I asked John Rose to help
<headius> inlining more can eliminate it
<enebo> I am just wondering how much this is also our catchall IR for blocks
<enebo> My mind never keeps straight which one is simpler for non-local
<headius> if we have too many layers of exception handling it could be adding overhead, maybe
<enebo> but we emit an exc_receive in all blocks whether they are needed or not
<enebo> it also is why return from blocks is really expensive still
<enebo> since it has to wind through this extra section
<enebo> It probably complicates the bytecode generated
shellac has quit [Quit: Computer has gone to sleep.]
<headius> maybe
<headius> let's see
<headius> well I can see I have never looked at reducing bytecode size for binding pop
<headius> perhaps I'll spend today doing some JIT cleanup
<enebo> yeah
<enebo> we have this extra GEB crap in case it is a lambda
<lopex> dark matter ?
<enebo> lopex: known dark matter
<enebo> We have plans to live replace proper version based on which type it is
<enebo> but this is somewhat complicated
<enebo> as the same scope can be both
<headius> huh, I've gotten so used to ir.print output
<enebo> and the lifecycle of changing from one form to the other is complicated by JITing blocks
<headius> I need to fix that to not start printing until after boot
<enebo> That output just prints out a single version which in most my cases is not adequate
<enebo> in this case it would have been though :)
<enebo> I am not saying this is most of the problem but it is part of it for sure
<enebo> the proc { return 2 } sort of code is slower
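For context, a minimal example (not taken from the log) of the non-local behavior enebo is referring to: return inside a proc returns from the enclosing method, which needs the same kind of unwinding as a block break, while return inside a lambda only exits the lambda.

    def proc_return
      p = proc { return 2 }    # non-local: unwinds out of proc_return
      p.call
      :never_reached
    end

    def lambda_return
      l = lambda { return 2 }  # local: only exits the lambda itself
      l.call
      :reached
    end

    proc_return    # => 2
    lambda_return  # => :reached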
<headius> well, it can print any version
<enebo> I just meant as written it does not write all passes out
<headius> well, if you like the output we can hook it up those places too...I just didn't change them
<enebo> A lot of debugging is tracing the changes which reminds me I need to get IGV working a lot better
<headius> anyway I'm looking at the full version now
<enebo> yeah I don't like some of your output although it looks much better visually than this
<enebo> this is last pass output
<enebo> but in this case it does not matter
<headius> what code was that?
<enebo> advent-13-2.rb
<enebo> original reto_ code
<headius> ok
<enebo> fwiw even the find version has this extra crud
<enebo> so perhaps it is just the complication of a throw actually happening
<enebo> oh haha weird...find does next as well
<enebo> ah I see break vs next
<headius> huh, I wonder why my version has the CFG in this order
<enebo> It may be this is catching the break in an interesting way in case it is a lambda but perhaps next is not
<enebo> you alpha sort
<headius> next doesn't use an exception
<headius> I don't sort at all as far as I remember
<headius> and it isn't that order anyway
<enebo> yeah true
<enebo> here yours may be this order in an earlier pass
<headius> my order matches logical order it seems...the other output seems to have prepare arg at the end?
<headius> yeah that's a good point
<headius> jit passes
<chrisseaton> headius: do you have any plans to experiment with the SVM now it's open source? I wonder how far you could get in compiling JRuby.
<headius> chrisseaton: certainly could, but the lack of reflection is still a problem, no?
<chrisseaton> headius: it does support reflection, if you white-list classes you want to reflect on. So, yes, no Java interop, but more basic functionality might work.
<headius> ah, well that might be enough to get it to run
<headius> jruby compiled for svm that is
<chrisseaton> Your launcher would be another option to experiment with - it's in C++, isn't it? Could write it in Java.
<enebo> headius: This is on my branch too
<headius> chrisseaton: is Windows supported?
<headius> I just saw the announcement in passing
<chrisseaton> No, but it's something we want to do when we have the resources.
<headius> yeah I suppose Swing and JavaFX are out of reach but Windows support would still be very interesting
<enebo> chrisseaton: can svm compile with JNI extensions?
<chrisseaton> Yes, that's a plugin, which shows you how well designed the system is.
<chrisseaton> We used to substitute JNR calls to our own FFI system though.
<enebo> chrisseaton: yeah I was thinking that Java process launching is probably not great but with jnr-posix we can do it
<GitHub97> [jruby] NC-phuh opened issue #4896: Jruby + Rspec scoping issues on Classes with Prepended Modules https://git.io/vbrMI
<chrisseaton> I have an aspiration to support fork in the SVM as well.
<chrisseaton> We already have good support for JVM process launching, because our executables like ruby are native by default, and have a switch to run on the JVM instead.
<enebo> chrisseaton: does svm generate one fat binary or does it dynamically link in anything?
<chrisseaton> One fat binary, that's the magic of it. It links to libc or whatever, but certainly not libjvm or a libsvm or anything like that.
<chrisseaton> You can also generate a .so to link into other applications.
<chrisseaton> Half the people I explain it to think it's like a self-extracting zip, where you just have a JVM and append the JAR files - it's not like that - a hello-world app will just be a few KB (GC and things take up room).
<enebo> chrisseaton: and it is GPL
<chrisseaton> GPL 2 + classpath
<enebo> chrisseaton: I know GPL is source distribution license but I am wondering if there are any concerns there?
<enebo> chrisseaton: so if I pull in Java and it makes it into this fat binary, it will be interpreted just like the Classpath exception from a usage standpoint
<chrisseaton> I can't answer specific legal questions, sorry.
<chrisseaton> In case I get it wrong :(
<enebo> chrisseaton: yeah I understand
<chrisseaton> But it does specifically have the Classpath exception, and it is designed to produce .so files.
<enebo> chrisseaton: so svm can emit each lib on the classpath as a .so if so desired?
<chrisseaton> It's closed-world - you need to emit a single .so for all your Java code.
<chrisseaton> Because the GC etc needs to know about it all at once at runtime.
<chrisseaton> I think you can create separate isolated VMs though, through an API.
<chrisseaton> So lots of JRuby instances in a single process (doesn't work well with JNR of course).
<enebo> chrisseaton: ok I was just confused by your .so files plural comment
<chrisseaton> Well one per compilation.
<headius> it will be interesting to play with
<enebo> chrisseaton: heh sorry, one more question: you say one .so per compilation, but doesn't it also make an exe?
<headius> I suppose invokedynamic support is probably a little weak? :-)
<chrisseaton> I don't think so - we use lambdas and things.
<chrisseaton> You probably won't get great machine code from it, as it's not a JIT.
<headius> hmm it could be lambda support without full indy, but that's good to know
<headius> android added lambdas before the rest of invokedynamic
<chrisseaton> Compilation is via Graal, and Graal certainly runs all of indy, so I'm pretty sure it's complete support.
<headius> I'm just trying to reconcile that with the requirement that you register what you're reflecting
<chrisseaton> Sorry I'm not sure I know enough about indy - but I see what you mean - method handles are found by reflection?
<headius> seems like I can't really be doing indy on the fly if reflection wouldn't work
dave____ has joined #jruby
<headius> it's a reflection-like mechanism at least...I don't believe it uses reflection to do it
<chrisseaton> I think your bootstrap method is run as normal... and if you choose to do reflection in there to produce a method handle then that's more limited, but the fundamental invoke dynamic mechanism works and calls your bootstrap method correctly
<chrisseaton> To give you an idea of what it can do - we can compile javac - so that shows you it's not all Truffle specific
<chrisseaton> And we compile tons of your JRuby code successfully.
<headius> ah, well that could definitely be that it only supports constant pool method handles
<headius> not dynamically acquired ones
<headius> that it can statically compile, of course
<headius> it's not really a question about the compiler as much as the runtime
<headius> I'd love to be proven wrong but if you can't do arbitrary reflection it seems logical you wouldn't be able to do arbitrary method handles
dave____ has quit [Ping timeout: 248 seconds]
<headius> it's not a big deal anyway :-)
<chrisseaton> Restricted to constant handles sounds likely - that would support lambdas, right?
<chrisseaton> But you have a non-indy mode anyway, so you could start there.
<headius> right, it's possible to avoid it
<headius> but indy also only really comes into play with our JIT, and we can't use our JIT inside SVM, so...
<headius> there are just a few places we use it outside of JIT and I'd need to rework those
<headius> non-indy mode mostly applies to jitted code
<chrisseaton> For short running command line apps, not JITing might be fine. It could get you the ms startup time we have with TruffleRuby.
<headius> right
<headius> whenever we look at AOT options it's always using our interp
<headius> fun holiday project perhaps
claudiuinberlin has quit [Quit: Textual IRC Client: www.textualapp.com]
<headius> enebo: we need some superinstructions for binding state
<headius> I can't really collapse these because they're three or four separate instrs
<headius> or we need to go the rest of the way to have frame and scope be locals that can be folded away if not needed
<enebo> headius: the latter was what I wanted
<headius> right
<headius> me too
<enebo> headius: although I wanted a lot more. I wanted all frame fields to be locals too
<headius> for sure
<enebo> It probably means we also need to know what can use them, and when. Operation was largely designed for this but it might be too coarse
<headius> but in the absence of that this is quite a bit more bytecode than it needs to be
<enebo> headius: as with all things we should see what benefit we get atm from them being fine-grained and balance if we coarsen whether that hurts our path forward or not
<enebo> headius: My main side desire was redoing constants altogether since our framework cannot really help much with constant lookup and it bloats out our graph
<headius> that would be nice
<enebo> I think the original desire was that we could coarsen n same const lookups to one lookup
<enebo> but calls are everywhere which makes the coarsening very unlikely
<enebo> without massive inlining
<headius> yeah
<headius> lots that can change
<headius> yeah look at AddCallProtocolInstructions
<headius> if we coarsened, it wouldn't be hard to do
<enebo> this basically was an attempt to move to only setting up what we needed
<headius> right
<headius> hmm
<enebo> This does help us in scenarios but it could be a bigger instr perhaps if they are just n instrs added contiguously
<headius> yeah
<headius> I may poke at this a bit since I can't do much on the JIT end
<enebo> I think another way we could accomplish this would be to make Instr know what it is from an Operation standpoint
<enebo> Then you could still walk and know what is done with n instrs
<enebo> but I think we should talk to subbu about anything too
<headius> frame fields really need to become operands with [] et al forcing them to exist
<enebo> headius: I remember we also had planned on n prologue entries based on arity
* subbu peeks
<headius> then we'd be able to actually know which fields are needed and start to reduce artificial frame to nothing
<enebo> but I am going to lunch very soonish so I won't be able to contribute
<headius> yeah n prologue would make it easier to expand optional args to avoid boxing
<headius> related
<enebo> headius: I agree, but then we also need that operand to be passed into all instrs which might use it
<headius> subbu: just looking at jit output and realizing we never moved on with binding management improvements
<enebo> otherwise we cannot know we can remove them
<subbu> do i need to read anything ... or are you mostly discussing something at this point?
<headius> enebo: right
<headius> we do need that for sure
<headius> but we basically have that already because we walk all instrs asking if they need frame
<enebo> but for some field that will be basically impossible without passing them into all calls
<enebo> at that point we should just accept them for what they are
<enebo> since calls are too prevalent
<enebo> anyways I need to run
<subbu> ah, right .. yes, it stalled at some point when we moved to getting everything ready for shift to jruby 9k .. and it never got picked back up again .. since i got really sucked into wmf work.
<headius> subbu: I might try to play with this a bit
<headius> get the AddProtocol pass to be aware of frame fields on a finer grain, so we can init and clear only the fields actually used
<subbu> sounds good. i'll get involved on demand as you ask me to.
<headius> thank you!
claudiuinberlin has joined #jruby
matthaus has joined #jruby
matthaus is now known as haus
<headius> EnumSet is surprisingly cumbersome to work with
<headius> I guess that's why I never use it
<enebo> headius: the design of IR is a single result per instr
<enebo> headius: so that might be one issue with making n things 1 thing
<headius> well the way it would work is that if a method needs access to a frame, we have a get from the frame and then it's an operand to the call
<headius> so if no methods need a frame field, no fields need the frame, and the frame doesn't get pushed
<headius> and it opens up the possibility of separate stacks for separate fields, alternate methods of passing them, etc
<headius> like if we come up with a non-frame equivalent for backref or something
<headius> I'm keen to see if we can get all heap bindings to be free using a combination of pooling and smarter metadata
<enebo> so to start you just want a %frame
<headius> right
<enebo> but then you also plan on only using method name to know?
<enebo> how would IR know if a call needs frame or not?
<headius> this is just the static mechanism
<headius> this call might need this frame
<enebo> so what happens if a method which is not static does need the frame?
<enebo> or do we fall into the no one will alias binding sort of discussion
<headius> well, exactly what happens now, there wouldn't be a frame
<enebo> yeah ok
<headius> the value of this initially won't be eliminating frame more often because it's still mostly the same mechanism
<enebo> I don't actually love the notion of %frame by itself, but I am not against a compiler pass determining that the frame is not used, which would be trivial
<headius> initially the value will be that we can do pre/post logic in a single shot, just saying "preBinding(frame fields to prep)" and "postBinding(frame fields to clear)"
<enebo> but from an evolutionary standpoint %backref, etc... can still get moved out of frame
<headius> yeah the individual operand thing isn't part of the initial work I'm doing
<enebo> I am not sure I love the complication this will cause to callbase/instr/etc...
<headius> that is the long term vision I'd see for the existing opaque frame
<headius> complication?
<enebo> well passing it in
<headius> what complication is that
<enebo> just to the java code
<headius> passing what in?
<enebo> %frame as an operand
<headius> not %frame
<headius> and maybe not even an operand to the call, exactly
<enebo> ok now I am back to not understanding
<headius> %frame = prep_frame somewhere
<headius> %v_99 = retrieve_frame_field(%frame, "backref")
<enebo> ok
<headius> and then the call needs to be dependent on those additional operands somehow but it doesn't exactly evaluate them
<headius> I mean, it could...if we had logic for special way to pass backref or something
<headius> but that's way beyond this
<enebo> so that is fine %frame is some opaque thing with instrs to access
<enebo> if the things which access frame are not used then we can eliminate them
<headius> right
<enebo> and maybe even frame
<headius> this is all just so that frame field accesses can fold away and maybe frame itself can fold away
<enebo> but my original complication comment still stands in the sense we need to make calls know about these extra fields/operands
<headius> or rather that's the main value I see now
<headius> lesser value is that we can see in IR exactly what fields go where and start to work on alternative storage for each one individually
<enebo> yeah I think this has always been our goal designwise in the long term
<headius> like a separate backref stack or something
<enebo> the opaque prep_frame is not necessarily needed
<headius> well, it is if we want it to go away
<headius> that's cryptic
<enebo> well we need to have enough knowledge to know a frame is not needed
<headius> I mean the call depends on the result of the field get, and the field get needs the frame, and the frame needs the prep
<enebo> but a frame is not a ruby semantic detail in the same way %backref is
<headius> so if the call goes away everything just falls out
<enebo> so if we know at an impl level the frame is 3 fields and those 3 fields do not exist, we can omit a frame
<headius> you're right, prep_frame isn't needed if we ignore the fact that we currently store all frame fields in the same structure
<headius> prep frame to me is basically push bp
<enebo> yeah I think IR is a compromise of being useful for impl but ultimately would just represent semantics
<headius> or whatever
<enebo> frame is an impl detail but what is in the frame is semantics
<headius> right ok
<headius> I agree
<headius> so yes...the semantics is that call X needs frame field Y
<enebo> ok but with all that said I am not saying we should not use prep_frame but that I think it is more of a convenience for us
<headius> the impl then would know how to efficiently impl that frame
<enebo> so long as we write in terms of those special fields for everything we can maybe eventually kill notion of frame being explicit
<headius> yeah it is, with this logic it might be prep_frame(list of fields to prep)
<enebo> yeah
<headius> something like that
<enebo> I do think JIT being visitor based is causing some issues for knowing stuff
<headius> I mean the vast majority of the time a call only needs a single field, and usually it's backref
<headius> so that's the 99% case really
<enebo> like, figuring out that three instrs are not present so you can eliminate a frame means some extra state juggling in the visitor
<headius> that means most of these frame pushes really *only* need to prep/clear backref and they're doing all half dozen fields
<enebo> but with prep_frame the pass will do it
<headius> right that's it
<headius> and this looks forward to inlining...we can leave protocol instructions in and they just sort themselves out
<headius> once we don't need the frame or scope to pass values then those reads go away
<headius> and then frames go away
<enebo> yeah I don't remember whether frame fields survive inlining or not...I think they do
<enebo> only scope variables go away
<enebo> but this was another reason we wanted explicit fields too
<enebo> then %stack_name = "my_inlined_method" would just exist
<headius> right
<enebo> it would be serialized as an instr and not be a frame field at that point
<headius> so what I'm doing right now is working forward from method binding, which is where these lists of frame-aware methods get populated
<enebo> but that presupposes anything querying stack can get access to that
<headius> now it will also track which fields a given name reads or writes
<headius> so then when we get to flag calculation I can split "NEEDS_FRAME" into "NEEDS_BACKREF", etc
<enebo> yeah definitely
<headius> more better protocol logic
<enebo> definitely mark prep_frame if you make that as a convenience instr which is destined to be removed at some later date
<enebo> mark == some commentary in the instr
<headius> well we already have it now
<headius> PushMethodFrameInstr
<headius> it just doesn't produce or consume anything
<headius> MethodIndex.addMethodReadFields(2, "match","match","sub","sub","sub!","sub!","gsub","gsub","gsub!","gsub!","index","index","rindex","rindex","[]","slice","[]","slice","[]=","[]=","slice!","slice!","scan","partition","partition","rpartition");
<headius> looks like it's working
<headius> 2 is the bit for BACKREF
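As a reminder of why those particular String methods are on the backref list, here is a small illustrative example (not from the log): when called with a Regexp they set $~ (and $1, $` and friends) for their caller, and that state lives in the caller's frame.

    "hello" =~ /l(l)o/
    p $~        # => #<MatchData "llo" 1:"l">
    p $1        # => "l"

    "hello".sub(/l+/, "L")   # String#sub also updates the caller's $~
    p $~[0]     # => "ll"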
<headius> you know, as a bytecode compiler guy, it's really annoying that varargs are just compiler magic
<headius> I'm going to write a utility some day to replace all varargs calls with indy
<enebo> headius: so yeah we don't emit it until ACP and only if the scope gets marked
<enebo> headius: in this sense each instr has operation characteristics and this is just a lazy collection of having to walk all that again
<enebo> headius: this double sourcing of info has also bitten us on the ass but an n instr walk every time you want to know is a sucky thing
<enebo> but since it is only add as part of a pass then it is sort of the opposite of having DCE remove it
<headius> yeah
<headius> but we could drop a pass
<enebo> interpreter does need it always so we only do this if we compile
<headius> just always put the instrs in and let it fall off in DCE
<enebo> and compiler passes are cheap in the JIT sense since they are off-thread
<headius> DCE only runs in full and JIT and simple and full both need frame and scope anyway
<enebo> but interp should not pay the penalty if it need not
<headius> er, full and jit run dce, simple and full need frame always
<enebo> interp is strangely proportional to number of instrs visited
<enebo> yeah
<headius> chrisseaton: hey, bigger issue with svm: gems with JRuby exts load them reflectively, from jars
<headius> not to mention most JRuby apps use Java libs from jars at some point even excluding exts
<enebo> hmmm actually how does svm work with native C exts in gems?
<headius> I guess they'd have to compile them in, since they're also sulong'ed into the VM
<enebo> I guess it has compiled graal into it and sulong?
<enebo> then it can load dynamically?
<headius> yeah I guess that's not clear to me, how much more code generation it can do at runtime
<headius> if any?
<enebo> I was under impression graal was compiled into svm image
<headius> I figured if something as simple as reflection has to essentially be made static, it wouldn't be loading anything dynamically either
<enebo> yeah now I am confused as well :)
<enebo> It has to load Ruby dynamically so I think it must handle all the Ruby things
<enebo> which needs graal
<enebo> nirvdrum: ^
<enebo> as I ping the guy with horrible irc notification abilities :P
<headius> chrisseaton: yeah according to svm limitations doc, no invokedynamic and no method handles
<headius> they compile lambda like I figured
bbrowning_away is now known as bbrowning
chli has joined #jruby
chli has left #jruby [#jruby]
<chrisseaton> Graal can JIT in SVM as long as you compile the IR graphs into the image, which you normally just do for Truffle-reachable methods
<headius> chrisseaton: ahh ok thanks
<chrisseaton> This is a case of using Graal as a library rather than via normal compilation from thresholds
<headius> right
<headius> I understand the full JIT pipeline would be a lot heavier
<chrisseaton> So this cuts out bytecode verification and parsing into a graph
<chrisseaton> You could ahead of time include JARs you use in the stdlib (or even common Rails configurations) and substitute reflective loading with direct references
<headius> yeah ok
bbrowning is now known as bbrowning_away
sotrhraven has joined #jruby
sotrhraven has quit [Quit: sotrhraven]
<lopex> what's special about method reachability in truffle ?
<rtyler> the glitter, definitely
Puffball has quit [Remote host closed the connection]
claudiuinberlin has quit [Quit: Textual IRC Client: www.textualapp.com]
dave__ has joined #jruby
dave__ has quit [Ping timeout: 265 seconds]
shellac has joined #jruby
shellac has quit [Client Quit]