<headius[m]> well that failed surprisingly
ur5us has quit [Ping timeout: 260 seconds]
ur5us has joined #jruby
_whitelogger has joined #jruby
<kares[m]> headius fileutils error? been mostly noticing marshal data 'too short' failures ... not sure what happened, maybe something about RGs.org itself that got the mavengem bridge broken ;(
<kares[m]> too much ruby build related stuff, just been tuning jossl's build and I am still not sure why on Java 11 it's making maven behave differently
<kares[m]> did you notice there's a PR from David to fix parts of the build issues (at the ruby-maven repo)
ur5us has quit [Ping timeout: 256 seconds]
drbobbeaty has quit [Ping timeout: 246 seconds]
Liothen has quit [Ping timeout: 244 seconds]
havenwood has quit [Quit: ZNC 1.8.1 - https://znc.in]
Liothen has joined #jruby
headius[m] has quit [Ping timeout: 260 seconds]
havenwood has joined #jruby
havenwood has joined #jruby
havenwood has quit [Changing host]
headius[m] has joined #jruby
havenwood_ has joined #jruby
jean[m]1 has quit [Ping timeout: 260 seconds]
nieve[m] has quit [Ping timeout: 260 seconds]
nieve[m] has joined #jruby
BlaneDabneyGitte has quit [Ping timeout: 260 seconds]
xardion[m] has quit [Ping timeout: 260 seconds]
JasonRogers[m] has quit [Ping timeout: 260 seconds]
daveg[m] has quit [Ping timeout: 260 seconds]
fzakaria[m] has quit [Ping timeout: 260 seconds]
BlaneDabneyGitte has joined #jruby
daveg[m] has joined #jruby
xardion[m] has joined #jruby
JasonRogers[m] has joined #jruby
fzakaria[m] has joined #jruby
jean[m]1 has joined #jruby
havenwood has quit [Ping timeout: 256 seconds]
havenwood_ is now known as havenwood
havenwood has joined #jruby
havenwood has quit [Changing host]
nirvdrum has joined #jruby
kphns has joined #jruby
<kphns> Hello! I am encountering an issue, but I'm struggling with how to reproduce it in a smaller fashion so I can file a bug report on it. I'd like some advice, if anyone has time. Here goes:
<kphns> I have instances that get extended with module Y. module Y has several methods, among them :a and :b. :b calls :a. Both methods are uniquely named so that a find/grep through my project and through the entire .rvm directory find no other instances of the methods. At first, it works as I expect: calling both :a and :b on instances extended with module Y works. At some point during execution, though, it changes so that
<kphns> calling :b on those instances raises a NoMethodError... and a "did you mean" suggestion of :a.
<havenwood> kphns: Just to clarify, you have an `extend Y` in class X?
<kphns> no. I have instances x of class D, E, F, etc that are x.extend(Y)
<kphns> so that then I can x.a and x.b.
<havenwood> kphns: Gotcha.
<enebo[m]> kphns: if it changes after some amount of time it sounds like a bug when it fully compiles/JITs
<kphns> that was my suspicion -- that something saw that :b wasn't being called from inside the module and didn't keep it around.
<enebo[m]> kphns: This will affect perf but you can try JRUBY_OPTS="-Xjit.threshold=-1 --dev" to see if it goes away
<enebo[m]> assuming it always happens
<enebo[m]> That will make it only use our startup interpreter
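The flags enebo mentions can be combined like this (the app filename is hypothetical; this is a config sketch, not from the log). `-Xjit.threshold=-1` with `--dev` keeps everything in the startup interpreter, while `-Xjit.background=false`, mentioned later in the conversation, keeps the JIT but stops it compiling in a background thread:

```shell
# Disable the JIT entirely: only the startup interpreter runs.
# Perf will drop, but it cleanly isolates JIT-related bugs.
JRUBY_OPTS="-Xjit.threshold=-1 --dev" jruby myapp.rb

# Alternative: keep the JIT but run it synchronously, not in a
# background thread, which narrows down concurrency-related failures.
JRUBY_OPTS="-Xjit.background=false" jruby myapp.rb
```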
<enebo[m]> kphns: but feel free to open an issue and we are more likely to fix it quickly if you can reduce it to a bite-sized repro
<kphns> it does always happen, eventually, but of course not in my dev/test environment because they're not exercised nearly as much
<kphns> and get restarted frequently
<kphns> that's what I am having trouble with -- I would have filed a bug already if I had any idea of how to make a small demo to file a useful bug
<enebo[m]> instances getting extended I would say is not too common in Ruby. Perhaps it will be easy to repro.
<enebo[m]> ok yeah that tends to be an issue :)
<enebo[m]> kphns: please open with any info you can supply
<kphns> I will spend some time trying to write a small use case and fiddling with jit thresholds to see if I can make it happen, and then file something even if I can't.
<kphns> thanks!
<kphns> (oh, and yes, module Y is not used in any other way)
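A minimal sketch of the structure kphns describes (module, class, and method names are hypothetical): instances of otherwise unrelated classes are extended with a module whose :b calls :a. Under a correctly behaving runtime both calls succeed; the reported bug is that :b later starts raising NoMethodError under JRuby's JIT.

```ruby
# Module mixed into individual instances, not into any class body.
module Y
  def a
    "a called"
  end

  def b
    a   # :b delegates to :a on the same extended instance
  end
end

# Unrelated classes whose instances get extended at runtime.
class D; end
class E; end

x = D.new
x.extend(Y)   # per-instance extension: only x gains :a and :b

puts x.a                      # "a called"
puts x.b                      # "a called" -- the call that later breaks
puts E.new.respond_to?(:b)    # false: un-extended instances never had :b
```

Running this under `JRUBY_OPTS="-Xjit.threshold=0"` in a tight loop would be one way to try forcing the methods through the JIT for a repro.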
<headius[m]> kares: I did see the PR, and I think if we're going to keep using these plugins we need to start taking them over ourselves
<headius[m]> not that I'm excited to take over maven plugins but at least the ones we need for the build should be maintained
<headius[m]> enebo: did you see the DynamicScope6 bug report?
<headius[m]> I'm going to get this enumerable size thing passing and then look at recent bugs
<headius[m]> kares: the fileutils thing I was referring to seems to be happening to all master builds, randomly during the build portion they will error with a file not found - fileutils during one of the Ruby Maven plugin steps
<headius[m]> so around half the builds started failing for no obvious reason
<headius[m]> kphns: if you can at least sort out the structure of those classes we might be able to intuit some possible cause... thanks for working with us, it definitely does sound like a bug
<enebo[m]> headius: I have not
<headius[m]> kalenp said it is affecting them but turning off background jit works
<headius[m]> so seems like the concurrency fixes for IR did not work or made something worse
<enebo[m]> headius: oh yay
<headius[m]> yes
_whitelogger has joined #jruby
<headius[m]> enebo: this issue from kphns could be a scope or frame issue from concurrent jit too, like it starts calling against the wrong type
<headius[m]> I'm going to look into this and the fixes we made for .12
<lopex> numbers?
<headius[m]> not good ones!
<lopex> why
<headius[m]> half-fixes for concurrency issues always go so well
<headius[m]> I blame myself
<lopex> fixing concurrency usually means worse perf though
<headius[m]> in this case it's lateral... the concurrency problem is in jitting or optimizing IR at the same time we're running it without properly isolating those data structures
<lopex> Ir or runtime ?
<headius[m]> only thing fixing it would slow down is optimizing
<lopex> so IR..
<lopex> oh so is it like the compiler is doing its job while IR and counters are changing ?
<headius[m]> yeah specifically flags indicating whether the IR has been optimized are still shared
<headius[m]> so the bug that's coming up now is that we end up with unoptimized IR running against optimized flags and not finding the frames or scopes in place it needs
<lopex> on another thought, is there a case where compiling something twice is worth while not syncing correctly ?
<headius[m]> enebo: I'm looking at the fix I made for 9.2.12 and mostly it just punts
<headius[m]> b0a399039afddef3c0e52749725a42e89c00acf7
<headius[m]> basically it tried to work around the issue by always expecting startup IR to not have call protocol (and so it needs frame and scope pushed)
<headius[m]> it eliminated the errors reported against .11 for us and for kalenp's group
<headius[m]> but now they see scopes not aligning
<enebo[m]> startup IR cannot have those so that expectation is fine
<headius[m]> oh I remembered why I thought kphns issue could be related: if the unoptimized scope with no dynscope pushed ends up using a previous call's scope it will overwrite variables there
<headius[m]> so seeing the "only supports scopes with X variables" is the LUCKY case
<headius[m]> the unlucky case is that we get things nuking state in other calls
<enebo[m]> yeah if it has enough it just writes to the wrong scope
<headius[m]> which could easily explain why suddenly kphns doesn't see a method that should be there
<headius[m]> and sees one on a different type
<enebo[m]> maybe
<enebo[m]> It definitely could
<enebo[m]> I just don't know if it will
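A toy Ruby simulation of the failure mode headius describes above (this is NOT JRuby internals, just an illustration of the two outcomes): when a frame/scope-less call reuses another call's dynamic scope, an out-of-range variable index fails loudly with the reported size error, while an in-range index silently clobbers the other call's state.

```ruby
# Hypothetical stand-in for a dynamic scope with a fixed variable slot count.
class ToyDynamicScope
  def initialize(size)
    @vars = Array.new(size)
  end

  def set(i, v)
    raise "only supports scopes with #{@vars.size} variables" if i >= @vars.size
    @vars[i] = v
  end

  def get(i)
    @vars[i]
  end
end

caller_scope = ToyDynamicScope.new(2)
caller_scope.set(0, :important)

# The "lucky" case: the reusing call needs more slots than exist,
# so we fail loudly -- the error users actually reported.
begin
  caller_scope.set(5, :oops)
rescue => e
  puts e.message   # "only supports scopes with 2 variables"
end

# The unlucky case: the index fits, so the wrong call silently
# overwrites the caller's variable and corrupts unrelated state.
caller_scope.set(0, :clobbered)
puts caller_scope.get(0)   # :clobbered, not :important
```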
<enebo[m]> headius: I am a bit confused about this commit and your statement above
<enebo[m]> startup interpreted code cannot have ACP
<headius[m]> correct
<headius[m]> so my change makes that explicit and only does call protocol if we have a fullInterpreterContext and it says it has protocol
<headius[m]> before it was blindly asking interpreterContext for protocol
<headius[m]> which might be startup OR full
<enebo[m]> sure
<enebo[m]> So this explicit change just removed the error in some fashion but did not fix some underlying issue
<headius[m]> yes
<headius[m]> it just seemed to be enough for the reported cases
<headius[m]> I tried several ways to fix the underlying issue but they're super invasive
<enebo[m]> Which I guess is simply that we still have some data race where it thinks something has (or doesnt) have ACP when the opposite is true
<headius[m]> flags being shared is the core problem with most of this
<headius[m]> they are being read and altered concurrently regardless of what we do
<enebo[m]> well yeah and two threads compiling the same method as well I suppose
<headius[m]> or one thread compiling it, even if not in the background, while another one tries to execute it
<enebo[m]> it could also be a child and parent in flight at same time
<headius[m]> I'm looking now at whether the jit tasks could clone the scope before doing anything to it
<headius[m]> it's brute force but it would be less invasive than trying to fix the messy innards
<enebo[m]> well I think one problem is just that not all flags are for IRScope specifically
<headius[m]> I'm not even sure this will work because some other thread might mutate flags while we're cloning
<headius[m]> they're touched in a million places
<enebo[m]> point in time flags may be true of full once full is made but it is a property of full because of what was run on it
<enebo[m]> Operationally removing a scope field has nothing to do with IRScope really. It is more of a piece of optimization which is also living in IRFlags
<enebo[m]> I guess there are actually two pieces to this
<enebo[m]> 1. What flags are used for execution
<enebo[m]> 2. Which flags describe a scope as a truth vs as their current state
<enebo[m]> The venn of those two should just get put into Full and not IRScope
ur5us has joined #jruby
<headius[m]> yeah I started trying to split but again that's invasive for x.x.y
<headius[m]> we do not have a clean way to fully duplicate a scope
<enebo[m]> but do we really have more than one flag we look at? I am thinking we might be able to just split that out
<headius[m]> well, we have call protocol, frame needed, scope needed
<headius[m]> at least those three will step on each other across threads
<enebo[m]> oh I guess the ones we have getters for on Full
<johnphillips3141> might be helpful to have a jruby startup flag to force serialization of certain things so there's something to play with when hitting issues like this one
<headius[m]> moving flags into IC might be a possibility
<enebo[m]> and those should be part of construction of Full really but we have that lazy thing for IRScope
<headius[m]> then until it flips to full you only see startup IC's flags
<headius[m]> once it fully flips you see full IC's flags
<johnphillips3141> almost everything that runs in parallel in the JVM has a way to defeature
<headius[m]> we already have instructions in there
<enebo[m]> johnphillips31416: -Xjit.background=false
<headius[m]> that still won't stop two threads from stepping on each other
<headius[m]> it just stops jit from running in background of the same thread
<enebo[m]> headius: I just mean only moving ones specific to full vs startup
<headius[m]> I was referring to background=false
<headius[m]> may reduce problems but won't eliminate them
<enebo[m]> ah yeah I guess so
<enebo[m]> so I do think this problem is literally just those three flags
<enebo[m]> your other fix was to make Full construction itself lazy but complete
<enebo[m]> if the passes were included as part of that then we would just write those three values as part of making Full
<enebo[m]> the passes themselves would just write to the full directly instead of scope
<headius[m]> those three are definitely a problem but I don't know that they're the only problems
<enebo[m]> well my #1 is what flags do we look at...although perhaps it is not just flags affected
<headius[m]> a first pass could split flags into those that never change (refinements) and those that might change due to optimization
<enebo[m]> at this point it seems like the main issue and we did spend weeks removing IRScope from runtime execution
<headius[m]> and really the ones that never change should just move into booleans and not be flags anymore
<headius[m]> that's not too invasive
<headius[m]> one at a time
<enebo[m]> yeah
<enebo[m]> I think the main issue with a PR would be to have a reference to full in the passes themselves
<headius[m]> I got a late start today so I can work on that this evening and start an incremental PR to isolate these flags
<headius[m]> refinements get set immediately so that can move right now
<enebo[m]> Let's just reiterate some stuff from the last time we tried to fix this
<headius[m]> anything else that's a parsed condition can do the same
<headius[m]> has super, etc
<enebo[m]> FullIC is made in whole now vs before the IR deserialization tried to make instr decoding lazy after the fact
<headius[m]> yeah I believe so
<enebo[m]> That solved a big part of this problem
<enebo[m]> So no fullic until there is one and two threads racing to make the same one will just make one
<enebo[m]> so one wins
<headius[m]> yeah
<enebo[m]> but we still have some temporal issue because when we ask for these three flags in a race it might ask IRScope at the wrong moment and get pre-ACP result
<enebo[m]> or other two flags perhaps
<headius[m]> yeah
<headius[m]> the ICs memoize booleans but they're reading from the same mutable flags set
<enebo[m]> To tie this in a bow...I think so long as those are booleans on fullIC and set before we assign it to IRScope's field then racing versions of itself will still see the same boolean value
<headius[m]> and reducing the exposure of these flags will help get there
<enebo[m]> that is also true if we can
<headius[m]> it's a last mile of moving state into IC to isolate the different forms
<headius[m]> I just looked at other state in IRScope and most of it is static
<enebo[m]> well I think most of those flags are fine in IRScope but they have to be truisms and nothing to do with runtime execution
<headius[m]> nearest closure, parent scope, static scope, stuff like that doesn't change
<headius[m]> yeah
<headius[m]> runtime execution state must all live in IC
<headius[m]> (or implicitly in jitted method)
<enebo[m]> IRFlags is for analysis but must just be "this is always true" sorts of things
<enebo[m]> anything which is about frame/scopes of live data needs to live in ICs
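A toy Ruby sketch of the "compute all flags, then publish" pattern being discussed (illustrative only; the real fix is in Java, where publishing a reference to an object whose fields are final gives safe-publication guarantees that plain Ruby assignment does not):

```ruby
# Hypothetical full interpreter context: flags are fixed at construction
# and the object is frozen, so any thread that sees it sees final values.
class ToyFullIC
  attr_reader :call_protocol, :frame_needed, :scope_needed

  def initialize(call_protocol:, frame_needed:, scope_needed:)
    @call_protocol = call_protocol
    @frame_needed  = frame_needed
    @scope_needed  = scope_needed
    freeze   # no mutation after construction
  end
end

class ToyScope
  attr_reader :full_ic   # nil until optimization completes

  def initialize
    @full_ic = nil
  end

  def optimize!
    # ... run passes, compute final flag values ...
    ic = ToyFullIC.new(call_protocol: true, frame_needed: false, scope_needed: false)
    @full_ic = ic   # publish only after every flag is settled
  end

  def needs_frame?
    ic = @full_ic         # read the field once
    ic ? ic.frame_needed : true   # startup interp always pushes frame/scope
  end
end
```

The point of the sketch: racing readers either see no full context (and conservatively push frame/scope, which startup IR requires anyway) or see a fully built one, never a half-updated flag set.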
<enebo[m]> but some inlining flags also should not be in IRScope
<headius[m]> I'm going to start by killing flags for any bits that are static, like hasRefinements or hasSuper
<enebo[m]> but inliner will need to change to work but it is still "experimental" so that is ok
<headius[m]> stuff that never changes should just be booleans in IRScope and never change
<enebo[m]> yeah they can be
<headius[m]> I guess the alternative is the other direction
<enebo[m]> I think for .13 though you should just address those three flag values
<enebo[m]> for 9.3 I think IRFlags has been more trouble than booleans
<headius[m]> hmmm that might be simpler
<headius[m]> I can start with hasCallProtocol... make it only live in IC
<headius[m]> that's a smaller bite for now
<enebo[m]> yeah smaller for .13 and larger for 9.3
<enebo[m]> but to me I look at them as two things as well
<enebo[m]> runtime state should be in IC and IRFlags has not been fun to use
<headius[m]> I have a picture of what 9.3 should do but it's a more drastic shuffling of state objects
<headius[m]> atomically making any update so you never see partial updates
<enebo[m]> well yeah that sounds good
<enebo[m]> :)
<headius[m]> hopefully that will ultimately just be flipping IC from startup to full
<headius[m]> if we get all this state in ICs
<headius[m]> fixing flags should be the bulk of the problem though
<enebo[m]> yeah I mean what state is not in the IC other than those three fields. I have actually been doing this over a very long time
<headius[m]> not much
<enebo[m]> Like all LVA data was moved into full
<enebo[m]> A second motivation for those moves was not safety but memory
<enebo[m]> Most Scopes never make it to full
<enebo[m]> so why have these unused fields
* headius[m] uploaded an image: Screen Shot 2020-07-14 at 16.18.01.png (50KB) < https://matrix.org/_matrix/media/r0/download/matrix.org/goWXwVHUWxeZaeRKXglTQUNw >
<enebo[m]> but logically it also makes sense...why not have Full have LVA data since startup cannot even use it and a more optimized scope may need to get its own LVA data
<headius[m]> nearly all of that is parsed structure
<headius[m]> some is builder state which is ugly but not visible until done building
<enebo[m]> alreadyHasInline and compilable are for inliner but should not live there but I guess I knew that before
<headius[m]> that needs to go into a builder state object somewhere else
<headius[m]> later
<headius[m]> flags is pretty much the thing
<enebo[m]> yeah otherwise it looks like some flag values
<enebo[m]> yeah and I have actually moved quite a bit into builder
<enebo[m]> some of these indices like nextClosureIndex could still make it to builder
<enebo[m]> tempvariableindex is weird because we do still make new temps in passes
<headius[m]> so this evening I will look at call protocol flag
<headius[m]> kill the flag and move it into an IC boolean
<enebo[m]> yeah I see three inline fields whatever in flags and maybe nextClosureIndex
<enebo[m]> newLabel cannot go because we may make more labels and same with temps
<headius[m]> nextClosureIndex is probably builder state
<headius[m]> just numbering closures as it compiles
<enebo[m]> yeah I think so
<headius[m]> hell some of these may be so we can avoid ArrayList 🙄
<enebo[m]> I have moved quite a bit into builder already too I guess I missed that
<headius[m]> those are noise but not a risk
<enebo[m]> remember we did fret about memory of IR quite a bit a year or two ago
<headius[m]> I could box them up into a state object right now with zero intrusion and they'd be out of the picture
<enebo[m]> but they used to be common data structures before
<enebo[m]> 'them" means which thing?
<headius[m]> the builder state
<headius[m]> just to get it out of the picture
<headius[m]> but I won't for now, you have a better grip on moving those into builder itself
<headius[m]> and they're not a problem
<enebo[m]> well it just goes into IRBuilder as a field
<enebo[m]> I have already moved like 90% of the fields to it already
<headius[m]> for .13 I will focus on the problematic flags
<headius[m]> I will push this as a PR on ir_concurrency branch on JRuby repo
<enebo[m]> IRBuilder goes away and it is simple for it just to ask up
<enebo[m]> ask up the scopes since it is a total analogue
<enebo[m]> to IRScope
<headius[m]> yeah
<lopex> is there a lot of bifurcation between ast and IR nodes?
<enebo[m]> well I guess it depends what you mean
<headius[m]> most of this structure just reflects what's in AST
<headius[m]> that's the safe part
<enebo[m]> There is typically more same or more instrs per AST node
<enebo[m]> Operands are nodes which represent them are the same
<enebo[m]> but for example IfNode will end up with a jump and some labels
<enebo[m]> Is that more or the same?
<lopex> ifNode as in ast ?
<enebo[m]> but if we can tell there is no else it generates it differently so I guess that is a bifurcation
<enebo[m]> yeah
<enebo[m]> all *Node is ast
<enebo[m]> lopex: the mapping is about as straight forward as you would imagine if you examine the *Instr
<enebo[m]> I think the only scary part is defined?
<lopex> but IR does follow the general idea of sea of nodes right, so just boundaries and basic block right ?
<lopex> whatever the terminology
<enebo[m]> yeah lists of BBs with jumps between them
<lopex> I guess that part is universal across all impls
<enebo[m]> and removing jumps when we can
<lopex> and does the size of the IR depend on how big the AST is ?
<enebo[m]> IR is probably generally proportional to AST size
<enebo[m]> IR tends to be larger
<lopex> yeah, more refined
<enebo[m]> but unless we can tell a path is never taken we generate for all pieces of AST
<lopex> less structure but more linearity
<lopex> and the graph is only determined by conditions right ?
<enebo[m]> yeah CFG has flow still but by the time we JIT we linearized all the instrs back to a list
<lopex> ok
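A hypothetical sketch of what "linearize the CFG back to a list" means (toy representation, not JRuby's actual classes): basic blocks connected by jumps are flattened into one instruction list, eliding a jump whose target is the very next block (a fall-through).

```ruby
# A basic block: a label, a list of instructions, and an optional jump target.
BB = Struct.new(:label, :instrs, :jump_to)

def linearize(blocks)
  out = []
  blocks.each_with_index do |bb, i|
    out << [:label, bb.label]
    out.concat(bb.instrs)
    nxt = blocks[i + 1]
    # Emit the jump only when the target does NOT immediately follow.
    out << [:jump, bb.jump_to] if bb.jump_to && (nxt.nil? || nxt.label != bb.jump_to)
  end
  out
end

cfg = [
  BB.new(:entry, [[:eq, :t0, :a, :b]], :exit),   # jump survives: :exit is not next
  BB.new(:then,  [[:copy, :r, 1]],     :exit),   # fall-through: jump elided
  BB.new(:exit,  [[:ret, :r]],         nil)
]
linearize(cfg).each { |instr| p instr }
```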
<lopex> any special treatments for phis ?
<enebo[m]> You know we could probably save some memory by deleting CFG if we know we do not have inlining on once full is made
<lopex> so heuristics ?
<enebo[m]> I do not think it is massive amount of memory but it wouldn't hurt
<enebo[m]> part of me does not want to because I still believe we can be an always on deoptimizing runtime
<lopex> how do I visualize IR in jruby ?
<enebo[m]> lopex: phis are only for SSA right?
<enebo[m]> lopex: we are not SSA
<lopex> ah
<lopex> I thought any merge is a phi
<enebo[m]> lopex: we had a GSOC jruby-visualizer but it is long dead
<lopex> er, so where am I wrong about phis there
<lopex> or how IR is not SSA
<enebo[m]> subbu is not on right now but he made initial decision to not be SSA but tbh I think one problem with SSA is it probably makes interpreting horrible
<subbu> I am here, but I am in the middle of something.
<enebo[m]> you can rewrite all those single assignments to some other reduced set (register allocation) but that is just extra work for something where the first thing you do is interpret it
<lopex> SSA requires register allocation ?
<enebo[m]> nothing prevents us from making an SSA form though (I think). I guess that would come down to benefits
<lopex> I thought it's just a purely functional graph
<lopex> with merges as phis
<enebo[m]> lopex: if I write out a sequence of instrs where every assignment is only used once I have to save/load from something
<enebo[m]> although we could use a graph form
<lopex> yeah, ok, I guess I need some reading
<lopex> enebo[m]: so for every state ?
<lopex> or explicit vars
<enebo[m]> lopex: but you are correct writing SSA in graph form would just mean the return would go to next thing which needs it and you could make an interpreter with that
<enebo[m]> lopex: I was thinking about it in terms of what we have which is a list of (result = instr operand * )*
<lopex> what does that notation mean ?
<lopex> those stars
<enebo[m]> kleene stars
<enebo[m]> zero or more operands and zero or more instrs
<lopex> oh so result is rewritten many times ?
<enebo[m]> well result is a variable of some kind
<lopex> ah ok
<lopex> so a list
<lopex> ok yeah
<enebo[m]> yeah a list
<enebo[m]> This indirection through variables is almost entirely the difference in interp speed
<enebo[m]> from 1.7
<lopex> what indirection ?
<lopex> er sorry for asking those
<enebo[m]> load/stores through variables vs an AST interp where result is a return value
<enebo[m]> we pay a penalty for having temp vars for interpretation but we have a form which is more useful for writing compiler passes
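A toy illustration of the `(result = instr operand*)*` list form and the temp-variable indirection enebo describes (not JRuby's interpreter; instruction names and tuple layout are invented for the sketch). Every instruction stores its result into a temp, and operands are loaded back out of the temp table, which is exactly the load/store overhead an AST walker avoids by returning values directly.

```ruby
# Computing (2 + 3) * 4 as a flat list of (result, op, operands) tuples.
instrs = [
  [:t0,  :add,  2, 3],     # t0 = 2 + 3
  [:t1,  :mul,  :t0, 4],   # t1 = t0 * 4
  [:ret, :copy, :t1]       # ret = t1
]

def interp(instrs)
  temps = {}
  # Symbols are temp-variable references; anything else is a literal.
  fetch = ->(v) { v.is_a?(Symbol) ? temps[v] : v }
  instrs.each do |result, op, *operands|
    a, b = operands.map(&fetch)
    temps[result] =
      case op
      when :add  then a + b
      when :mul  then a * b
      when :copy then a
      end
  end
  temps[:ret]
end

puts interp(instrs)   # prints 20
```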
<lopex> ah, yeah
<lopex> by bits and pices I'll put it together finally
<headius[m]> you and me both
<lopex> don't laugh at me, just taking a first glimpse at IR
<subbu> I am not going to read the whole backlog, but is there a tldr? I see more regular IR related conversations in recent months including pings. :)
<subbu> Has something changed or is something changing? :)
<subbu> but reg SSA, some jruby mailing list post about SSA is how I got involved with jruby in the first place. :)
<lopex> subbu: it will need some digging
<subbu> the idea was to build SSA form but only when it became clear that there were plans to use it appropriately.
<subbu> lopex, what will need digging?
<lopex> the mailing lists
<headius[m]> SSA doesn't work well for interpretation either and we knew we'd need an interpreter
<lopex> and yeah "appropriately" is the key
<subbu> but, back to my earlier qn. ... i sense more IR related conversation and activity around here and are some new plans shaping up?
<lopex> subbu: it was just me just to get better information about it
<subbu> ah, got it.
<lopex> headius[m]: that i series thingy where are we ?
<headius[m]> somewhere under fixing this issue unfortunately
<lopex> wrt I forgot to add
<headius[m]> I haven't gotten back to the guy
<lopex> so it's just a case of delivering a ffi binary for that system right ?
<lopex> fortunately system i has a single binary for anything thanks to https://en.wikipedia.org/wiki/TIMI
<lopex> on system i java is the only system that produces compile-time binaries
<headius[m]> yeah mostly, and probably taking some time to get it tested and know it works
<lopex> otherwise the whole system is bytecode interpreted
<lopex> in c
<lopex> even c
<headius[m]> updating jnr-ffi type mappings, jnr-constants, etc
<lopex> c is interpreted as well
<headius[m]> hah well that's interesting
<lopex> headius[m]: look at that timi article
<lopex> you can essentially do dd to another system of different arch
<headius[m]> I assume you mean the one about the i series instruction set and not the medical term
<lopex> and it will still work
nirvdrum has quit [Ping timeout: 258 seconds]
<lopex> headius[m]: the whole system is bytecode interpreted conceptually
<headius[m]> yeah that's pretty wild
<headius[m]> I suppose it's a bit like how CLR works
<lopex> and it's binary compatible
<headius[m]> "JIT" but only in terms of generating a binary specific to that platform the first time you run it
<lopex> headius[m]: hey IBM did that decades ago
<lopex> headius[m]: by means of AOT ?
<headius[m]> yeah it's not bad and especially for a low-level instruction set it makes a lot of sense
<lopex> yeah
<headius[m]> for a high level language it's a toss-up between that and actual runtime profiled jit for efficiency but these days I suppose you could imagine something like LLVM instructions being redistributed and you just do the last phase at install time
<lopex> headius[m]: and, in this ecosystem there is a term "entire"
<lopex> entire is a binary backup which you can feed to another arch
<lopex> how cool is that
<headius[m]> yeah I can't think of anything equivalent on modern systems
<lopex> yeah
<lopex> and it's all hardware
<lopex> indexing cpus, security cpus
<lopex> all hardware
<lopex> but, like 10 years ago i series were called mainframes for the poor
<lopex> but now mainframes are so costly only china uses them
<lopex> of course it's a myth
<lopex> but hey
<headius[m]> the cloud is the mainframe now
<lopex> but there's no mainframe
<lopex> headius[m]: we're almost entirely using http://jt400.sourceforge.net/ now
<lopex> insane
<headius[m]> wow lots of stuff to customize
<lopex> look at prop list
<headius[m]> yeah
<lopex> headius[m]: it's not even that
<lopex> headius[m]: if you want to have data isolation like committed isolation
<headius[m]> how closely do you work with these systems?
<lopex> then look at "concurrent access resolution"
<lopex> three choices
<lopex> skip lock -> disregard locked
<lopex> lock -> well lock
<lopex> wait -> well wait
<lopex> er
<lopex> ah,
<lopex> wait for outcome
<lopex> and then read committed
<lopex> headius[m]: it's madness
<lopex> headius[m]: not closely at all
<lopex> headius[m]: it's just a bizarre system you have to work with
<lopex> when ppl tell you
<headius[m]> yeah you really have to know what you want these operations to map to in the system
<lopex> yeah, we have some really tricky perf-wise problems for those systems
<headius[m]> ok call protocol wasn't too hard to move to IC
<headius[m]> there's a problem with having fullInterpreterContext field readable before it's done optimizing though
<lopex> but those are running on some quite substantial scale
<lopex> I wish I could run jruby on one once
<lopex> for now I run jruby interfacing one of them
<lopex> but i series native, that's discouraging
<headius[m]> yeah I am curious how well we run
<headius[m]> directly on the system
<lopex> do you know they still run WS on those ?
<headius[m]> hah nice
<headius[m]> is WS even a product anymore?
<lopex> yeah I think
<lopex> but ffi on those would be interesting
<lopex> not very helpful though
<headius[m]> we still run pretty well without the jffi stuff but it will be good to have that working
<lopex> the reason is jt400 kindof matches system i remote functionality
<lopex> you can do anything with jt400
<lopex> btw
<lopex> headius[m]: look how far ppl have gone https://bitbucket.org/ibmi/opensource/wiki/Home
<lopex> whole community could be built on it
<lopex> and hah "JRuby ActiveRecord DB2 for i Adapter"
<headius[m]> hahah I just saw that too
<headius[m]> we're important!
<headius[m]> this is a really weird roundup
<lopex> yeah
<lopex> really weird
<headius[m]> I suppose it's hard to get long-time i-series folks to trust OSS
<lopex> quite a lot of activity for such a closed platform
<lopex> good for them for jtopen
<headius[m]> yeah
<lopex> for WAS there's some jython deployment framework
<lopex> both for system i and mainframe
<lopex> but it's from those old days of "application servers"
<lopex> headius[m]: do you laugh at that to ?
<headius[m]> yeah I suppose that makes more sense
<headius[m]> it's funny to think about application servers now
<lopex> there's none
<headius[m]> I still say serverless is just a big application server
<lopex> as a node
<lopex> but no managed beans from old days
<lopex> remember ?
<headius[m]> well I think it's sort of implicit
<headius[m]> and no good standards
<headius[m]> you can use some database as a service through a cloud-specific API that might be ORM-like
<headius[m]> it's the same stuff all over again, except now the application server is the cloud and the services are the functions in faas
<headius[m]> ok I'm going to try to get this fix done, bbl
<lopex> headius[m]: but even from old days I was always resistant to managed resources
<lopex> app wise
<lopex> what does it mean app1 has resource res1 managed by server3
<lopex> headius[m]: gn for now!