slyphon has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
slyphon has joined #jruby
ahorek has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
slyphon has quit [Client Quit]
hoi has joined #jruby
hoi has quit [Client Quit]
hoi has joined #jruby
hoi has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
hoi has joined #jruby
hoi has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
hoi has joined #jruby
jrafanie has joined #jruby
slyphon has joined #jruby
hoi has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
Antiarc has quit [Ping timeout: 256 seconds]
hoi has joined #jruby
Antiarc has joined #jruby
jrafanie has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<hoi> has anyone connected Redshift with JRuby?
Caerus has joined #jruby
projectodd-ci has joined #jruby
sgeorge has joined #jruby
sgeorge has quit [Remote host closed the connection]
hoi has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
sgeorge has joined #jruby
sgeorge has quit [Remote host closed the connection]
ahorek has joined #jruby
ahorek has quit [Client Quit]
hoi has joined #jruby
rdubya has quit [Ping timeout: 260 seconds]
Puffball has quit [Remote host closed the connection]
shellac has joined #jruby
shellac has quit [Read error: Connection reset by peer]
ahorek has joined #jruby
ahorek has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
slyphon has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
hoi has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
claudiuinberlin has joined #jruby
<GitHub148> [jruby] boris-petrov opened issue #5260: Cannot install RuboCop 0.58.{1,2} https://git.io/fNRGX
ahorek has joined #jruby
ahorek has quit [Client Quit]
drbobbeaty has joined #jruby
drbobbeaty has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
drbobbeaty has joined #jruby
rdubya has joined #jruby
ahorek has joined #jruby
ahorek has quit [Client Quit]
<kares> enebo: got much better _parse by going native and doing string checks MRI does
<kares> which means we can avoid going to parse_eu and parse_us methods for strings like '2018-01-01'
<kares> this is what you already noticed locally by commenting them out - gives us 2x improvement
<kares> the only thing where JRuby (non-graal) is worse is Date.parse('2018-07-17', false): 17.8 vs 12.2 on MRI
<kares> ... Date._parse('2018-07-17 21:20:55') is beaten, yay!
<kares> now to polish up the PR and hopefully good to go ...
jrafanie has joined #jruby
jrafanie has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
sgeorge has joined #jruby
sgeorge has quit [Remote host closed the connection]
hoi has joined #jruby
<kares> last piece - added some custom code to avoid time parsing for raw dates -> 3x faster, MRI tests seem to pass
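(A minimal sketch of the kind of fast-path check kares describes: bail out of the generic parse_eu/parse_us string scans when the input is shaped like a raw '2018-01-01' date. The Java below is illustrative only; the method name and exact check are hypothetical, not the code that landed in JRuby.)

    final class RawDateFastPath {
        // true only for strings shaped like YYYY-MM-DD, e.g. "2018-01-01"
        static boolean looksLikeRawDate(CharSequence s) {
            if (s.length() != 10 || s.charAt(4) != '-' || s.charAt(7) != '-') return false;
            for (int i = 0; i < 10; i++) {
                if (i == 4 || i == 7) continue;       // the two dashes
                char c = s.charAt(i);
                if (c < '0' || c > '9') return false; // everything else must be a digit
            }
            return true;
        }
    }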
sgeorge has joined #jruby
slyphon has joined #jruby
jrafanie has joined #jruby
slyphon has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
slyphon has joined #jruby
sgeorge has quit [Remote host closed the connection]
sgeorge has joined #jruby
sgeorge has quit [Remote host closed the connection]
claudiuinberlin has quit [Quit: Textual IRC Client: www.textualapp.com]
xardion has quit [Remote host closed the connection]
xardion has joined #jruby
claudiuinberlin has joined #jruby
sgeorge has joined #jruby
claudiuinberlin has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
hoi has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
<enebo> kares: you can land what you have if it passes tests
<enebo> kares: I know you are done for now but I keep looking at us using a RubyHash and realize if it was all native it would be a simple value type and likely just dumb fields
<enebo> kares: so I guess we can get more later if we want to keep pushing that direction
hoi has joined #jruby
hoi has quit [Client Quit]
claudiuinberlin has joined #jruby
sgeorge has quit [Remote host closed the connection]
sgeorge has joined #jruby
sgeorge has quit [Ping timeout: 244 seconds]
sgeorge has joined #jruby
sgeorge has quit [Ping timeout: 244 seconds]
Puffball has joined #jruby
<havenwood> I'm updating ruby-versions metadata so ruby-install can use the new Maven location to fetch binaries. Most of the bins are the exact same checksums, but I noticed a few anomalies with the checksums compared to the same versions on AWS.
<havenwood> jruby-dist-1.7.19-bin is a different checksum for both zip and tar.gz, and jruby-dist-9.1.17.0-bin.zip is as well, just the zip.
<havenwood> The rest of the binaries are the same checksums compared to the old versions.
<havenwood> enebo: Should I just defer to the new checksums ^ for the few that changed?
<havenwood> I'm just updating ruby-versions for the dist-bin, version 1.7.5 and later.
<havenwood> If earlier .tar.gzs are added, or more src versions, I'd be happy to update ruby-versions with those as well.
<havenwood> Here's the md5 checksum comparison: https://gist.github.com/havenwood/efe34bfc058283324d1687f1dc8f0efe
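(For reference, a small Java sketch of the comparison havenwood is doing: hash a locally downloaded copy of each artifact and diff the hex digests. The file paths are hypothetical placeholders.)

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class Md5Check {
        static String md5Hex(Path file) throws Exception {
            MessageDigest md = MessageDigest.getInstance("MD5");
            try (InputStream in = Files.newInputStream(file)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) md.update(buf, 0, n);
            }
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest()) sb.append(String.format("%02x", b));
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            // hypothetical local copies of the same artifact fetched from both hosts
            System.out.println(md5Hex(Paths.get("maven/jruby-dist-1.7.19-bin.zip")));
            System.out.println(md5Hex(Paths.get("s3/jruby-dist-1.7.19-bin.zip")));
        }
    }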
sgeorge has joined #jruby
<kares> enebo: okay, CI looking good so going to land on master
sgeorge has quit [Ping timeout: 264 seconds]
<kares> right, that RubyHash could go away - but you might need to move _parse_xxx left-overs to native
<kares> which is straightforward but I got tired of it, for now :)
<kares> might take a look at whether DateTime.iso could get some boost tomorrow
<GitHub128> [jruby] kares closed pull request #5259: [refactor] improve Date parsing performance (master...date-speed) https://git.io/fN4l0
<GitHub39> [jruby] kares pushed 12 new commits to master: https://git.io/fN0Wv
<GitHub39> jruby/master 3ff8998 kares: [refactor] use match? instead of =~ where possible
<GitHub39> jruby/master ddaad41 kares: review regexp match?-ing - MRI doesn't do dynamic dispatch...
<GitHub39> jruby/master 0e00652 kares: use internal str.sub! for date parsing (without frame info)...
<GitHub36> [jruby] kares closed issue #5255: Date parsing (still) noticeably slower than MRI https://git.io/fNn7P
sgeorge has joined #jruby
sgeorge has quit [Remote host closed the connection]
sgeorge has joined #jruby
<GitHub178> [jruby] danielford opened issue #5261: File.utime failing with Errno::EINVAL on JRuby 9.1.17.0 https://git.io/fN08q
<lopex> enebo: they also use some other specialized code in "match?"
<lopex> like rb_str_subpos
<enebo> lopex: ok
<lopex> irrelevant for our benchmark now
<enebo> lopex: that method is just an optimized way of walking into the string n chars?
<lopex> enebo: can you remind me what that useCnt is there for?
<enebo> lopex: no I just deleted it in my version since it did not make sense to me
<lopex> enebo: and release joni
<lopex> enebo: and it's buggy for me
<enebo> lopex: oh and btw matcherSearch defaults to search and not match
<lopex> enebo: they use search in "match?"
<enebo> apparently
<enebo> I got a good jump out of switching that
<enebo> It is in my diff
<enebo> I did not change it cleanly though
<enebo> lopex: but you raise a good question: perhaps capturing match should use match as well
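(A rough sketch of the joni calls under discussion, using the public org.joni API; the toy pattern and input are made up. search() scans forward from the start position, which is what the match? path wants, while match() only tries the match anchored at the given offset.)

    import org.jcodings.specific.UTF8Encoding;
    import org.joni.Matcher;
    import org.joni.Option;
    import org.joni.Regex;

    public class SearchVsMatch {
        public static void main(String[] args) {
            byte[] pat = "abbb".getBytes();
            Regex regex = new Regex(pat, 0, pat.length, Option.NONE, UTF8Encoding.INSTANCE);

            byte[] str = "xxabbbyy".getBytes();
            Matcher matcher = regex.matcher(str);

            // search: finds the pattern anywhere at or after position 0 -> 2 here
            int found = matcher.search(0, str.length, Option.NONE);
            System.out.println("search -> " + found + " [" + matcher.getBegin() + "," + matcher.getEnd() + ")");

            // match: must match starting exactly at position 0 -> -1 (failure) here
            int matched = matcher.match(0, str.length, Option.NONE);
            System.out.println("match -> " + matched);
        }
    }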
<lopex> enebo: going to buy a beer be back in 10 mins
<lopex> apparently you can't buy beer in Poland after 10pm
<lopex> since yesterday
<enebo> LAME
<lopex> police country
<lopex> enebo: can you release joni ?
<enebo> lopex: sure is it ready then?
<lopex> yes
<enebo> ok will release then
<lopex> enebo: we can sacrifice that msaBegin/msaEnd for now
<enebo> did you remove it then?
<enebo> in else
<enebo> or you are saying we can remove it later
<lopex> now, I mean it's unnecessarily computed now
<lopex> but we need it still
<enebo> ok so you did not remove it but we don't need it in this case
<lopex> enebo: the bigger gain is where there are no captures at all and we don't allocate regions
<enebo> I don't know what you are telling me
<lopex> enebo: we have three cases for that branch
<lopex> enebo: captures > 0, captures == 0 -> region -> null, and forced null for region
<lopex> so we'd have to distinguish it somehow
<lopex> like null region object or something like that
<enebo> lopex: ah I see so you mean we need more state if we want to not have that else compute
<lopex> enebo: or pass null region object on matcher creation
<enebo> lopex: which is almost funny since I passed in a boolean for that and you decided we should pass null
<enebo> lopex: or is passing null the same thing possibly?
<lopex> enebo: since for "a" =~ /a/ we still need to create beg[0] / end[0]
<enebo> lopex: we set a boolean if region passed is null?
<lopex> enebo: no since we have to compute the boundaries
<lopex> er
<lopex> enebo: I put that decision a bit earlier
<enebo> lopex: I guess what I heard you say is that else cannot be removed since we want to support 3 cases but one case does not need it
<enebo> yeah I just said you pass region in
<lopex> enebo: yeah, third state could be null region object
<lopex> enebo: but the computation is very cheap so lets leave it for now
<enebo> I guess my only confusion is msaRegion is sometimes null or eager would not work
<lopex> eager is not used now
<enebo> lopex: yeah but you said you wanted it
<enebo> lopex: I am just trying to understand how we can tell the difference between the 3 cases
<enebo> lopex: I am ok with not doing it now
<lopex> so if msaRegion is null then msaBegin / msaEnd play role of msaRegion[0].beg / msaRegion[0].end
<lopex> enebo: ^^
<enebo> meaning we always need them because we have no regions OR that we need them in case we want to eagerly make a region?
<lopex> let's forget the eager thing completely
<lopex> not used
<enebo> ok so we need those fields regardless of whether we have regions or not
<lopex> enebo: except for "match?"
<lopex> that's the third case
<enebo> so if we search with null regions we need them
<enebo> but if we match? with them we don't?
<lopex> not for "match?"
<enebo> so only match? we don't need them
<enebo> At this point I only see two cases but I think that is because you told me to ignore the third case of eager
<lopex> enebo: we need them only for this case "a" =~ /a/
<lopex> no regions
<lopex> implicit zero group
<lopex> eager is not a third case
<enebo> ah so no capturing but we still need to know where match starts/stops
<lopex> yes!
<enebo> whew I knew we would get there!
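(A hedged illustration of the three cases just worked out; the helper below is hypothetical, not JRuby code, and assumes Region still exposes its beg/end arrays. With captures, group 0's bounds live in the Region; with no captures, the matcher's own begin/end stand in for the implicit group 0; for match? neither is needed, only the boolean result of the search.)

    import org.joni.Matcher;
    import org.joni.Region;

    final class MatchBounds {
        final int begin, end;
        MatchBounds(int begin, int end) { this.begin = begin; this.end = end; }

        // call after a successful search; for match? skip this entirely
        static MatchBounds of(Matcher matcher) {
            Region region = matcher.getRegion();
            if (region != null) return new MatchBounds(region.beg[0], region.end[0]); // captures present
            return new MatchBounds(matcher.getBegin(), matcher.getEnd());             // implicit group 0 only
        }
    }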
<lopex> oh maybe I should have put it your way
<enebo> I have a similar issue with my oj port
<enebo> the original C preallocs the stack and needs a final value
<lopex> enebo: we could number capturing groups from 1 though, but that would create more confusion
<enebo> but it does not use an extra stack element for final value
<lopex> like use always msaBegin / msaEnd
<lopex> but on jruby api side it would have more complex code
<enebo> however it still writes to stack element 0 even when stack is empty
<lopex> or hmm
<enebo> lopex: that would be confusing impl wise as well
<lopex> I recall that it had problems
<enebo> lopex: simplest impl would be to always have at least one region
<enebo> but then you are indirecting this simple case with a box
<lopex> but you would allocate
<enebo> yeah and allocate the box
<enebo> If Java had value objects we would not care
<lopex> I forgot why we store beg/end on RubyMatchData though
<enebo> heh
<lopex> it was 10 years ago though
<enebo> lopex: ok but I see we can save an alloc
<lopex> enebo: three allocs
<enebo> and if it is as simple as /a/ then every little opt probably helps
<lopex> region and two arrays
<enebo> oh true
<enebo> Almost makes me think we could add region to ThreadContext and pass it in
<lopex> lots of regexps don't have captures
<enebo> not that I am advocating that
<enebo> I have that thought a lot though :P
<lopex> we'd need to realloc when bumping up
<lopex> on a new regexp with more captures
<lopex> like that ?
<enebo> lopex: release succeeded. not sure how long it will take to appear though
<enebo> lopex: yeah I guess we have to cope with growing region
<enebo> lopex: is region just two arrays of ints?
<lopex> MRI reallocs region
<lopex> yes
<enebo> we could model region as a single array
<lopex> it could be one
<enebo> ok same page
<enebo> anyways this is probably not reasonable
<enebo> pre-allocing a largish single primitive int array stored on context
<enebo> then all match/search would pass that in
<enebo> no alloc and no penalty for having it past assignment
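(A hypothetical sketch of the "largish primitive int array on the context" idea: one flat scratch array per thread, beg at 2*i and end at 2*i+1, regrown only when a regexp with more capture groups shows up. Not actual JRuby code; match results would have to be copied out before the next match reuses the scratch.)

    final class ScratchRegion {
        private int[] begEnd = new int[2 * 16];   // room for 16 groups up front

        // hand this to every match/search on the owning thread
        int[] acquire(int numRegs) {
            int needed = 2 * numRegs;
            if (begEnd.length < needed) {
                // realloc when "bumping up", like MRI reallocs its region
                begEnd = new int[Math.max(needed, begEnd.length * 2)];
            }
            return begEnd;
        }
    }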
<lopex> enebo: I think someone copied that usecnt from mri without a reason
<lopex> mri needs it so that it can free it when the preprocessed pattern is not the original one
<lopex> enebo: I wonder how mri does on patterns that change during preprocessing
<enebo> lopex: ok well I could not understand what it was for and the behavior was all commented out (and MRI code)
<lopex> since it's constant malloc/free
<lopex> we just hit preprocessed cache
<enebo> yeah
<enebo> lopex: the pos code you originally linked from MRI we basically do already
<enebo> in rbStrOffset(post)
<lopex> enebo: it's different
<enebo> It is generic in calling nth but it still is sb optimizable
Puffball has quit [Ping timeout: 240 seconds]
<lopex> enebo: it's rb_str_offset
Puffball has joined #jruby
<lopex> I was talking about rb_str_subpos
<enebo> lopex: but is it really different in behavior?
<enebo> lopex: I guess I don't see how that would be faster than rbStrOffset
<lopex> enebo: well
<lopex> enebo: it's different :P
<enebo> lopex: yeah
<enebo> lopex: It must have improved something I guess
<enebo> lopex: for VALID utf-8 we can probably use nirvdrum logic and walk continuation bytes
<enebo> lopex: but I would do that generically and not just for this
<lopex> enebo: what walk ?
<enebo> lopex: walk to subpos
<enebo> lopex: if we want to start at 5 it is five codepoints right?
<enebo> for valid UTF8 we need not examine every byte
<lopex> and ?
<lopex> well not every byte
<lopex> only first one
<lopex> enebo: yeah, I promised a call graph for those length functions
<lopex> we need to add another length to encoding
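(A minimal sketch of the "walk continuation bytes" idea for strings already known to be valid UTF-8: only lead bytes are inspected, so advancing n codepoints does not touch every byte. Hypothetical helper, not the jcodings/JRuby code path.)

    final class Utf8Walk {
        // returns the byte offset after skipping n codepoints, assuming valid UTF-8
        static int skipCodePoints(byte[] bytes, int p, int end, int n) {
            while (n-- > 0 && p < end) {
                int b = bytes[p] & 0xff;
                if      (b < 0x80) p += 1;   // ASCII
                else if (b < 0xE0) p += 2;   // 2-byte sequence
                else if (b < 0xF0) p += 3;   // 3-byte sequence
                else               p += 4;   // 4-byte sequence
            }
            return Math.min(p, end);
        }
    }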
claudiuinberlin has quit [Quit: Textual IRC Client: www.textualapp.com]
<enebo> lopex: yeah
<enebo> lopex: I still have never followed through on my assertion that caching length on string would pay for itself
<lopex> no profile data
<enebo> lopex: There would be a tiny amount of cost for sb case
<lopex> so just guessing
<enebo> lopex: but it is so expensive in mbc case
<enebo> lopex: but yeah no evidence and it would not be faster for sure in sb case
<lopex> enebo: c deals with it from the very beginning :P
<lopex> and most code ranges are sb
<enebo> I was told by MRI dev(s) that the reason length was never considered is because of lack of space in their struct
<lopex> or are they ?
<lopex> heh
<lopex> so why doesn't jruby do that ?
<enebo> lopex: well likely they are but killing perf for mbc at the cost of sb being a tiny bit slower might not be a good tradeoff
<lopex> because mri didnt have space for that
<enebo> we just have never tried
<enebo> and we ported their logic to some degree
<enebo> remember how long m17n took
Caerus has quit [Ping timeout: 268 seconds]
<lopex> it took me a year for string alone
<enebo> I think in my view of this was we were not going to deviate from MRI until we were confident we were correct
<lopex> yes
<lopex> that was my attitude as well
<enebo> so adding length could be done as an appendage/add-on but I also thought about using length field for CR as well
<lopex> and yet you have to be bug-to-bug compatible
<lopex> length for cr ?
<enebo> so negative values could indicate unknown/valid
<lopex> ah, I recall now
<enebo> arr.length with 7bit env is just length
<lopex> but it would have to go through centralized api
<enebo> err arr.length and length is 7bit
<lopex> otherwise you'd be lost
<enebo> well is is7bit() sure
<enebo> the methods would be super small and inline
<enebo> well I would lay money on that anyways
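(A hypothetical sketch of the cached-length idea being debated: a single int field on the string, negative meaning "not computed yet", so the sb/7-bit case stays a plain field read while the mbc case pays the scan only once. Not how RubyString/ByteList actually work today.)

    final class CachedLengthString {
        private final byte[] bytes;
        private int charLength = -1;                        // negative: unknown

        CachedLengthString(byte[] bytes) { this.bytes = bytes; }

        int charLength(boolean is7bit) {
            if (charLength >= 0) return charLength;         // cached
            if (is7bit) return charLength = bytes.length;   // 7bit: chars == bytes
            return charLength = countUtf8CodePoints();      // mbc: scan once, then cache
        }

        private int countUtf8CodePoints() {
            int n = 0;
            for (byte b : bytes) if ((b & 0xC0) != 0x80) n++;  // skip continuation bytes
            return n;
        }
    }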
<enebo> lopex: joni is present!
<enebo> on maven
drbobbeaty has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<nirvdrum> Whoa. That's a lot of backreading. Is it worth me catching up?
<enebo> nirvdrum: not really. we are just removing regions so you can implement match_p without setting them up
<enebo> nirvdrum: most of that was not understanding why we had regions and two ints
<enebo> nirvdrum: one thing of interest is rb_str_subpos exists for pos repositioning for match_p (and a couple of other things). Why they did not just use their normal char walking code is a mystery
<enebo> could just be some microopt we don't really understand
<nirvdrum> So how are you tracking match positions if you remove regions?
<lopex> enebo: ok
<enebo> nirvdrum: no but match? doesn't
<lopex> nirvdrum: via two int fields in matcher
<enebo> well we do but we don't actually need to for match?
<lopex> "match?"
<enebo> nirvdrum: actually that was what started part of that discussion was that when regions are disabled we still calc beg/end for the match even though match_p doesn't need that
<enebo> It is a tiny amount of logic though so not likely very important
<nirvdrum> Ahh.
<enebo> lopex: you saw that joni is on maven repos
<nirvdrum> I think the only thing we do differently for `match?` right now is avoid setting `$~`.
<lopex> enebo: I believe you
<lopex> enebo: I was testing with local copy
<enebo> lopex: just making sure you know :)
<lopex> nirvdrum: now you can force the region to be null
<lopex> using reg.matcherNoRegion
<nirvdrum> Nifty.
<enebo> nirvdrum: yesterday we talked about idea we can make a more specialized interp for match_p to shave some of this regionish logic out but it likely would not be a big gain
<enebo> nirvdrum: but since we have the method we can change that impl any time if we decide to play with it
<nirvdrum> Cool.
<lopex> nirvdrum: also there's a possibility to use separate interpreter that omits some group logic in joni
<nirvdrum> I suppose my next big mountain to climb is really figuring out what joni is doing.
<nirvdrum> We ported code from JRuby and slapped a boundary around the whole thing. It mostly works, but isn't ideal.
<lopex> like the group is not referred by \1 for example
<nirvdrum> But I never really know where to start.
<lopex> but groups largely group so it's inevitable for the most part
<lopex> enebo: I wonder how much the semantics changes when you just (?..)
<lopex> er (?:..)
<nirvdrum> Not even remotely related to what you guys are talking about, but I'd really, really, really like to get basic regexp patterns without capture to be as fast as a substring search.
<lopex> enebo: we could have an external array which says which groups are capturing
<enebo> lopex: oh so you mean we'd create regions for (?:...) along with capturing regions as the same data?
jrafanie has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<enebo> lopex: if so then won't match? be broken
<lopex> nirvdrum: but even then there is a question of what fast-skip algo you use
<lopex> enebo: something like changing capturing to not capturing
<lopex> enebo: if not referred
<enebo> I am not quite following. how would we know if it was referred or not
<enebo> (?: ...) is never referred
<nirvdrum> We end up down paths with code like Regexp.new(Regexp.quote(pattern)) and pattern ends up being ',' or "\n".
<lopex> nirvdrum: like for example for /foo/ you can already build a Boyer-Moore map
<enebo> but () may be but unless you mean \1 then I don't get it
<lopex> nirvdrum: but regexps are mostly known at parse time
<lopex> so it's all tradeoffs
<nirvdrum> So the regexp is a single ASCII char, no modifiers, no bounds, no captures.
<nirvdrum> It ideally would work the same as indexOf.
<lopex> enebo: (?:...) is just not capturing
<enebo> AST generation does validate regexp so we could allow joni to know if it is a simple string
<enebo> lopex: isn't it?
<enebo> I never remember the syntax
<nirvdrum> enebo: In this case it'd be a runtime thing.
<enebo> I thought (?: was non-capturing group
<lopex> enebo: not capturing group
<lopex> enebo: I said otherwise ?
<enebo> lopex: I don't understand what you are asking or why now
<nirvdrum> Anyway. I didn't mean to derail your conversation.
<lopex> enebo: well we are in agreement, I'm confused
<enebo> nirvdrum: well AST can be marked as regexp but we could also mark it as no special chars
<enebo> lopex: you brought up non-capturing and then said something about tracking them separately from regions
<enebo> lopex: so I did not bring this up at all. I think I just did/do not understand what you meant before
<enebo> nirvdrum: at IR build time for us we could implement it as simple string search
<enebo> nirvdrum: or we could make joni have an optimized implementation for just that
<lopex> enebo: something like the capture can be disabled later on
<enebo> lopex: oh! like we can tell after we have run it that the code using it never requires backref so we remove the regions?
<lopex> enebo: yes, but just changing the interpreter loop
<enebo> It is unfortunate that $~ lives past current stack
<lopex> and some array of numbers
<nirvdrum> enebo: For match? you could just rewrite it to String#index(pattern) != 0. But I'd like to have it optimized for `match` as well. In that case you would need to know the match boundaries.
<lopex> nirvdrum: wrt tradeoffs, something like "looongstringbefore abcd" =~ /abcd/
<nirvdrum> I just think joni ends up going down a more complicated path.
<lopex> nirvdrum: joni will build a Boyer-Moore map for abcd
<lopex> nirvdrum: and then fast skip to the interesting point before it even enters interpreter loop
<nirvdrum> lopex: This is where I shamefully admit I don't know what that is :-P
<lopex> nirvdrum: not all indexOfs have this
<lopex> nirvdrum: Boyer-Moore ?
<nirvdrum> Yeah. I'll read up on it.
<lopex> nirvdrum: nowadays mri uses Sunday search, it's a modification
<enebo> lopex: I think the notion though is that knowing at parse time it can be something much simpler means not feeding it into the joni engine
<lopex> nirvdrum: but the gist is you just build a skip map from the string you search for
<nirvdrum> Really, my understanding of regexp engines is limited to foundational automata. The pumping lemma and such.
<lopex> it's just string searching algos
<enebo> even if joni is super fast all the code around getting to that fast execution is not free
<lopex> nirvdrum: and you advance faster given that map
<nirvdrum> Gotcha.
<lopex> but you have to build it first, so indexOf could do that on some length threshold
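(A toy Boyer-Moore-Horspool search to make the "skip map" concrete: a table built once from the needle lets the scan jump forward by more than one byte per comparison. Illustrative only; joni's actual forward-skip code differs.)

    final class Horspool {
        static int indexOf(byte[] haystack, byte[] needle) {
            int n = haystack.length, m = needle.length;
            if (m == 0) return 0;

            int[] skip = new int[256];
            java.util.Arrays.fill(skip, m);
            for (int i = 0; i < m - 1; i++) skip[needle[i] & 0xff] = m - 1 - i;

            int pos = 0;
            while (pos <= n - m) {
                int j = m - 1;
                while (j >= 0 && haystack[pos + j] == needle[j]) j--;
                if (j < 0) return pos;                        // full match at pos
                pos += skip[haystack[pos + m - 1] & 0xff];    // jump by the skip table
            }
            return -1;
        }
    }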
<enebo> I think I am on a different wavelength on optimizing that case now
<enebo> I don't think it should have anything to do with joni other than joni pointing out it is this simple case
<nirvdrum> lopex, enebo: Perhaps I'm advocating for making some of these operations encoding and code range aware.
<nirvdrum> But I say that naively not having looked at the internals. If it's just a byte machine that may not even matter.
<lopex> wtf is Horspool
<enebo> even if joni is faster at finding the match on a simple string all the shit we plow through before we hit that fast code is substantial
<nirvdrum> indeed.
<nirvdrum> And Graal isn't going to help us inline through it.
<enebo> for truffle no doubt would just make a very simple specialized path for it
<nirvdrum> TRegex may do that. I haven't played with it yet.
<enebo> for IR we could do it a couple of ways
<enebo> a =~ /a/ would still need to say it is executing match in the stack trace, so some sleight of hand
<nirvdrum> I could provide a specialization for these simple cases, but then I need to maintain my own equivalent to regions and such so the `MatchData` instances can be constructed properly. It's doable, but I certainly don't want my own ad hoc limited regexp engine.
<lopex> nirvdrum: https://www.youtube.com/watch?v=hj4VmvyqbKY benchmarking talk, truffleruby is there
<lopex> in case you havent seen that
<enebo> nirvdrum: but literally only for cases like /\n/ where match would be pretty simple
<enebo> start/end is trivial in simple substring match
<nirvdrum> lopex: I haven't. But I'll check that out. I believe Chris has worked with Edd in the past.
<lopex> though I always saw degradations before steady state was reached on hotspot
<nirvdrum> enebo: I ended up down this chain of thought when encountering this snippet from csv.rb: parse.sub!(@parsers[:line_end], "")
<enebo> oh yeah and in fact this would be more complicated for us since sub! is a call
<nirvdrum> Which basically is the same thing as String#chomp, but looks up a regexp from a map and uses that as an argument to String#sub!
<lopex> beauty
<enebo> so we would need to pass in a type which had match/whatever but was not specifically a RubyRegexp
<nirvdrum> I think it was written this way so you could use something other than "\n" to demarcate different rows. But, I doubt anyone ever really does that.
<enebo> yeah I doubt that as well but \r\n may be possible perhaps
<lopex> enebo, nirvdrum: this is what onigmo uses to match newlines in almost every opcode related https://github.com/k-takata/Onigmo/blob/master/regexec.c#L68
<lopex> can it get any worse ?
<lopex> it's very hot code
<lopex> we dont have this yet
<enebo> lopex: who knows
<nirvdrum> lopex: So if it's not CRLF it calls ONIGENC_IS_MBC_NEWLINE? What's an MBC newline?
sgeorge has quit [Remote host closed the connection]
<lopex> enebo: our isNewLine on encoding
<lopex> nirvdrum: ^^
<nirvdrum> Okay.
<lopex> but you get that option check
<nirvdrum> I guess I'd really like to see something that checks the last byte for LF if the encoding is ASCII-compatible.
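(A sketch of the fast path nirvdrum is asking for, assuming the jcodings Encoding API (isAsciiCompatible, isNewLine): when the encoding is ASCII-compatible a newline at a position is just the byte 0x0A, otherwise defer to the encoding's multi-byte-aware check. Hypothetical helper, not onigmo/joni code.)

    import org.jcodings.Encoding;

    final class NewlineCheck {
        static boolean isNewlineAt(byte[] bytes, int p, int end, Encoding enc) {
            if (p >= end) return false;
            if (enc.isAsciiCompatible()) return bytes[p] == '\n';   // single-byte LF check
            return enc.isNewLine(bytes, p, end);                    // multi-byte-aware fallback
        }
    }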
sgeorge has joined #jruby
sgeorge has quit [Ping timeout: 264 seconds]
ahorek has joined #jruby
ahorek has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
sgeorge has joined #jruby
sgeorge has quit [Remote host closed the connection]
sgeorge has joined #jruby
sgeorge has quit [Ping timeout: 244 seconds]