Liothen has quit [Quit: The Dogmatic Law of Shadowsong]
Liothen has joined #jruby
jmalves has joined #jruby
shellac has joined #jruby
drbobbeaty has joined #jruby
drbobbeaty has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
shellac has quit [Quit: Computer has gone to sleep.]
shellac has joined #jruby
shellac has quit [Quit: ["Textual IRC Client: www.textualapp.com"]]
shellac has joined #jruby
jmalves has quit [Ping timeout: 260 seconds]
jmalves has joined #jruby
drbobbeaty has joined #jruby
bbrowning_away is now known as bbrowning
bartsyts has joined #jruby
bartsyts has quit [Client Quit]
nelsnelson has joined #jruby
shellac has quit [Ping timeout: 252 seconds]
xardion has quit [Remote host closed the connection]
xardion has joined #jruby
nels has joined #jruby
nelsnelson has quit [Ping timeout: 260 seconds]
enebo has quit [Ping timeout: 240 seconds]
shellac has joined #jruby
xardion has quit [Quit: leaving]
xardion has joined #jruby
enebo has joined #jruby
nels has quit [Read error: Connection reset by peer]
nelsnelson has joined #jruby
shellac has quit [Ping timeout: 260 seconds]
Eiam has joined #jruby
claudiuinberlin has joined #jruby
<headius> hmmmm
<headius> ChrisBr: you around?
shellac has joined #jruby
<kares> enebo: unfortunately I didn't have the time to look into jline
<kares> I mean I did last week - set up a hook to restore the terminal from a runtime finalizer
<headius> we need to figure out if rb-readline would be feasible to ship
claudiuinberlin has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<kares> but that somehow doesn't work, and I tried restoring by hand in irb (unwrapping and calling Java directly) but without luck
<kares> feels like I am missing something; hopefully I will get to it this week and see
<headius> hmm
<headius> many commits in jline between those versions?
<headius> maybe we can eyeball it and see something likely to cause problems with tty
<kares> headius: which versions?
<headius> jline versions
<kares> oh, you probably mean those two that jruby-readline ships
<headius> the one we ship and the newest
<headius> yeah
<kares> well many but they both behave kind of the same with regard to restoring stty
<kares> they both don'tits just that the old one does not set -icrnl
<kares> while the newer sets that
<kares> anyway I think we can get it working ... eventually
<enebo> kares: it is ok. Since the stty -icrnl code is in both versions, it seems something must just no longer be called in the new one
<enebo> the opposite case is maybe possible too, that the old version somehow never hits that line of code, but???
<kares> yeah that might be - will need to check logs
<kares> anyway, what if we did not change any stty settings at all?
<kares> not sure MRI does any of that
<kares> e.g. we do not need reverse search on ^S
<kares> we should expect to start with a fairly reasonable stty, right?
<kares> progress: ok so the restore works but now it restores to a fairly "bare" state - removes pretty much all of stty
<headius> heh
<headius> I will note that when I switched back to older readline, starting and quitting irb reset my stty more than I expected
<headius> back to what it was originally
<headius> I mean I thought it changed more than one setting
<kares> headius: okay, but do you feel like it's doing any setting change? it seems readline doesn't touch stty
<kares> just leaves it pretty much as is
<lopex> why is open addressing so much more fragile?
<lopex> is it the thresholds for reallocs?
<lopex> any race should always have been there in the old impl too
<kares> ah okay I see rl_deprep_terminal in readline.c
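[For context: rl_prep_terminal/rl_deprep_terminal are the readline.c pair that save and restore terminal modes around line editing. A minimal Ruby sketch of the same save/restore dance, using the stty(1) utility instead of readline's termios calls — `with_saved_stty` is a made-up helper name, not part of JRuby or readline, and on a non-tty stdin `stty -g` prints nothing, so the restore becomes a no-op:]

```ruby
# Hypothetical helper mimicking rl_prep_terminal/rl_deprep_terminal:
# capture the terminal state up front, restore it on the way out.
def with_saved_stty
  saved = `stty -g 2>/dev/null`.chomp  # serialize current terminal settings
  yield
ensure
  # restore exactly what we captured; skip if stdin was not a tty
  system("stty", saved) unless saved.nil? || saved.empty?
end
```

[Usage: `with_saved_stty { system("stty", "-icrnl") }` would leave the terminal exactly as it was, regardless of what happened inside the block.]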
<lopex> enebo: maybe the value array could be populated by nils too ?
<lopex> ok, I seem to jump ahead with that conclusion
<enebo> lopex: we probably replace the box for the cached entry but save it quickly as a local, so we see an old version of the entry...vs the new impl where we ask twice from the same primitive array
<lopex> enebo: reallocing/rehashing in the old impl is just as dangerous though
<lopex> enebo: so it's all by luck really
<lopex> and lower indirection as headius commented
<enebo> lopex: yeah well no one says old impl did not have issues either. It probably does explode there
<enebo> lopex: it is more that we see more errors with the new impl, and even if not more, they are different, and old code in libraries is seeing new exploding behavior
<lopex> yeah, I understand
<lopex> enebo: but it might occur on different load thresholds too
<lopex> enebo: the old impl has a load factor of much greater than 1
<lopex> so lots of collisions until realloced
<lopex> like > 5
<lopex> per bucket
<lopex> er, per index
<enebo> lopex: sure lots of properties are different and behavior when concurrent is different. It is to be expected.
<lopex> yeah, I understand that
shellac has quit [Quit: Computer has gone to sleep.]
<enebo> ChrisBr: we love the work you did so we will get it in again...we just are being more cautious for now
<lopex> and it's client code fault really
<headius> if only we didn't have users
<lopex> what kind of hashing TR uses ?
<enebo> lopex: good question
<lopex> looking
<lopex> yeah, I recall packet strategy too
<lopex> *packed
<lopex> ok, isBucketHash
<lopex> though they might cache things in ast
<lopex> though I'm confused about nextInLookup and nextInSequence
<lopex> enebo: anyways, ordered rpi3b+
<lopex> which isnt too bad
<enebo> lopex: I have raspi 2 and use it for pihole
shellac has joined #jruby
<lopex> enebo: does pihole work as expected ?
<enebo> lopex: well I have never disabled my ad blockers
<enebo> lopex: so I am not 100% sure
<enebo> lopex: but stats show quite a few blocked sites
<lopex> enebo: doest it have bind enabled too ?
<lopex> *does
<enebo> it is a DNS
<lopex> yeah
<enebo> so yes
<lopex> ah
<lopex> right
<lopex> by definition :P
<lopex> enebo: and it's configured through dhcp for the whole subnet ?
<lopex> sometimes I get things backwards :P
<lopex> enebo: I assumed it's via proxy
<lopex> enebo: what's the avg load when you surf the net ?
<enebo> ah yeah I don't know but I would assume low...I don't know how filtering really works but DNS is not a powerhouse of a process
<enebo> and I only have 2 machines using it as a DNS
<lopex> enebo: I assume it scans the html altogether right ?
<lopex> or just blocks the requests ?
<lopex> sometimes you need to remove things just like ublock does
<enebo> no, I think it just has whitelists for hosts...but I have not read up on it in a long time...I just spent like 10 months with a todo item to change the network it was on, which required ethernet and an HDMI connection
<lopex> hmm, ublock lists contain tree selectors too
<lopex> though it's more css based syntax
<lopex> or is it just css
shellac has quit [Quit: Computer has gone to sleep.]
travis-ci has joined #jruby
<travis-ci> jruby/jruby (revert-5412-delay_open_hash:be2e332 by Charles Oliver Nutter): The build failed. (https://travis-ci.org/jruby/jruby/builds/451041735)
travis-ci has left #jruby [#jruby]
travis-ci has joined #jruby
<travis-ci> jruby/jruby (master:44c1ddc by Charles Oliver Nutter): The build has errored. (https://travis-ci.org/jruby/jruby/builds/451040851)
travis-ci has left #jruby [#jruby]
enebo has quit [Ping timeout: 268 seconds]
enebo has joined #jruby
claudiuinberlin has joined #jruby
lanceball has joined #jruby
enebo has quit [Ping timeout: 272 seconds]
enebo has joined #jruby
claudiuinberlin has quit [Quit: Textual IRC Client: www.textualapp.com]
drbobbeaty has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<lopex> headius: so what's then with Enumerable#to_h ?
<lopex> oh, we don't know the size upfront
<enebo> lopex: ah thanks for that
<lopex> enebo: Array#to_h is new that's why I noticed
<enebo> lopex: we noticed Array#to_h has existed since 2.1 but you only just added it...makes sense all was OK because we have the Enumerable version
<enebo> lopex: so not new?
<lopex> enebo: I added it exactly because it's in MRI
<lopex> since 2.2 ?
<enebo> 2,1
<lopex> er, 2.1
<enebo> yeah
<lopex> enebo: and now people prefer to_h over Hash[]
<lopex> which is reasonable
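[The two spellings being compared, for reference — plain Ruby, both build the same Hash from an array of pairs; Array#to_h has the advantage that the receiver knows its own length upfront:]

```ruby
pairs = [[:a, 1], [:b, 2]]

Hash[pairs]   # the older idiom        # => {:a=>1, :b=>2}
pairs.to_h    # Array#to_h, since 2.1  # => {:a=>1, :b=>2}
```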
<lopex> enebo: I'd even go for new RubyHash(context.runtime, realLength) for bigger hash
<lopex> er, the condition is wrong
<lopex> enebo: do I see it right ?
<enebo> lopex: realLength of 0 causes divide by zero error
<lopex> RubyHash hash = useSmallHash ? RubyHash.newHash(runtime) : RubyHash.newSmallHash(runtime);
<enebo> if you merge it with a non-empty hash
<enebo> lopex: but that fix just uses a different constructor based on a helper
<lopex> enebo: it's the other way round
<enebo> lopex: at some point we need to have a more intelligent constructor in RubyHash itself so we do not duplicate logic like this...but we are working toward a stable release
<enebo> lopex: yeah I noticed that already
<enebo> lopex: it has been fixed
<enebo> lopex: interestingly the helper it came from did that too
<lopex> enebo: but I'd add that length helper anyways
<lopex> enebo: it will rehash unnecessarily
<enebo> which length helper?
<lopex> enebo: newHash with length
<enebo> possibly realLength / 2 right?
<lopex> the main purpose of to_h on Array is that we know the size
<lopex> no
<lopex> length * the load factor if at all
<enebo> oh, because it won't be a perfect hash?
<lopex> no, because it will reallocate while building that hash
<headius> hey didn't know we were chatting here :-)
<lopex> headius: we should use that length on non small hash
<enebo> lopex: I am only confused because realLength / 2 is the number of key/value pairs, so I am curious how many buckets you think it should have
<lopex> enebo: the hash will be the size of that array right ?
<headius> small hash []= will not resize
<headius> or rehash
<headius> that's why you use those methods with a "small" hash
<enebo> he is talking about large
<headius> large won't use small hash...I'm confused I guess?
<lopex> but on large it will rehash multiple times
<enebo> no he just means large constructor can better setup the right number of buckets since it knows the size
<headius> large gets constructed with something like 11 buckets by default
<lopex> YES
<headius> sure
<headius> more accurate sizing would be better, sure
<lopex> it is worse than array resizing
<headius> based on load factor I guess?
<headius> the small hash stuff could probably be done automatically too
<headius> like if buckets.length == 1 && size <= 10, don't rehash
<headius> above that it calculates load factor as normal
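[A toy model of the policy headius sketches here — hypothetical code, not JRuby's actual RubyHash: a single-bucket "small" hash skips rehashing up to 10 entries, and past that the usual density check (the ST_DEFAULT_MAX_DENSITY of 5 discussed below) applies:]

```ruby
MAX_DENSITY = 5  # entries per bucket tolerated before a resize

# Hypothetical predicate: should an insert trigger a resize/rehash?
def needs_rehash?(size, bucket_count)
  return false if bucket_count == 1 && size <= 10  # "small" hash fast path
  size > bucket_count * MAX_DENSITY                # normal density check
end
```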
<lopex> sure
<headius> sure
<lopex> for large size * load factor right ?
<headius> well / but yeah
<lopex> er
<headius> newHashWishSize(size) { buckets = size / LOAD_FACTOR ... } right?
<lopex> depending what we mean by load factor
<headius> With
<headius> well this is a constructor that you want to not rehash up to N entries
<lopex> java defaults to something like 0.7 ?
<headius> yeah 0.75
<enebo> size = realSize / 2?
<lopex> but now the load factor is > 1 if it's from my old code
<lopex> by default
<headius> private static final int ST_DEFAULT_MAX_DENSITY = 5;
<lopex> yep
<enebo> realSize / 2 * lf
<lopex> horrendous
<headius> so is this basically saying a load factor of 5?
<lopex> headius: which means that many collisions before resizing
<lopex> no
<lopex> :P
<enebo> heh
<headius> size / table.length > ST_DEFAULT_MAX_DENSITY
<lopex> it's from old mri code
<headius> so if the actual size is > 5x the length of the bucket table it proceeds to resize
<headius> so that would mean with an even distribution you'd have up to 5 chained entries in each bucket
<lopex> we should make it even lower than what's in Java since our equals is costlier
<lopex> yeah
<headius> and worst case N chained entries in one bucket for N size hash
<headius> load factor of 5 seems high :-)
<lopex> I'd opt for 0.5 even :P
<headius> right
<headius> for this design
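[The sizing arithmetic from the exchange above, spelled out. Assuming the resize rule really is `size / table.length > ST_DEFAULT_MAX_DENSITY`, accepting n entries with no intermediate rehash needs at least n / max_density buckets; `presized_buckets` is an illustrative helper, not JRuby code:]

```ruby
# Buckets needed so that n entries never cross the density threshold.
def presized_buckets(n, max_density)
  (n / max_density.to_f).ceil
end

presized_buckets(1000, 5)     # => 200,  old MRI-style density of 5
presized_buckets(1000, 0.75)  # => 1334, Java's default 0.75 load factor
```

[The contrast is the point of the discussion: a density of 5 allocates 1/7th the buckets of Java's default, at the cost of ~5 chained entries per bucket under even distribution.]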
<headius> but we still intend to land ChrisBr hash which does not have this concept exactly
<lopex> yeah
<headius> so
<lopex> so no biggier
<headius> I guess we leave this for 9.2.1
<lopex> *biggie
<lopex> yeah
<lopex> headius, enebo: so there was confusion between that load factor, equals costs, and hash caching
<lopex> all three
<lopex> in our discussions months earlier
<headius> sure
<headius> enebo: looks good, I'm going to merge
<enebo> headius: yeah cool
<headius> GO GO GO
<headius> I'm sure we won't find any more bugs!
<lopex> I want to break that match data thing again
<headius> break it!
<lopex> well, already done before
<lopex> I can revert the revert
<lopex> we don't reuse the matcher in so many cases now
travis-ci has joined #jruby
<travis-ci> jruby/jruby (master:d4e5bb6 by Charles Oliver Nutter): The build has errored. (https://travis-ci.org/jruby/jruby/builds/451111118)
travis-ci has left #jruby [#jruby]
travis-ci has joined #jruby
<travis-ci> jruby/jruby (master:d4e5bb6 by Charles Oliver Nutter): The build passed. (https://travis-ci.org/jruby/jruby/builds/451111118)
travis-ci has left #jruby [#jruby]
rdubya has quit [Ping timeout: 260 seconds]
jmalves_ has joined #jruby
jmalves has quit [Read error: Connection reset by peer]
shellac has joined #jruby