rdubya has joined #jruby
rdubya has quit [Ping timeout: 252 seconds]
shellac has quit [Quit: Computer has gone to sleep.]
shellac has joined #jruby
shellac has quit [Quit: Computer has gone to sleep.]
rdubya has joined #jruby
rdubya has quit [Ping timeout: 260 seconds]
enebo has quit [Read error: Connection reset by peer]
enebo has joined #jruby
enebo has quit [Read error: Connection reset by peer]
enebo has joined #jruby
enebo has quit [Read error: Connection reset by peer]
enebo has joined #jruby
emerson has quit [Remote host closed the connection]
bga57 has quit [Quit: Leaving.]
bga57 has joined #jruby
rdubya has joined #jruby
rdubya has quit [Ping timeout: 252 seconds]
_whitelogger has joined #jruby
_whitelogger has joined #jruby
shellac has joined #jruby
shellac has quit [Quit: Computer has gone to sleep.]
shellac has joined #jruby
shellac has quit [Quit: Computer has gone to sleep.]
shellac has joined #jruby
shellac has quit [Client Quit]
_whitelogger has joined #jruby
drbobbeaty has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
drbobbeaty has joined #jruby
rdubya has joined #jruby
nelsnelson has quit [Quit: nelsnelson]
enebo has quit [Read error: Connection reset by peer]
slyphon has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
enebo has joined #jruby
slyphon has joined #jruby
<headius> another day, another last-minute fix
slyphon has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<lopex> like approaching an asymptote
<enebo> zenos release paradox
<enebo> xenos? spelling
<lopex> like xalan ?
<lopex> and xerces was like gzeerces ?
<enebo> well it is zeno's ... so I happened to be right
<lopex> wrt sax this saved one of our projects https://github.com/jycr/annotation-xpath-sax
<lopex> you use xpath subset and get sax performance
<lopex> it generates state machine under the hood
<enebo> lopex: neat
<lopex> and it's all from system I IBM MQSeries messages
<lopex> we even peek at first bytes to determine if it's zip or gzip, since producers are not consistent
<lopex> enebo: hundreds of megs of compressed xml
<lopex> and it all goes lazily in a java process with 100M Xmx
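(The magic-byte peek lopex describes can be sketched like this; the class and method names are illustrative, not from his project. Zip archives start with "PK" (0x50 0x4B) and gzip streams with 0x1F 0x8B, so two bytes are enough to decide:)

```java
import java.io.IOException;
import java.io.PushbackInputStream;

// Sketch of sniffing a compressed stream by its first two bytes:
// zip begins with "PK" (0x50 0x4B), gzip with 0x1F 0x8B. The bytes
// are pushed back so the caller still reads the stream from the start.
// The PushbackInputStream must be built with a pushback buffer >= 2
// (the default is 1).
public class CompressionSniffer {
    public enum Format { ZIP, GZIP, UNKNOWN }

    public static Format sniff(PushbackInputStream in) throws IOException {
        int b0 = in.read();
        int b1 = in.read();
        // Restore the stream in reverse order of reading.
        if (b1 != -1) in.unread(b1);
        if (b0 != -1) in.unread(b0);
        if (b0 == 0x50 && b1 == 0x4B) return Format.ZIP;
        if (b0 == 0x1F && b1 == 0x8B) return Format.GZIP;
        return Format.UNKNOWN;
    }
}
```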
<enebo> you do some big data stuff now on ibm
<lopex> no
<enebo> or have you been for a long time
<lopex> just nation wide data
<lopex> and it's not me, I just made this prototype, since the original was written in c++ and used dom
<lopex> we went from days to minutes with this
<lopex> it could easily have been done with jruby I think
<lopex> but we cared about smallest footprint since this subsystem has limited cpu time
<lopex> and mem
travis-ci has joined #jruby
<travis-ci> jruby/jruby (master:21ed843 by Thomas E. Enebo): The build was broken. (https://travis-ci.org/jruby/jruby/builds/451424376)
travis-ci has left #jruby [#jruby]
slyphon has joined #jruby
slyphon has quit [Client Quit]
Osho has quit [Ping timeout: 240 seconds]
Osho has joined #jruby
travis-ci has joined #jruby
<travis-ci> jruby/jruby (9.2.1.0:21ed843 by Thomas E. Enebo): The build passed. (https://travis-ci.org/jruby/jruby/builds/451424447)
travis-ci has left #jruby [#jruby]
emerson has joined #jruby
xardion has quit [Remote host closed the connection]
xardion has joined #jruby
<headius> woot
<lopex> the pass has been built
<ChrisBr> headius: hee, saw the revert
<ChrisBr> is there anything you have in mind for making the concurrent access work?
rdubya has quit [Ping timeout: 276 seconds]
travis-ci has joined #jruby
<travis-ci> jruby/jruby (9.2.1.0:7b14404 by Charles Oliver Nutter): The build passed. (https://travis-ci.org/jruby/jruby/builds/451439387)
travis-ci has left #jruby [#jruby]
rdubya has joined #jruby
<headius> but of course
<headius> ChrisBr: I think we should be able to make at least parts of updating atomic
<headius> packing start and end into a single long might be a first change I make, so both can be updated at once
<headius> if we can get it down to just a couple pieces of state the atomic logic shouldn't be too complex
<lopex> long update is guaranteed to be atomic on 64bit ?
<lopex> on 32 it's not
<headius> on 64bit native I believe so
<headius> on 32 it is guaranteed if you are using atomics, I believe
<headius> "Writes and reads of volatile long and double values are always atomic"
<headius> so not even atomic
<headius> just volatile
<lopex> headius: there's similar trick when passing cr and length both in long for String
<lopex> but that's just so we can return multiple values at once
<headius> right
<headius> we can be doing this more places
<headius> I'm still of a mind that we could make Array, String, and Hash largely thread-safe with atomics
<headius> at least internally consistent
<lopex> are java.util.concurrent.atomic intrinsified and possibly more aggressively allocation eliminated ?
subbu is now known as subbu|afk
<lopex> because it's just semantics right ?
<headius> that I'm not sure about
<headius> I generally end up using either atomic updaters or unsafe methods directly because of the wrapper
<headius> we definitely don't want a bunch of Atomic* objects for all of these data fields
<headius> they might fold away fine with good escape analysis but if they have to stay in memory they have to stay in memory
shellac has joined #jruby
<ChrisBr> headius: what do you mean with single long?
<ChrisBr> anything I can help out ?
<headius> start and end are ints, so they could be high/low 4 bytes of a Java long
<headius> then we'd be able to update start and end atomically with a single write
<headius> the atomic sequence for such an update would be like...
<ChrisBr> hmm right
<headius> get 64-bit value, split out 32-bit values, fiddle with start and end, re-combine as 64-bit, confirm it hasn't changed before updating (compare and swap)
<ChrisBr> hm smart
<ChrisBr> so one operation less then?
<headius> one less operation and no chance of someone else reading half of it
<headius> I think the NPE we were seeing was likely because someone got their start but someone else's end
<headius> or vice versa
<headius> someone = thread
<ChrisBr> ahh
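(The packing headius outlines can be sketched as below: start in the high 32 bits, end in the low 32 bits of one long, updated with a single compare-and-swap so a reader can never observe one thread's start paired with another thread's end. Field and method names here are illustrative, not JRuby's actual code:)

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: two ints packed into one AtomicLong, following the CAS
// sequence described above: read the 64-bit value, split out the two
// 32-bit halves, fiddle with them, recombine, and compareAndSet so
// the write only lands if nobody changed the value in between.
public class Extents {
    private final AtomicLong extents = new AtomicLong(0L);

    public static long pack(int start, int end) {
        return ((long) start << 32) | (end & 0xFFFFFFFFL);
    }

    public static int start(long packed) { return (int) (packed >>> 32); }
    public static int end(long packed)   { return (int) packed; }

    // Retry loop: both halves are updated in one atomic write.
    public void grow(int delta) {
        long old, updated;
        do {
            old = extents.get();
            updated = pack(start(old), end(old) + delta);
        } while (!extents.compareAndSet(old, updated));
    }

    public int start() { return start(extents.get()); }
    public int end()   { return end(extents.get()); }
}
```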
<headius> ChrisBr: a short-term thing we could start to do would be to encapsulate as much as possible all accesses to those fields
<headius> you mostly have that in the internal* impls
<headius> but anything we can do to make it smaller
<headius> we want as few reads/writes for everything, ideally down to a point where we can guarantee we can have valid start/end/bins/entries in hand
subbu|afk is now known as subbu
<lopex> with array memory model you might hope for it
<headius> right
<headius> I have not figured out all the bits we need to do but this structure seems easier to make thread-safe than the chained buckets
<headius> chained buckets at a minimum would have required multiple atomic updates or lock pairs
<headius> messy
<lopex> chained buckets give an illusion since they're more consistent internally themselves
<lopex> is that a right way to put it ?
<ChrisBr> ok, so if you have a more concrete way in mind let me know. I'm a little lost how to go forward...
<headius> a good way to put it in fact
<headius> lopex: frequently we'd see concurrency failure in Hash as a cycle in one of the buckets rather than a crash
<headius> arguably that's worse than a crash
<lopex> yeah
<lopex> the illusion being hash, key being final, and no arrays right ?
<headius> chained buckets seemed more robust concurrency-wise but it has also had more attention to properly ordering reads and writes
<headius> yeah
<headius> well that and the buckets being *actually* separate memory locations
<headius> so multiple threads have to get lucky and either be changing bucket array size or both working on same slot
<lopex> but double linked list makes it even worse
<headius> open addressing has no buckets so all threads are working against the same array
<lopex> so realloc is more critical
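(A minimal linear-probing table makes the contrast concrete: there are no bucket chains, so every thread probes the same flat arrays, which is why the whole structure has to be made consistent at once. This is only an illustration of the technique, not JRuby's implementation, and it is not thread-safe; it also assumes the table is never full:)

```java
// Minimal open-addressing hash with linear probing. On a collision
// the lookup "hunts for the next bin" by stepping through the same
// array; capacity is a power of two so the modulo is a mask.
public class OpenTable {
    private final Object[] keys;
    private final Object[] values;

    public OpenTable(int capacityPowerOfTwo) {
        keys = new Object[capacityPowerOfTwo];
        values = new Object[capacityPowerOfTwo];
    }

    private int bucket(Object key) {
        return key.hashCode() & (keys.length - 1);
    }

    public void put(Object key, Object value) {
        int i = bucket(key);
        // Probe until we find the key or an empty bin (assumes never full).
        while (keys[i] != null && !keys[i].equals(key)) {
            i = (i + 1) & (keys.length - 1);
        }
        keys[i] = key;
        values[i] = value;
    }

    public Object get(Object key) {
        int i = bucket(key);
        while (keys[i] != null) {
            if (keys[i].equals(key)) return values[i];
            i = (i + 1) & (keys.length - 1);
        }
        return null;
    }
}
```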
<headius> we need a better way to separate storage strategy from methods
<headius> like specialized arrays, alternate hash impl strategies, etc
<lopex> headius: I think with that fast encoding versions idea we might solve quite a bit actually
<lopex> headius: bad thing is that there's lots of by reference comparison spread throughout the code
<lopex> but
<lopex> if we know cr is valid we may pass fast encoding for matching
<lopex> if not, then we pass approximate version
<lopex> so that fast version doesnt leak back into jruby runtime
<lopex> in case you got a minute :P
<lopex> so, we might get two free lunches
<lopex> cr known / valid, we pass fast version, cr unknown invalid, we pass approx version
<lopex> we wont blow up on broken strings, and we will be faster for the general case
<lopex> otherwise, can we statically find 'by reference' encoding comparisons ?
<lopex> they dont end up Object anywhere I would hope
shellac has quit [Quit: Computer has gone to sleep.]
<headius> The reference compares shouldn't hurt though right? Just prevent some optimizations
subbu is now known as subbu|lunch
<lopex> the fast encodings will be different instances
<lopex> so there two options
<lopex> simpler: pass faster encoding from core methods to the actual routines for length etc, and always keep base encoding in String instances for example
<lopex> second, keep specialized encodings in core objects, but provide safe comparisons
<lopex> like enc_a.getBase() == enc_b.getBase()
<lopex> or
<lopex> er, that's it
<lopex> headius: is that clear ?
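(lopex's second option can be sketched as below. The class here is a simplified stand-in, not jcodings' real Encoding API: specialized fast/approx encodings are distinct instances, so raw `==` comparisons break, but comparing through a shared base stays safe:)

```java
// Stand-in for an encoding type where specialized variants (fast,
// approximate) are separate instances derived from one base encoding.
// Identity comparison of instances is replaced by comparing bases.
public class Enc {
    private final Enc base; // null when this instance *is* the base

    public Enc() { this.base = null; }
    public Enc(Enc base) { this.base = base; }

    public Enc getBase() { return base == null ? this : base; }

    // The safe comparison lopex proposes instead of enc_a == enc_b.
    public boolean sameAs(Enc other) {
        return this.getBase() == other.getBase();
    }
}
```

With this, a fast UTF-8 variant still compares equal to plain UTF-8 (`utf8Fast.sameAs(utf8)`) even though `utf8Fast != utf8`, so the fast instance cannot leak wrong answers back into reference comparisons.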
ChanServ changed the topic of #jruby to: Get 9.2.1.0! http://jruby.org/ | http://wiki.jruby.org | http://logs.jruby.org/jruby/ | http://bugs.jruby.org | Paste at http://gist.github.com
<headius> hmmm
<headius> that seems ok to me I think
<headius> updates might trigger recalc of encoding but they do that now
<headius> I mean the second option seems ok
<headius> I agree the equality compares should go away in favor of something a little less type-system brittle
<headius> instanceof would be slightly less gross, proper comparison methods would be much cleaner
<headius> like an optimized UTF8 encoding that knows it's all 7-bit ASCII could claim it's fine being treated as US-ASCII
<headius> that's a contrived example but you know what I mean
<lopex> yep
<lopex> there's a lot to win specializing length, mbcToCode etc
<lopex> and potentially jumping through all 4 utf8 tables doesnt do length any good either
<lopex> lots of cache pollution
<lopex> but I'm more on specializing encodings for valid strings
<lopex> and of course utf8 is the elephant in the room
<headius> right I can see the benefits
<headius> it would be worth pondering what a UTF-8 specific future might look like too
<headius> I think it's all but decided that Ruby 3 is going to end up doing what we said they should have done all along
<headius> they've already made UTF-8 default internal on all platforms
<headius> Place your bets on first 9.2.1 bug report
<headius> hah
<headius> I told enebo it would be 30 min about 5 min ago
<lopex> still
<lopex> having faster utf8 walks will pay off
<lopex> and it's not just length
<lopex> headius: first I'd try simpler thing, pass fast instances to joni as a test
<headius> seems reasonable
<lopex> so we wont pollute jruby runtime with them
<lopex> and this joni issue is ideal candidate
<enebo> lopex: don't forget caching char length would help for fast paths
<lopex> enebo, headius and especially this https://github.com/jruby/joni/issues/17
<lopex> we will have much more freedom
<lopex> enebo: I wont change RubyString
<lopex> at first
<lopex> enebo: but you also see the benefits right ?
<enebo> I think so yeah
<lopex> length etc are already megamorphic, so there's nothing to lose
<enebo> my suggestion I guess is another opt layer on top...if you know char length it is valid and then also obvious whether it is 7bit or not
<lopex> enebo: I guess, I more on another aspect of this
<lopex> enebo: two utf8 encodings instances
<lopex> one for approx, and one completely unsafe
<lopex> approx will never return negative values
<lopex> if char is broken, return 1
<lopex> just to advance
<lopex> most of strings will use fast one anyways
<lopex> and not the slow, blowing-up one as we do now
<lopex> enebo: right?
<enebo> lopex: seems reasonable
subbu|lunch is now known as subbu
<lopex> enebo: and for approx one, I'd do the validation with bitwise algos
<lopex> this is insane
<headius> yeah I'm sure that's great for CPU caches
<lopex> also, bitpop etc are intrinsified right ?
<lopex> or bitcount, whatever it's called
<headius> they should be
<lopex> we dont have c freedom though
<enebo> lopex: this reminds me the oj port has some table lookups for characters and I need to make it use our stuff
<enebo> lopex: but I won't look at doing that until it is green
<lopex> enebo: yeah, I even looked at oj c code
<lopex> enebo: so much encoding duplication too
<lopex> though, funny enough, mri could match against invalid strings
<lopex> at this point
<lopex> onigmo will behave correctly
<lopex> so there's two lost opportunities
<lopex> one, matching with invalid encoding (which would work on mri now)
<lopex> two, mri always goes slowest path possible wrt length logic
<lopex> how did I miss the release ?
<enebo> lopex: especially when I updated the topic in this channel as you were talking
<lopex> yeah
<enebo> lopex: and then headius making a bet if we get issue reported in 30 minutes
<headius> I think I lost
<lopex> I know why, headius responded just after the topic had been changed
<enebo> headius: it is tough to know how fast someone will jump on the horse
<headius> yeah
<enebo> This was more likely than not since it was such a long time
<headius> enebo: I'm going to make an open addressing branch
<enebo> ok
<headius> we'll work out with ChrisBr how to collab
<lopex> the bet is still on ?
<headius> I'm about 15 min over my 30 bet
<enebo> unless we change it to metric minutes
<lopex> or disallow issues!
<enebo> lopex: sounds like a star trek life hack
<lopex> enebo: what was that ?
<enebo> lopex: just Kirk always found the novel way around the situation (well not really he usually just did some horrible fighting scene)
<lopex> enebo: like plot armor ?
<headius> jruby date 2394658.5
<ChrisBr> headius: enebo: congrats on the release
<enebo> lopex: plot armor is always in play
<enebo> ChrisBr: thanks...it is good to have it done
<headius> ChrisBr: thank you!
<enebo> we just are wondering if we will see anything needing immediate attention now
<enebo> so far issue tracker is quiet...too quiet
<lopex> well, still having that bundler issue on windows
<lopex> but only for update
<lopex> install works
<headius> ChrisBr: hey something I could use your help with short term
<headius> so the old concept of a "small" hash is no longer there because it didn't fit open addressing
<headius> but I notice in places where we were creating a small hash before we're now creating one with room for 8 entries minimum
<headius> there are tons of 1- and 2-element hashes obviously for kwargs and such...perhaps we can reinstate some concept of "small" that starts with a smaller entries array?
<headius> I'm going to look at combining end and start since that's a little fiddly
<lopex> or even specialize it like you did for arrays ?
jmalves has joined #jruby
jmalves_ has quit [Read error: Connection reset by peer]
<lopex> but then, you'd have to use that same interface consistently
<headius> lopex: absolutely
<headius> well so this is one thing Truffle gives those languages
<headius> they have this amorphous "DynamicObject" that they can mess with internally and then they can apply any storage strategy to it
<lopex> yeah, I remember
<headius> I'm starting to get there with RubyObject specialization
<headius> that plus VariableTableManager can be expanded to allow per-allocation shapes
<headius> the whole thing could be merged with Struct and Array to specialize those the same way
<headius> and then Hash could come along, although it's a bit more complicated obviously
<lopex> yeah, hashes could in principle be optimized the same way objects are
<lopex> and ivars
<lopex> hidden hashes ?
shellac has joined #jruby
jmalves has quit [Ping timeout: 252 seconds]
<headius> there
<headius> there's the PR to restore open addressing
<headius> for now I suppose we can work with side-PRs to this branch, ChrisBr
<headius> have chatted a bit with enebo about opening up commit access to non-release branches
<headius> I have been emphasizing PRs generally
<headius> would also like to chat about the "generation" field
<headius> it seems to be used only for "clear" but a generation field could simplify some concurrency stuff
<headius> i.e. check generation, prepare modifications, CAS generation and mods
<headius> wish JDK had some transactional memory
<lopex> I wonder if EMPTY_BIN could be zero-valued
<lopex> we lose only one index then, right ?
<headius> every bit helps
<lopex> headius: also, in st.c theres st_init_table_with_size
<lopex> we want that for to_h and Hash#[]
<lopex> at least
<headius> yeah that's what I'm thinking of above
<headius> and methods to create a perfect Hash for N elements
<lopex> perfect in what sense ?
<lopex> like offline like gperf does ?
<lopex> ah
<lopex> I get it
<lopex> you'd need the hash values and some size right ?
travis-ci has joined #jruby
<travis-ci> jruby/jruby (restore_open_hash:494d1c7 by Charles Oliver Nutter): The build passed. (https://travis-ci.org/jruby/jruby/builds/451589089)
travis-ci has left #jruby [#jruby]
drbobbeaty has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<lopex> headius: but why though ?
<lopex> you can always slide like in cuckoo hashing
<lopex> or am I missing something
<headius> lopex: at least a size
<headius> not necessarily Perfect but I mean perfect sizing
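(The "perfect sizing" idea, i.e. an analog of st.c's st_init_table_with_size, can be sketched as below: given an expected entry count and a load factor, pick a power-of-two bin count up front so building a Hash of N known pairs never rehashes. The constants here are illustrative, not JRuby's actual values:)

```java
// Sketch of sizing an open-addressed table for a known entry count.
// Bin count stays a power of two so index computation is a mask, and
// load factor headroom is reserved so no resize happens during fill.
public class HashSizing {
    static final float LOAD_FACTOR = 0.5f; // illustrative value
    static final int MIN_BINS = 8;

    public static int binsFor(int expectedEntries) {
        int needed = (int) Math.ceil(expectedEntries / LOAD_FACTOR);
        int bins = MIN_BINS;
        while (bins < needed) bins <<= 1;
        return bins;
    }
}
```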
<headius> perfect hashing is kinda orthogonal to open addressing?
<lopex> headius: from what I know perfect is guaranteed absence of collisions for given keyset
<lopex> just like gperf does
<headius> right
<lopex> so hashing function becomes a variable
<lopex> and size of course
<headius> so I guess that's not totally orthogonal...open addressing still needs to hunt for the next bin if there's a collision
<lopex> two free variables
<headius> pushed a first commit to open hash work
<lopex> yeah, and cuckoo is just a strategy of collision
<headius> mostly just reducing access to what should be invariant state
<headius> but I should be able to pack start and end now
<lopex> ah, the packing
<lopex> the sign is irrelevant
<lopex> or is it ?
<headius> if we knew start and end were somewhat smaller in range we might be able to pack them tighter
<headius> yeah sign can be dropped
<headius> so really 31 bits each
<headius> hmm
<headius> I just realized one problem with open addressing on JDK...we can't fit as many entries as we could before
<lopex> yeah plus that long mask comes in play
<headius> not that anyone was actually able to insert 2B entries into the old Hash impl but now they'd only be able to do 1B
<headius> I'll wait for someone to file a bug.
<lopex> yeah, in buckets
<lopex> righto
<lopex> havent thought about that either
<headius> yeah
<headius> I wish we could assume Java 9+ and use varhandles
<lopex> well, we could fall back to some kind of tree-based set
<headius> or make our own array impl with Unsafe :-D
<lopex> whatever
<headius> though I guess no matter what we still need GC involved
<headius> can't start shoving object references into a native array
<headius> yeah this comes back to wanting to be able to swap strategies for storage on a per-instance basis
<lopex> yeah
<headius> if you are using some enormous hash this is probably not the optimal design
<lopex> no plans for expanding arrays for jdk ?
<lopex> in whatever annoying ways
<lopex> and given load factor the array will be even larger
<lopex> since load factor also applies to open addressing
<lopex> headius: we're in the process of converting a large system I db2 to utf8 now here
shellac has quit [Quit: Computer has gone to sleep.]
<lopex> headius: since till now, name and forename were like char(20) and char(40)
shellac has joined #jruby
<lopex> and I realized nobody here knows how utf8 works
<lopex> so we scanned the db with max(byte_length(col) - char_length(col)) for every non clob string col in the db
<lopex> the results were staggering
<lopex> in real world data there are some trash values like ąąąąąą in name etc
<lopex> so you'd have to double the col size
<lopex> so then there is varchar with alloc
<lopex> so varchar(max) with fixed width reserved alloc size
<lopex> headius: 99% of polish forenames fit in 16 bytes
<lopex> so I wonder
<lopex> how smart db has to be to be able to index varchar with fixed col width (plus additional max stored outside the table)
<lopex> and all that with utf8
<lopex> headius: ^^
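(The per-value quantity behind lopex's scan, byte_length minus char_length, is exactly the extra width a utf8 column needs beyond its char(N) size, and can be reproduced in Java; the class name is illustrative:)

```java
import java.nio.charset.StandardCharsets;

// UTF-8 byte overhead of a string: encoded byte length minus the
// number of characters (code points). For ASCII it is 0; for polish
// text like "ąąąąąą" every letter costs one extra byte.
public class Utf8Overhead {
    public static int overhead(String s) {
        int bytes = s.getBytes(StandardCharsets.UTF_8).length;
        int chars = s.codePointCount(0, s.length());
        return bytes - chars;
    }
}
```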
<headius> oops, seems I botched something
<headius> there have been array improvement plans for years but it's a hard problem
<lopex> you mean for dbs ?
<headius> well I mean size > 31b, volatile/atomic access,
<headius> etc
<lopex> ah
<lopex> I can imagine two main strategies for indexing utf8, one is rope based the other is char index based
travis-ci has joined #jruby
<travis-ci> jruby/jruby (restore_open_hash:173b05f by Charles Oliver Nutter): The build was broken. (https://travis-ci.org/jruby/jruby/builds/451606743)
travis-ci has left #jruby [#jruby]
<lopex> but what strikes me the most is that most of IBM based envs in europe is still pinned down to ebcdic
<lopex> headius: did I tell you that I was able to read ebcdic from as/400 directly, even with bad data ?
<lopex> with jruby
<lopex> plain ebcdic files from the system going through jcodings transcoding
<headius> ahh there it is
<headius> found my bug
<headius> lopex: I assume it was completely garbled
<headius> I almost want to add EBCDIC support just to say we have it
<lopex> headius: no, it was a binary record
<headius> and because I work at IBM now
<headius> "now"
<lopex> so I saw some fields in text
<lopex> hah
<lopex> headius: IBM for ages had specialized filesystem features
<lopex> and many of them are flat systems
<headius> yeah
<lopex> indexed system files
<headius> that was always a special feature of IBM hardware
<lopex> yeah
<headius> storage IO would never be your bottleneck
<lopex> yeah
<headius> weird filesystems though
<lopex> headius: they always had a joke in their heads, about how many refrigerators you'd be able to transfer with a lamborghini and a truck
jeremyevans has joined #jruby
<lopex> but IO is their strongest thing
<lopex> and timi
<lopex> and it's from the 80ies
<lopex> headius: back 15 years ago or something, our dba's were mostly operating on disk tracks and sectors to optimally lay tablespaces for db2
<jeremyevans> jruby-dist-9.2.1.0-bin.tar.gz ships with some gems (e.g. rake), but without the gem lib/exe files. so jruby -S rake results in LoadError: no such file to load -- /usr/local/jruby/lib/ruby/gems/shared/gems/rake-12.3.0/exe/rake
<lopex> and defrag was a manual thing
<headius> jeremyevans: NOOO
<headius> enebo: NOOO
<lopex> headius: more than 30 mins
<lopex> no excuses
<headius> enebo: I thought you verified against dist tarball
<headius> maybe did not notice because rails immediately installs rake
<enebo> I only test installer on windows
<headius> enebo: which should be based on dist zip right
<headius> enebo: 9.2.2? :-D
<jeremyevans> headius: so far I only tested by updating the OpenBSD port
<enebo> Probably I don't remember
<enebo> jruby-dist should be jruby-bin and I test that on linux
<headius> jeremyevans: thank you...we've had a number of issues over the past week that made me realize we need some verification CI against the dist tarballs
<jeremyevans> but tar ztf shows files like MIT-LICENSE and such, but no lib/exe directories in the gem folder
<headius> jeremyevans: can you toss this into a bug please
<headius> I'm walking out the door
<jeremyevans> headius: sure, just wanted to check here first. Thanks!
<headius> it was supposed to be fixed in this release but I guess we didn't get all the files
<headius> sigh
<headius> enebo: this would be a trivial thing to spin for a 9.2.2
<headius> jeremyevans: workaround of reinstalling rake works yah?
<jeremyevans> headius: it should, but I haven't tested that yet
<enebo> yeah on linux I expand -bin which is just -dist and I ran rails/gauntlet
<lopex> enebo: and no clues about that bundler issue on windows ?
<lopex> enebo: I could assist
<enebo> lopex: if you can find out something other than admin that would be great
<lopex> enebo: but something must have changed in jruby
<enebo> jeremyevans: I guess one of my verify scripts is gem install rails as first thing which pulls in what is missing
<lopex> enebo: do you recall what was it ?
<enebo> lopex: no I assumed you always had to be admin so the fact you can get it to sometimes work as non-admin is confusing...i don't know
<lopex> enebo: and that is false as far as I can tell
<enebo> lopex: ok well yeah figuring it out would be very helpful
<lopex> enebo: I have two machines that are consistent with that regression
<lopex> and no changes user wise
travis-ci has joined #jruby
<travis-ci> jruby/jruby (restore_open_hash:4b6d2ea by Charles Oliver Nutter): The build has errored. (https://travis-ci.org/jruby/jruby/builds/451617594)
travis-ci has left #jruby [#jruby]
<enebo> This is inscrutable thus far...I see what I think is the code doing it but I don't see any pattern which would cause it
<lopex> enebo: you're responding to that bundler issue ?
<enebo> no I am trying to figure out the packaging issue
<enebo> looks like lib/core.rb is not including much of rake and probably other gems
<lopex> ah
<enebo> lopex: but if you can figure out what is wrong with the permissions on windows that would be super nice
<lopex> enebo: I'm not a windows expert, but I'll try
<lopex> enebo: and I know it's not an AD thing
<lopex> so local
<enebo> lopex: ok that was a misunderstanding on my part
<lopex> enebo: if only there was strace for windows ?
<enebo> lopex: if you are convinced this is a regression just bisecting it would help but I am not sure how you would do that
<lopex> enebo: huh
<lopex> enebo: that would go back to previous release
<lopex> but yeah
<lopex> enebo: worst case is third party
<enebo> "gems/#{bgem[0]}*#{bgem[1]}/*",
<enebo> maybe this is it?
<lopex> might be
shellac has quit [Quit: Computer has gone to sleep.]
<enebo> it appears to be it