<headius>
they're used to manipulate the JVM's nanoTime value
<headius>
well, at least JRuby proper doesn't pass them to C
<ebarrett>
headius: oh really?
<headius>
really!
<ebarrett>
in that case I can't use this
<headius>
that's right!
<headius>
:-)
<ebarrett>
yay
<headius>
add 'em to jnr-constants and we can regenerate on whatever platform
<headius>
do the fixme :-)
<ebarrett>
i was unable to find a way to read the MONOTONIC_RAW clock with nanoTime
<headius>
probably why I didn't add it
<ebarrett>
i see
<chrisseaton>
how about we call your native function through FFI?
<ebarrett>
sure
<ebarrett>
i have a shared object which exposes a function that does exactly this
<ebarrett>
In java benchmarks I call it using JNI
<ebarrett>
can we do something similar?
<chrisseaton>
Yeah we can call it using JNR
<ebarrett>
can you show me how?
<chrisseaton>
Let me know the function signature, the library name, and how you'd expect calling it in Ruby to look and I'll see if I can hook those up
<ebarrett>
(sorry to be a pain)
<chrisseaton>
It's a bit all over the place so best give me those details and I can give you a basic patch to tweak from
<chrisseaton>
shame about having to patch - makes it all more delicate
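A minimal sketch of what that hookup might look like from the Ruby side using the ffi gem; the library name and function signature below are hypothetical stand-ins for whatever ebarrett's shared object actually exposes:

    require 'ffi'

    module RawClock
      extend FFI::Library
      ffi_lib 'monotonic_raw'                         # hypothetical library name
      attach_function :monotonic_raw_ns, [], :uint64  # hypothetical signature: raw clock in ns
    end

    t0 = RawClock.monotonic_raw_ns
    # ... code under measurement ...
    t1 = RawClock.monotonic_raw_ns
    puts t1 - t0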
<eregon>
chrisseaton: looks like defining Process::CLOCK_MONOTONIC_RAW would fix compat and solve this more simply, no?
<chrisseaton>
eregon: if you've got an idea of how to do that yeah
<chrisseaton>
but this solution also means literally the same native code is run for the clock for each vm
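For comparison, a sketch of the compat route eregon suggests: MRI on Linux already defines the constant, so once JRuby defines it too, plain standard-library code like this runs unchanged on both:

    t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC_RAW, :nanosecond)
    # ... code under measurement ...
    t1 = Process.clock_gettime(Process::CLOCK_MONOTONIC_RAW, :nanosecond)
    puts t1 - t0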
<ebarrett>
chrisseaton: thanks
<chrisseaton>
np
<enebo>
jsvd: you saw 15 minutes but are you sure you did not have vmmstat running?
<jsvd>
enebo: doing a couple more runs to check
<jsvd>
enebo: with visualvm is < 2 minutes, sometimes as soon as I open the process in visualvm
<enebo>
jsvd: I have been running a tight stat loop on an existing dir with both visualvm and vmmstat for 15 minutes now with no crash but I am wondering if there is some race in tooling
<enebo>
jsvd: it could still be us not cleaning something up but if that is the case it is slowly building up in windows kernel or something
<jsvd>
I just have no idea how to get more debug data on this, windows just says "problem! stopped! fixed."
<enebo>
jsvd: the other thing I can try is to take pressure off of GetFileAttributesEx by using byte[] as signature which I believe will not do any of this page allocation
<headius>
hello again
<headius>
chrisseaton: probably picking up from the shebang in that case
<enebo>
jsvd: I am also using Java 7 u40 :)
<enebo>
jsvd: perhaps if I jump to Java 8 this will crash more readily
<jsvd>
C:\Users\Jsvd>java -version
<jsvd>
java version "1.8.0_65"
<jsvd>
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
<jsvd>
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
<jsvd>
enebo: ok I started a tight loop 5 minutes ago
<jsvd>
just crashed...actually when I did java -version
<enebo>
jsvd: I did get a single crash but it had to be way over an hour
<enebo>
heh
<jsvd>
what the hell
<jsvd>
that might have been a coincidence
<enebo>
jsvd: did you have anything connected to it that time?
<jsvd>
enebo: nothing
<enebo>
jsvd: and only 2 minutes
<enebo>
hmm
<enebo>
oh 5 minutes but still
<jsvd>
restarting the machine and testing again
<enebo>
ok yeah I am also wondering if I have only been running with vmmstat connected
<enebo>
possibly I am slowing down stats somehow and not crashing quickly
<jsvd>
with visualvm it actually sped up
<enebo>
jsvd: yeah I just did a 20 minute run but both were connected (although for the first 5 minutes only visualvm was)
<jsvd>
and nothing?
<enebo>
it is still running
* jsvd
cries
<enebo>
worst case we go back to _wstat64 and not use windows apis for 1.7.23
<enebo>
but I would like to solve this since people want long path names and UNC path support
<jsvd>
enebo: doing a few more runs. currently a fresh restarted windows 7, 'bin\jruby -e " loop { File.stat('bin') }"', nothing attached
<enebo>
jsvd: ok I also realized I am testing with 9k master (which should not really matter for this since they both are hitting the same entry point into jnr-posix) but I will run with the same snapshot
<jsvd>
enebo: after 15 min on a fresh reboot still nothing. I have to leave for 45 min, will check when I get back
<enebo>
jsvd: ok. I have had issues with windows using up some internal resource from me debugging and then it is in a shit state
<enebo>
jsvd: like something in the kernel does not figure out something has crashed and is holding a reference
<enebo>
jsvd: but that was on some GDI stuff not something as low as FS
<enebo>
jsvd: just crashed for me now
<enebo>
jsvd: so about an hour of running :|
<enebo>
I wonder how many stats we do and if they get slower over time
<enebo>
heh wow I have only been running this like 2 minutes and it has done 4 million stats
<jsvd>
enebo: well it was crashed when I came back
<jsvd>
enebo: so took at most 50 minutes
<enebo>
jsvd: I am going to switch to byte[] and build the WString contents on the windows side to avoid these extra windows memory allocations
<enebo>
at 84 million stats so far
<jsvd>
enebo: sounds good.
<jsvd>
maybe timing every 10k, to check if they're getting slower
<enebo>
jsvd: yeah I can add that but eyeballing it so far, the stream of printouts every 10k seems about the same speed as when it started
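A sketch of the torture loop with jsvd's per-10k timing folded in; it matches the counting approach enebo describes later (increment a var, print every 10k):

    count = 0
    last = Time.now
    loop do
      File.stat('bin')
      count += 1
      if count % 10_000 == 0
        now = Time.now
        puts "#{count} stats, last 10k took #{'%.3f' % (now - last)}s"
        last = now
      end
    end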
<enebo>
jsvd: crashed on me after 458 million stats. I am going to make byte[] version now and see if it runs longer or not
<enebo>
jsvd: The actual crash is an access violation so it is trying to hit protected memory somewhere
<jsvd>
enebo: makes sense from the brutal way windows kills it
<enebo>
jsvd: I am not saying this is acceptable but do you have any idea how many stats occur in the most brutal logstash scenario?
<jsvd>
enebo: I can imagine a user watching 200 files and setting a stat interval of 1 second
<jsvd>
So.. about 17 million a day?
<enebo>
so I think I was seeing about 40,000 a second in this scenario but I guess we can just do the math
<enebo>
assuming your reboot is giving you a number consistent with what I am seeing (and that a reboot helped in some way)
<jsvd>
A very aggressive user could see this in a week, that's my guess
<enebo>
My one other observation is I have never tried doing this many stats on windows (or elsewhere) so for all I know a C program would blow up like this
<jsvd>
Very subjective guess..
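The arithmetic behind those guesses, for the record (the 258 million figure is one of the crash counts enebo reports elsewhere in this log):

    stats_per_day = 200 * 86_400            # 200 files stat'd once per second
    # => 17_280_000, jsvd's "about 17 million a day"
    crash_after = 258_000_000               # one observed crash point
    puts crash_after / stats_per_day.to_f   # => ~14.9 days at that pace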
<enebo>
If this next experiment does not yield fruit I might have to write a c program and see
<jsvd>
Well it did not happen with jruby 1.7.19
<enebo>
jsvd: are you sure?
<enebo>
jsvd: do you run a torture test like this normally?
<jsvd>
Actually no, I'm not sure. That was the other problem with the leak
<jsvd>
I'm afk but I can try in 3 or 4 hours
<enebo>
if you are still seeing frequent crashes after reboot then something is pretty broken…if we only see it after 1.5 hours in a tight loop it could possibly have always been this broken
<enebo>
jsvd: it would be a good data point
<jsvd>
I'm thinking of simply doing the same tight loop on 1.7.19
<enebo>
jsvd: I went from using _wstat64 to GetFileAttributesExW but really no doubt stat64 does call that function under the covers
<jsvd>
To test
<enebo>
jsvd: yeah it will be a good data point. It will not rule out whether it is our problem or not but if we still have it blowing up then we learned something at least
<nirvdrum>
enebo: I didn't keep count, but I got a nice "Process finished with exit code -1073741571 (0xC00000FD)"
<enebo>
heh
<enebo>
nirvdrum: ok different than mine
<nirvdrum>
I'm running outside of JRuby, too.
<nirvdrum>
I was just doing stats in a loop.
<enebo>
nirvdrum: oh cool
<nirvdrum>
I'm going to try cutting out the stat code and just do allocateDirect() in a loop.
<enebo>
nirvdrum: so this one answer implies it is normally stack overflow
<enebo>
nirvdrum: but something could be corrupting the stack
<enebo>
funny how many people get these errors from random bad binary updates to apps
<nirvdrum>
"A new guard page for the stack cannot be created."
<enebo>
nirvdrum: more than an hour for me…1:15-1:30
<enebo>
I am at 281 million now using byte[] and not depending on the memory page
<enebo>
so I beat previous run of 258
<nirvdrum>
Interesting. I'm just doing the page allocation & free right now.
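nirvdrum's isolation test is plain Java; a rough JRuby equivalent, assuming the point is simply to hammer direct-buffer allocation and free with no stat() in the loop:

    # allocate and immediately discard direct buffers, mimicking the
    # per-call page allocation done on the native side
    count = 0
    loop do
      java.nio.ByteBuffer.allocateDirect(4096)  # dropped at once, freed on GC
      count += 1
      puts count if count % 1_000_000 == 0
    end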
<enebo>
This has some more fragmentation potential but I wonder how big a deal this really is? It also means more malloc-like calls on windows
<enebo>
in a sense I feel like the memorymap in ffi might have more leaking potential as well
<enebo>
it only takes an occasional hard ref to keep 8k forever
<enebo>
although in the case of stat there is nothing but short-lived native memory
<nirvdrum>
I think there's a potential performance issue, but that's better than crashing.
<nirvdrum>
IIRC, that byte[] needs to be written out to native memory somewhere in FFI to make the call.
<enebo>
nirvdrum: but so does WString
<enebo>
nirvdrum: it is just a POJO with a convert method
<enebo>
nirvdrum: it also has to copy
<enebo>
nirvdrum: it will affect how many allocate memory calls there are though
<nirvdrum>
enebo: Ahh, right. The converter is doing the allocation here.
<enebo>
309 million
<nirvdrum>
So the downside is you'll end up doing N allocations, rather than N / 256.
<enebo>
yeah
<nirvdrum>
How are you counting?
<enebo>
and how expensive is it? Don’t know
<enebo>
I incr a ruby var and print out every 10k
<nirvdrum>
Ahh. I'm still just doing Java.
<enebo>
so I have that tiny delay of executing that
<enebo>
nirvdrum: but you can do it with a long and println too if you want
<enebo>
nirvdrum: in theory you should hit it way faster
<nirvdrum>
Right.
<enebo>
chrisseaton: neat re JS
<nirvdrum>
enebo: It seems to be problematic in the stat() call graph, rather than just allocateDirect. I got 53 MM stat calls before crashing. I had well over 1 B allocateDirect calls before I just stopped it.
<jsvd>
enebo: running tight stat loop on 1.7.19 for an hour now. no crash yet
<enebo>
jsvd: I am up to 408M stat calls using the byte[] call signatures, bypassing the MemoryMap stuff in jnr-ffi
<enebo>
nirvdrum: ^ So not sure but I am waaay past previous run using byte[] instead of WString with native converter
<enebo>
I will stop at 1 billion calls
<enebo>
no one needs to start more than a billion times — Bill Gates
<enebo>
stat
<enebo>
DERP
<jsvd>
don't underestimate logstash users
<jsvd>
:)
<enebo>
heh
<nirvdrum>
enebo: I'm cool with fixing the immediate problem. This just isn't the only code using WString, so it'd be good to get to the bottom of the root issue while it's fresh in everyone's head.
<enebo>
nirvdrum: I agree of course but I have a lot to do in the next 4 days so I might be more amenable to a quick fix with an issue filed
<nirvdrum>
Look at Mr. Ihavetogiveatalk over here.
<enebo>
in truth logstash has been stuck on 1.7.19 for a while so I am not sure it will kill their company to wait a couple of weeks either but it is shitty that people cannot upgrade over this
<enebo>
nirvdrum: I am trying to make inlining work again for the talk also :)
<enebo>
nirvdrum: we made some large decisions which have had unforeseen problems crop up
<nirvdrum>
Ahh, well good luck.
<enebo>
bit rot is a hell of a drug
<jsvd>
to give more info: logstash 1.5.0 (quite old) ships with .19; since then, 1.5.4 and 2.0.0 currently ship with .22
<enebo>
jsvd: ok well that raises my desire to get this out even more
<jsvd>
I think we're relying on some fixes since .19 so we can't release 2.0.1 that downgrades jruby
<jsvd>
BUT I understand that you have your own schedule
<jsvd>
so whenever possible, we can survive :)
<enebo>
jsvd: I think we can probably just put out 1.7.23 as soon as this is nailed
<jsvd>
it will be much appreciated
<jsvd>
thanks for all the work so far, you too nirvdrum :)
<enebo>
I also need 9.0.4 out since jruby -S irb is broken and we get a new issue reported against it about every other day
<nirvdrum>
jsvd: No problem.
<enebo>
jsvd: we need to be known for being stable and the big runs in impls so it is important when we find bugs like this
<enebo>
Arguably it is a busy fall though so we might cut a couple of corners for now
<mkristian_>
enebo, around ?
<enebo>
mkristian_: howdy
<enebo>
mkristian_: ben just left for the day so we won’t get anything more than that dump for today
<mkristian_>
since the gist from bbrowning_away is not enough - all I can guess is that some jar did not get loaded :(
<mkristian_>
oh
<enebo>
mkristian_: at org.jruby.RubyKernel.require(org/jruby/RubyKernel.java:1040) [jruby.jar:]
<enebo>
that stood out as a weird bracketed string on the end
<enebo>
I don’t actually understand how that string is in the backtrace :)
<mkristian_>
ok then I will see if I can run those torquebox integration tests myself. ah - the bracket looks like the artifact name in other places
<enebo>
mkristian_: thanks. I am not sure about the windows thing. I wish we had a test I knew was correct
<mkristian_>
enebo, let me see if I can give you the initial test I used - not sure if this one helps on windows
<enebo>
I am not positive I created the jar properly but you can see that this generates an absolute path
<mkristian_>
enebo, so it is not the expand_path here. could you add -Xdebug.loadService=true to add the extra output to the gist ?!
<enebo>
mkristian_: sure thing
<enebo>
mkristian_: reload
<enebo>
654 million stats!
<mkristian_>
enebo, thanx - it only shows that the specifications/* is not found.
<enebo>
aha
<enebo>
as I said I was not sure I made this right
<enebo>
mkristian_: I will make this properly
<enebo>
I should have turned this on
<mkristian_>
maybe Gem.path is not right
<mkristian_>
p Gem.path; p Gem::Specification.dirs
<enebo>
mkristian_: the jar does not contain specifications and that script basically sets the gem path to the dir
<enebo>
mkristian_: look at a1.rb in that gist
<mkristian_>
but without the specifications directory rubygems can not find any gems.
<enebo>
mkristian_: I think I just made a mistake here…I was trying to stuff the entire gems hierarchy into a .jar and load gems
<enebo>
mkristian_: I just stuffed the gems there
<mkristian_>
you need gems/** and specifications/* inside your jar
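A hedged reconstruction of the setup under discussion; the actual a1.rb lives in the gist and is not reproduced in this log, so the jar layout and gem name below are pieced together from the conversation:

    # a.jar needs both trees for RubyGems to work:
    #   gems/activesupport-<version>/...
    #   specifications/activesupport-<version>.gemspec
    jar_root = "jar:file:#{File.expand_path('./a.jar', File.dirname(__FILE__))}!/"
    ENV['GEM_PATH'] = jar_root
    require 'rubygems'
    Gem.clear_paths                  # force RubyGems to re-read GEM_PATH
    p Gem.path
    p Gem::Specification.dirs        # should include .../a.jar!/specifications
    require 'active_support'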
<enebo>
heh yeah I realized that based on what you said above
<enebo>
mkristian_: error.txt in that gist has been updated with the new error message. looks like perhaps I just need to set more stuff
<enebo>
mkristian_: it is only looking at the install dir's specifications and not at the one in the jar file
<enebo>
mkristian_: OR it is looking for it and does not think it exists so it skips the check
<mkristian_>
Gem::Specification.dirs
<mkristian_>
is the collection of specifications directories. if you add `p Gem::Specification.dirs` before the require then you will see if rubygems found the directory inside the jar
<enebo>
mkristian_: updated
<enebo>
I see it in there
<enebo>
some weird dirs as well
<mkristian_>
this actually looks good - hmm
<mkristian_>
this one is unexpected /C:/opt/specifications
<enebo>
yeah
<enebo>
I suppose if we knew which method Specification uses to check for gem specs, like Dir[], we could see if it works
<enebo>
or perhaps it says no such dir for the first one
<mkristian_>
Gem::Specification.dirs does check File.directory? - so this part is OK. not sure about the directory listing of the same.
<mkristian_>
and the debug.loadService=true does not print directory listings even though it goes almost through the same code
<enebo>
mkristian_: for sanity I confirmed that File.directory? returned true
<mkristian_>
:)
<mkristian_>
and Dir[].entries ?
<mkristian_>
that is what ruby gems does: Dir[File.join(dir, "*.gemspec")]
<enebo>
mkristian_: I will try. I did Dir[file + "/*"] and all the gemspecs returned but File.join is a better test and probably what rubygems does :)
<mkristian_>
well, that should be the same.
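Spelled out, the sanity check the two are trading looks something like this (the jar path matches the c:/opt example enebo gives later in the discussion):

    dir = "jar:file:c:/opt/a.jar!/specifications"
    p File.directory?(dir)               # enebo confirms this returns true
    p Dir[File.join(dir, "*.gemspec")]   # the exact glob RubyGems runs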
<enebo>
mkristian_: yeah same but I realized this has to be finding the gemspec since it is not complaining about no gem
<enebo>
mkristian_: it is just unable to require the file once it thinks it found the spec
<enebo>
mkristian_: so I will turn on loadService again
<mkristian_>
yes, please. you copied the lot of gems into the jar ?!
<jsvd>
enebo: I'm off to bed but I can say that after 3 hours of stats, 1.7.19 didn't crash.
<enebo>
jsvd: 714 million stats and counting with my recent change
<enebo>
mkristian_: I updated the moar.txt in that gist
<enebo>
oh wait
<enebo>
I deleted the gem from the FS but not the actual gemspec
<enebo>
mkristian_: ok so I removed that to make sure it did not get confused
<enebo>
mkristian_: you can see it is not searching the jar at all
<mkristian_>
yes
<enebo>
mkristian_: but the gemspec is only on the jar so I believe it is finding the gemspec in the jar
<enebo>
mkristian_: so we must have one other FS method which is not understanding this absolute path drive letter jar syntax somewhere
<enebo>
mkristian_: Or I am doing something stupid in my env
<mkristian_>
I think so too. but I am also creating such a jar right now to verify it is really a windows-only problem
<enebo>
mkristian_: I believe the original report was complaining that require_relative did not work on windows in a jar and I think active_support uses them
<enebo>
mkristian_: but if that is still an issue it may be totally unrelated
<mkristian_>
enebo, on my mac such a jar works with your a1.rb. if the File.realpath( "jar:file:#{File.expand_path('./a.jar', File.dirname(__FILE__))}!/" ) works then require_relative should work as well
<enebo>
mkristian_: ok but I wonder if something is weird because it is absolute and has an extra : in it
<mkristian_>
rubygems does quite a lot of file operations since it needs to find all the require paths of each gem.
<mkristian_>
do you mean extra c:
<mkristian_>
possible
<enebo>
mkristian_: yeah
<enebo>
mkristian_: I just know even your last patch which fixed issues had a lot of ‘:’ in the regexp
<mkristian_>
yes, and it probably sees the c: as part of the protocol prefix
<mkristian_>
should probably get a windows VM again
<mkristian_>
enebo, when I worked on this patch I realized the file://hostname/path is the actual definition and I am not sure if ruby does honour the hostname in any way. and if this should be the case ?
<enebo>
mkristian_: hmm. Tons of methods return paths on windows as c:/foo
<enebo>
mkristian_: That line in the a1.rb seems like a reasonable line for setting GEM_PATH
<enebo>
mkristian_: but your changes regressed this from someone’s working code so I think people already expect this
<enebo>
mkristian_: is it ambiguous?
<enebo>
mkristian_: or just something you did not anticipate?
<mkristian_>
what was someone's working code ? I never knew the file protocol could reach out to remote servers
<enebo>
mkristian_: remote servers?
<enebo>
mkristian_: I just meant drive letters
<enebo>
mkristian_: e.g. "jar:file:c:/opt/a.jar!/specifications"
<enebo>
mkristian_: I am not talking about //foo/bar
<mkristian_>
ok - I stick to drive letters. is c:/foo failing or not ?
<enebo>
mkristian_: yes it is failing
<enebo>
mkristian_: I only stuffed my gems dir into a jar and I am trying to load activesupport from it
<enebo>
mkristian_: a1.rb in that gist is what I am running against that jar
<mkristian_>
enebo, so probably bin/jruby -I. test/jruby/test_file.rb fails at several places on windows
<enebo>
mkristian_: we need to get appveyor going soon and nirvdrum and I talked about trying to tackle some tasks when we are together at rubyconf next week
* nirvdrum
concurs
<enebo>
mkristian_: perhaps we really need to get appveyor for windows up. I know getting jnr-posix on it was really helpful
<mkristian_>
enebo, yes that will be very helpful with all those path related issues
<enebo>
mkristian_: you have a windows instance?
<mkristian_>
no , otherwise I would try out things myself. I had a VM on my old laptop. but forgot where I got it from