ur5us has joined #jruby
mark_menard has joined #jruby
mark_menard has quit [Ping timeout: 256 seconds]
mark_menard has joined #jruby
mark_menard has quit [Ping timeout: 240 seconds]
ur5us has quit [Ping timeout: 260 seconds]
Antiarc has quit [Quit: ZNC 1.7.5+deb4 - https://znc.in]
Antiarc has joined #jruby
rmannibucau has joined #jruby
<rmannibucau> hello everyone, anything changed with rubygems? seems gems can't be fetched anymore through maven
mark_menard has joined #jruby
mark_menard has quit [Ping timeout: 265 seconds]
<headius[m]> [rmannibucau](https://matrix.to/#/@freenode_rmannibucau:matrix.org): marshal data too short?
<rmannibucau> headius[m], ?
<rmannibucau> answering myself, http://rubygems-proxy.torquebox.org/releases/ can replace rubygems for now to let projects build
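(For context: the torquebox proxy presents rubygems.org as an ordinary Maven repository, so wiring it in is just a repository entry in the POM. A minimal sketch; the ids and the sample gem are placeholders, and resolving the `gem` packaging type additionally assumes one of the JRuby Maven plugins is active:)

```xml
<!-- Fragment of a pom.xml; coordinates below are illustrative -->
<repositories>
  <repository>
    <id>rubygems-releases</id>
    <url>http://rubygems-proxy.torquebox.org/releases/</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <!-- the proxy maps gems under the "rubygems" groupId -->
    <groupId>rubygems</groupId>
    <artifactId>rake</artifactId>
    <version>13.0.1</version>
    <type>gem</type>
  </dependency>
</dependencies>
```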
rmannibucau has quit [Quit: Quitte]
<headius[m]> Is that the error you are seeing?
<headius[m]> That is one known issue that needs a fix. I did not know if anyone else was affected
an_old_dwarf[m] has left #jruby ["Kicked by @appservice-irc:matrix.org : Idle for 30+ days"]
ur5us has joined #jruby
ur5us has quit [Ping timeout: 240 seconds]
nirvdrum has joined #jruby
mark_menard has joined #jruby
mark_menard has quit [Ping timeout: 265 seconds]
mark_menard has joined #jruby
<mark_menard> @headius really weird. I plan on looping back and tracking down the root cause, but at least it runs now.
nirvdrum has quit [Remote host closed the connection]
nirvdrum has joined #jruby
hcatlin has joined #jruby
<hcatlin> Hi Room!
<hcatlin> This is Hampton from the Haml project
<hcatlin> we're having trouble getting our tests to pass on JRuby and I was hoping someone here might have a hint for us
<hcatlin> Specifically, the message "Warning: the --client flag is deprecated and has no effect most JVMs" is getting populated into our test results
mark_menard has quit [Remote host closed the connection]
mark_menard has joined #jruby
mark_menard has quit [Ping timeout: 260 seconds]
subbu is now known as subbu|lunch
<enebo[m]> hcatlin: I do not know why but rvm is specifying that after installation. rvm 1.29.3 is what your CI is using and that version was released at some point in 2018. Setting an explicit export JRUBY_OPTS="" might solve it if it is not simple to update rvm on travis.
<hcatlin> oh wow, yeah I didn't actually notice the RVM version
<hcatlin> good catch.
<enebo[m]> JRUBY_OPTS="--dev" is also a carrot in that it typically speeds up CI runs, as most code is only run once so optimization is less important
<enebo[m]> hcatlin: it is possible I am wrong in what is setting --client as an option but that warning starts on first ruby command after rvm install
<headius[m]> Travis might be setting that environment
<headius[m]> Update rvm for sure but also look through the environment settings above
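(A concrete form of the workaround being discussed, as a .travis.yml sketch; the JRuby version here is just an example:)

```yaml
language: ruby
rvm:
  - jruby-9.2.13.0
env:
  global:
    # Override whatever the old rvm put into JRUBY_OPTS. --dev favors
    # startup time over peak optimization, which suits one-shot CI runs;
    # use JRUBY_OPTS="" to simply clear the deprecated --client flag.
    - JRUBY_OPTS="--dev"
```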
subbu|lunch is now known as subbu
<enebo[m]> headius: it might be travis but I think we would see lots of warnings across ci if so
<headius[m]> Yeah it may not be Travis, but up until recently they were still setting flags like the one enabling C extension support
<enebo[m]> headius: yeah I will not lay money on anything involving travis env :)
<headius[m]> mark_menard: yeah well if you figure it out I'll be keen to know but I'm glad you got past it
<headius[m]> enebo: so this gzip thing I mentioned on friday, it was from a tweet
<headius[m]> I had him send me an example file and both CRuby and JRuby unpack less than the total content of the file, compared to gunzip
ur5us has joined #jruby
<headius[m]> $ gunzip --stdout test.gz | wc
<headius[m]> 155 4805 95218
<headius[m]> that's the correct size
<headius[m]> $ gunzip -l test.gz
<headius[m]> compressed uncompressed ratio uncompressed_name
<headius[m]> 8874 19680 54.9% test
<headius[m]> that's about the size that we and CRuby unpack
<headius[m]> have you ever seen anything like that?
<enebo[m]> HAHA
<enebo[m]> so a 95k file only ends up as 19k on Ruby?
<enebo[m]> err sorry you are showing me two things here
<headius[m]> yeah it seems like we and CRuby (both zlib-based though ours is a Java port) stop reading once they have the full reported size of the file
<headius[m]> those are both just gunzip command lines, but the headers for the gzip file show a completely different total size than is actually there
<headius[m]> I posit that tools are ignoring that header and just unpacking until end of stream
<enebo[m]> So if it is a growing file it stops at whatever the original stat reported size was?
<enebo[m]> Or what is written is actually wrong and it honors it
<headius[m]> this file was reportedly generated by AWS gzipping log files
<enebo[m]> vs pretending that info is correct
<headius[m]> somewhere, somehow
<headius[m]> it seems like a busted header doesn't it?
<enebo[m]> I could say the turd reference could be applied to something else as well
<headius[m]> I mean why would the headers say the file will be 19k but it's actually 95k
<enebo[m]> Interesting to see gunzip not care but then if you cannot trust the written size why write it
<enebo[m]> I have no opinion on who is more wrong here but if gunzip/python/go all ignore it and just read the data then perhaps that is more right?
<headius[m]> it may be
<headius[m]> I have been trying to find more info on this situation
<enebo[m]> I don't know though. If I had a file of unknown origin and I uncompressed the first n bytes and it had m more I would really wonder if I actually am getting real data or not
<enebo[m]> with that said if someone can add m more bytes then they probably can add a new header length
<headius[m]> yeah I am assuming we and CRuby have code that looks at header and unpacks that much data, because it's consistently the same amount
<enebo[m]> So I wonder what gunzip does if it says it has 19k but it only has 5k?
<headius[m]> you do write headers first and then start compressing the data, so if the file size changed during compression perhaps this is what you get
<headius[m]> heh yeah similar situation... would CRuby just blow up because it can't read all 19k?
<enebo[m]> in a pure stream I imagine you cannot go back and write it but in that case why would it write any length?
<enebo[m]> unless it writes it at the end or something
<enebo[m]> but that would be even weirder for it to be wrong
<enebo[m]> I have not looked at the structure of gzip data in 25 years
travis-ci has joined #jruby
travis-ci has left #jruby [#jruby]
<travis-ci> jruby/jruby (no_sourceposition:d72ec39 by Thomas E. Enebo): The build was broken. https://travis-ci.org/jruby/jruby/builds/731073428 [205 min 4 sec]
<headius[m]> I am looking at that now: https://en.wikipedia.org/wiki/Gzip
<headius[m]> it does look like the size goes at the end
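(Per RFC 1952, each gzip member ends with an eight-byte trailer: the CRC-32 of the uncompressed data, then ISIZE, the uncompressed length mod 2^32, both little-endian. A sketch of what `gzip -l` is effectively reading, using the file name from the example above; note the trailer it lands on belongs to a single member, not the whole stream:)

```ruby
# Read the trailer at the end of a gzip file. For a file holding several
# concatenated members, these fields describe only the final member,
# which is why the listed size can disagree with the real total.
bytes = File.binread('test.gz')
crc32, isize = bytes[-8, 8].unpack('VV') # two little-endian uint32s
puts format('stored CRC32: %08x, stored uncompressed size: %d', crc32, isize)
```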
<enebo[m]> I sysadmined in the early 90s and there was this product smarch which used gnu tar and gnu zip to write to a magnetic tape carousel system
<enebo[m]> They wrote a bunch of "oh crap this is corrupt but lets pretend" to the GNU code
<enebo[m]> gunzip itself was a total failure but with gnu tar it was possible to find the next file/dir boundary and just get some of the archive back
<headius[m]> yeah I have seen similar code poking around other libraries
<headius[m]> should be a failure case but if you can cobble together the rest it keeps going
<enebo[m]> gnu tar could not do that recovery itself
<enebo[m]> it is funny I do not think they decided to submit it back upstream
<enebo[m]> but it was a long time ago
ur5us has quit [Quit: Leaving]
ur5us has joined #jruby
travis-ci has joined #jruby
<travis-ci> jruby/jruby (no_sourceposition:a5c8055 by Thomas E. Enebo): The build was fixed. https://travis-ci.org/jruby/jruby/builds/731080190 [209 min 8 sec]
travis-ci has left #jruby [#jruby]
<headius[m]> ship it!
ur5us has quit [Ping timeout: 240 seconds]
ur5us has joined #jruby
_whitelogger has joined #jruby
snickers has joined #jruby
nirvdrum has quit [Ping timeout: 240 seconds]
mark_menard has joined #jruby
mark_menard has quit [Remote host closed the connection]
mark_menard has joined #jruby
<headius[m]> ok I think I have an answer
<headius[m]> this was just fixed this year and adds "zcat" for reading gzip contents that were made from multiple files
<headius[m]> before this there's no way to uncompress past the first file
<headius[m]> so I did a test
<headius[m]> this seems to be the same behavior, size is only reflecting the first file provided
<headius[m]> arguably it's a gzip bug
<headius[m]> this would also explain why this file uncompresses exactly 32 lines plus newline and then quits... that was the end of the first file
<headius[m]> there doesn't seem to be a way to get gzip to show the multiple files used
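(A minimal shell reconstruction of that test; file names and contents are invented, but shaped like the log file described above:)

```
$ seq 1 32   | gzip > part1.gz        # stands in for the first log chunk
$ seq 33 155 | gzip > part2.gz        # and the second
$ cat part1.gz part2.gz > test.gz     # concatenated members are still valid gzip
$ gunzip --stdout test.gz | wc -l     # gunzip walks every member: 155 lines
$ gzip -l test.gz                     # the listed size reflects one member only
```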
<enebo[m]> heh...so what is the turd now
<enebo[m]> is it people concat'ing n gzip files into a single file and expecting it to work because gunzip does
<enebo[m]> I guess I don't even care and it appears zcat will work
<headius[m]> basically gzip allows you to specify multiple files that are logically concatenated into the gzip stream
<headius[m]> uncompressing normally will just produce the cat'ed content of those files
<headius[m]> that behavior is not directly supported at the zlib level until the addition of zcat... so presumably gunzip and friends just did the logic at the tool level up until then
<headius[m]> but GzipReader in Ruby does not do it
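(For Ruby code stuck on the old behavior, a workaround sketch using the long-standing GzipReader API: iterate the members yourself, rewinding by #unused after each one. The file name and helper name here are illustrative:)

```ruby
require 'zlib'

# Decompress every member of a (possibly multi-member) gzip file,
# not just the first one as a bare GzipReader#read would.
def gunzip_all(path)
  File.open(path, 'rb') do |io|
    data = +''
    until io.eof?
      gz = Zlib::GzipReader.new(io)
      data << gz.read
      unused = gz.unused # bytes buffered past the end of this member
      gz.finish          # finish this member without closing io
      io.pos -= unused.bytesize if unused
    end
    data
  end
end

# On a zlib recent enough to carry the fix mentioned above, this whole
# loop collapses to: Zlib::GzipReader.zcat(File.open('test.gz', 'rb'))
```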
<byteit101[m]1> headius: Let me know if there's a better spot to discuss concrete constructors in than https://github.com/jruby/jruby/issues/449 (ooh, 3 digit issue!)
<headius[m]> Hah yeah knock down those old bugs
<byteit101[m]1> I filed that issue, and then promptly copied the javafx launcher monstrosity into ruby :-D
sagax has quit [Read error: Connection reset by peer]
mark_menard has quit [Remote host closed the connection]
ur5us has quit [Ping timeout: 240 seconds]
snickers has quit [Ping timeout: 272 seconds]
ur5us has joined #jruby