havenwood changed the topic of #ruby to: Rules: https://ruby-community.com | Ruby 2.7.2, 2.6.6, 3.0.0-preview1: https://www.ruby-lang.org | Paste 4+ lines of text to https://dpaste.de/ | Rails questions? Ask in #RubyOnRails | Books: https://goo.gl/wpGhoQ | Logs: https://irclog.whitequark.org/ruby | Can't talk? Register/identify with Nickserv first! | BLM <3
ap4y has joined #ruby
ur5us has quit [Ping timeout: 260 seconds]
znz_jp has quit [Remote host closed the connection]
znz_jp has joined #ruby
bmurt has joined #ruby
ur5us has joined #ruby
kristian_on_linu has quit [Remote host closed the connection]
dcunit3d has quit [Ping timeout: 260 seconds]
bambanx has quit [Quit: Leaving]
ap4y has quit [Quit: WeeChat 2.9]
dcunit3d has joined #ruby
ap4y has joined #ruby
m_antis has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
queip has quit [Read error: Connection reset by peer]
queip has joined #ruby
TCZ has quit [Quit: Leaving]
queip has quit [Excess Flood]
queip has joined #ruby
ChmEarl has quit [Quit: Leaving]
queip has quit [Quit: bye, freenode]
queip has joined #ruby
queip has quit [Ping timeout: 265 seconds]
bkuhlmann has joined #ruby
queip has joined #ruby
bralee has joined #ruby
greypack has quit [Ping timeout: 256 seconds]
howdoi has quit [Quit: Connection closed for inactivity]
bmurt has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
queip has quit [Ping timeout: 240 seconds]
greypack has joined #ruby
lucasb has quit [Quit: Connection closed for inactivity]
queip has joined #ruby
orbyt_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Rudd0 has quit [Ping timeout: 240 seconds]
bkuhlmann has quit []
justHaunted has quit [Ping timeout: 260 seconds]
queip has quit [Ping timeout: 256 seconds]
maryo has joined #ruby
justHaunted has joined #ruby
justHaunted has left #ruby [#ruby]
Rudd0 has joined #ruby
dcunit3d has quit [Ping timeout: 272 seconds]
chouhoulis has quit [Remote host closed the connection]
ur5us has quit [Ping timeout: 260 seconds]
maryo has quit [Remote host closed the connection]
maryo has joined #ruby
adu has joined #ruby
maryo has quit [Remote host closed the connection]
maryo has joined #ruby
maryo has quit [Remote host closed the connection]
bocaneri has joined #ruby
cthulchu has quit [Ping timeout: 260 seconds]
DTZUZU has quit [Read error: Connection reset by peer]
Liothen has quit [Ping timeout: 260 seconds]
DoverMo has joined #ruby
<DoverMo> how do you run curses on dev build
<DoverMo> do i need to downgrade to 2.6
Liothen has joined #ruby
Technodrome has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
ap4y has quit [Ping timeout: 272 seconds]
Liothen has quit [Read error: Connection reset by peer]
Liothen has joined #ruby
<DoverMo> looks like i need 2.4 for backwards compat
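For anyone hitting the same wall: curses left the Ruby standard library back in 2.1 and has shipped as a standalone gem since, so on a 3.0 dev build the usual fix is installing the gem rather than downgrading. A hedged sketch; gem availability against preview builds may vary:

```shell
# curses is a standalone gem since Ruby 2.1; no downgrade needed in most cases
gem install curses

# or, in a Bundler-managed project, add it to the Gemfile:
echo "gem 'curses'" >> Gemfile
bundle install
```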
Liothen has quit [Read error: Connection reset by peer]
Liothen has joined #ruby
cgfbee has quit [Ping timeout: 264 seconds]
adu has quit [Quit: adu]
cd has quit [Quit: cd]
blender has quit [Read error: Connection reset by peer]
ap4y has joined #ruby
blender has joined #ruby
ap4y has quit [Client Quit]
burgestrand has joined #ruby
vondruch has quit [Quit: vondruch]
vondruch has joined #ruby
hiroaki has quit [Ping timeout: 260 seconds]
vondruch has quit [Client Quit]
vondruch has joined #ruby
helpa has quit [Remote host closed the connection]
helpa has joined #ruby
DTZUZU has joined #ruby
baweaver has quit [Ping timeout: 272 seconds]
ruurd has quit [Quit: ZZZzzz…]
kristian_on_linu has joined #ruby
ruurd has joined #ruby
kenso has joined #ruby
ur5us has joined #ruby
DoverMo has quit [Ping timeout: 260 seconds]
yxhuvud has quit [Remote host closed the connection]
fercell has joined #ruby
cnsvc has quit [Ping timeout: 240 seconds]
ur5us has quit [Ping timeout: 260 seconds]
ruurd has quit [Read error: Connection reset by peer]
ur5us has joined #ruby
ruurd has joined #ruby
ur5us has quit [Ping timeout: 260 seconds]
imode has quit [Ping timeout: 260 seconds]
Akem has quit [Quit: Leaving]
yxhuvud has joined #ruby
vondruch has quit [Ping timeout: 256 seconds]
stdedos has joined #ruby
dionysus69 has joined #ruby
queip has joined #ruby
TCZ has joined #ruby
Technodrome has joined #ruby
stdedos has quit [Ping timeout: 260 seconds]
fercell has quit [Ping timeout: 256 seconds]
vondruch has joined #ruby
ruurd has quit [Quit: ZZZzzz…]
stdedos has joined #ruby
ruurd has joined #ruby
stdedos has quit [Quit: Connection closed]
bralee has quit [Ping timeout: 264 seconds]
havenwood has quit [Ping timeout: 260 seconds]
havenwood has joined #ruby
burgestrand has quit [Quit: burgestrand]
stryek has joined #ruby
Rudd0 has quit [Ping timeout: 240 seconds]
Sina has joined #ruby
Akem has joined #ruby
kenso has quit [Quit: Connection closed for inactivity]
stoffus has joined #ruby
burgestrand has joined #ruby
TCZ has quit [Quit: Leaving]
bmurt has joined #ruby
zapata has quit [Quit: WeeChat 2.9]
BSaboia has quit [Quit: This computer has gone to sleep]
TCZ has joined #ruby
weaksauce has quit [Ping timeout: 260 seconds]
SeepingN has joined #ruby
havenwood changed the topic of #ruby to: Rules: https://ruby-community.com | Ruby 2.7.2, 2.6.6, 3.0.0-preview1: https://www.ruby-lang.org | Paste 4+ lines of text to https://dpaste.de/ | Rails questions? Ask in #RubyOnRails | Books: https://goo.gl/wpGhoQ | Logs: https://irclog.whitequark.org/ruby | Can't talk? Register/identify with NickServ | BLM <3
gray_-_wolf has joined #ruby
havenwood changed the topic of #ruby to: Rules: https://ruby-community.com | Ruby 2.7.2, 2.6.6, 3.0.0-preview1: https://www.ruby-lang.org | Paste 4+ lines of text to https://dpaste.org | Books: https://goo.gl/wpGhoQ | Logs: https://irclog.whitequark.org/ruby | BLM <3
kneefraud has joined #ruby
howdoi has joined #ruby
fercell has joined #ruby
bamdad has joined #ruby
rubydoc has quit [Remote host closed the connection]
rubydoc has joined #ruby
kristian_on_linu has quit [Remote host closed the connection]
rubydoc has quit [Remote host closed the connection]
rubydoc has joined #ruby
BSaboia has joined #ruby
ChmEarl has joined #ruby
Akem has quit [Ping timeout: 240 seconds]
fercell has quit [Ping timeout: 258 seconds]
Akem has joined #ruby
BSaboia has quit [Quit: This computer has gone to sleep]
BSaboia has joined #ruby
teardown has joined #ruby
teardown has quit [Client Quit]
BSaboia has quit [Quit: This computer has gone to sleep]
teardown has joined #ruby
fercell has joined #ruby
BSaboia has joined #ruby
Frankenlime has quit [Quit: quit]
also_uplime has joined #ruby
imode has joined #ruby
cthulchu has joined #ruby
Rudd0 has joined #ruby
BSaboia has quit [Quit: This computer has gone to sleep]
TCZ has quit [Quit: Leaving]
BSaboia has joined #ruby
teardown has left #ruby [#ruby]
teardown has joined #ruby
cd has joined #ruby
danielk43[m] has joined #ruby
maryo has joined #ruby
teardown has quit [Quit: leaving]
fercell has quit [Quit: WeeChat 2.9]
BSaboia has quit [Quit: This computer has gone to sleep]
BSaboia has joined #ruby
BSaboia has quit [Client Quit]
cnsvc has joined #ruby
regedit has joined #ruby
bmurt has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
imode has quit [Quit: WeeChat 2.9]
Emmanuel_Chanel has quit [Ping timeout: 265 seconds]
BSaboia has joined #ruby
banisterfiend has joined #ruby
maryo has quit [Quit: Leaving]
BSaboia has quit [Read error: Connection reset by peer]
vondruch has quit [Ping timeout: 258 seconds]
Emmanuel_Chanel has joined #ruby
bmurt has joined #ruby
burgestrand has quit [Quit: burgestrand]
Guest7181 has joined #ruby
Emmanuel_Chanel has quit [Read error: No route to host]
Emmanuel_Chanel has joined #ruby
ellcs has joined #ruby
phaul has joined #ruby
jwr has joined #ruby
<jwr> Can anybody tell me why bundler is complaining about a lack of credentials for contribsys when those credentials have seemingly already been set up? https://pastebin.com/raw/fwieycfS
<ruby[bot]> jwr: as I told you already, please use https://gist.github.com
<adam12> jwr: Can you paste the relevant part of the Gemfile?
bocaneri has quit [Read error: Connection reset by peer]
<jwr> adam12: relevant part of the Gemfile: https://pastebin.com/raw/SESSR6BW
<ruby[bot]> jwr: as I told you already, please use https://gist.github.com
noizex has joined #ruby
bmurt has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
imode has joined #ruby
moeSizlak has joined #ruby
<moeSizlak> sequel is using N'string' quotes, and it's noticeably slower than just 'string'
<moeSizlak> how can i fix that
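No one answered in-channel, but for the record: with Sequel's Microsoft SQL Server adapters the N'' prefix comes from unicode string literals being on by default, and the adapter exposes a switch for it. A configuration sketch only (the connection URL is hypothetical, and this needs a live MSSQL database to actually run):

```ruby
require "sequel"

# Hypothetical connection string. mssql_unicode_strings is Sequel's
# documented accessor on its MSSQL adapters; treat the rest as a sketch.
DB = Sequel.connect("tinytds://user:pass@host/mydb")

# Literalize strings as 'x' instead of N'x':
DB.mssql_unicode_strings = false
```

Turning this off is only safe if the columns involved don't need non-ASCII data, which is presumably why unicode strings are the default.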
<jwr> oh, i see, i had a `.bundle/config` in the root of my repo, which is also where i ran my `bundle install`, but since I was using `--gemfile=gemfiles/Gemfile`, I also needed a config at `gemfiles/.bundle/config`. got it.
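jwr's fix generalizes: Bundler looks for `.bundle/config` relative to the directory of the Gemfile it is run against, so per-gemfile credentials need a config next to each Gemfile (or an environment variable, which applies everywhere). A sketch with placeholder credentials; the `gems.contribsys.com` key matches Sidekiq Pro's scheme, but verify against your own setup:

```shell
# Option 1: environment variable, applies regardless of Gemfile location
export BUNDLE_GEMS__CONTRIBSYS__COM=username:password

# Option 2: write the credential into the config next to the Gemfile in use
cd gemfiles
bundle config set --local gems.contribsys.com username:password
bundle install --gemfile=Gemfile
```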
weaksauce has joined #ruby
maryo has joined #ruby
burgestrand has joined #ruby
cgfbee has joined #ruby
kneefraud has quit [Remote host closed the connection]
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
dionysus69 has quit [Ping timeout: 272 seconds]
BSaboia has joined #ruby
bvdw has quit [Remote host closed the connection]
bvdw has joined #ruby
bvdw has quit [Remote host closed the connection]
rippa has joined #ruby
mwlang has joined #ruby
<mwlang> I am working on releasing a new gem and have a general question about logging. Is it more common to provide a default Logger to STDOUT or to /dev/null (silent logging) with the expectation that the user must override to configure something more useful? Or another way to view this question: Which is your own preference and why?
maryo87 has joined #ruby
<havenwood> mwlang: Yeah, default Logger to STDOUT with a configurable option to swap in your own logger. SystemD assumes logging to STDOUT, as do many other modern tools, so that makes sense as a default. If logging is a real consideration, making it configurable can be worth it so more folks can integrate their own logging.
maryo has quit [Ping timeout: 264 seconds]
bmurt has joined #ruby
<mwlang> havenwood: thanks for that feedback. I’m definitely making it configurable and it’s expected the user will simply supply their own instantiated, self-configured Logger vs. me implementing all those things within the gem itself.
moeSizlak has left #ruby ["Leaving"]
<clemens3> if it is a gem, why would anybody but you want to see the log output?
ruurd has quit [Read error: Connection reset by peer]
ruurd has joined #ruby
teardown has joined #ruby
teardown has quit [Client Quit]
rippa has quit [Read error: Connection reset by peer]
rippa has joined #ruby
<havenwood> clemens3: A gem is just a library, so I imagine this is one that has a logging aspect. If a gem does something you want to keep a written record of, you log it.
teardown has joined #ruby
<adam12> I'd prefer a null logger with the option to configure one (since I'd probably pass in my global application logger).
<adam12> Unless it's a cli tool, which probably should configure a logger out of the box.
<mwlang> For what it’s worth, it’s an API wrapper library. The logging is primarily for debugging purposes. It’s expected (at least to me) that it’s useful to see log output during development to STDOUT while you’re composing your API calls with simple ruby scripts, and then adding your own logger at the point you’re ready to build a real solution suitable for deployment.
teardown has quit [Quit: leaving]
banisterfiend has quit [Ping timeout: 246 seconds]
banisterfiend has joined #ruby
banisterfiend has quit [Ping timeout: 260 seconds]
davispuh has joined #ruby
<adam12> mwlang: In that case, maybe a default logger to stdout but with the level set to warn. Then allow someone to crank it down to debug.
<adam12> In reality I'm not sure it matters much. I'd probably pass in my own logger anyways.
<mwlang> that’s exactly what I had in mind. :-)
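The consensus above ($stdout logger at WARN by default, fully swappable, crankable down to DEBUG) fits in a few lines of stdlib. The module name is hypothetical; the pattern is the point:

```ruby
require "logger"

# Hypothetical gem namespace sketching the discussed default: log to
# $stdout at WARN, and let callers reconfigure or replace the logger.
module MyApiWrapper
  class << self
    attr_writer :logger

    def logger
      @logger ||= Logger.new($stdout, level: Logger::WARN)
    end
  end
end

MyApiWrapper.logger.debug("hidden by default")  # below WARN, suppressed
MyApiWrapper.logger.warn("shown by default")

# Crank it down to debug during development...
MyApiWrapper.logger.level = Logger::DEBUG

# ...or inject a fully configured logger of your own for deployment:
MyApiWrapper.logger = Logger.new(File::NULL)
```

Because the default is lazily memoized, a caller who never touches it pays nothing beyond one Logger allocation on first use.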
roshanavand has joined #ruby
burgestrand has quit [Quit: burgestrand]
banisterfiend has joined #ruby
banisterfiend has quit [Remote host closed the connection]
banisterfiend has joined #ruby
teardown has joined #ruby
phaul has quit [Ping timeout: 240 seconds]
phaul has joined #ruby
burgestrand has joined #ruby
s2013 has joined #ruby
banisterfiend has quit [Quit: banisterfiend]
mwlang has quit [Quit: mwlang]
rippa has quit [Read error: Connection reset by peer]
rippa has joined #ruby
bmurt has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
teardown has quit [Quit: leaving]
BSaboia has quit [Quit: This computer has gone to sleep]
ur5us has joined #ruby
hiroaki has joined #ruby
BSaboia has joined #ruby
lucasb has joined #ruby
maryo87 has quit [Quit: Leaving]
JayDoubleu has quit [*.net *.split]
Mutsuhito has quit [*.net *.split]
arekushi has quit [*.net *.split]
podman has quit [*.net *.split]
podman has joined #ruby
JayDoubleu has joined #ruby
Mutsuhito has joined #ruby
rippa has quit [Read error: Connection reset by peer]
rippa has joined #ruby
teardown has joined #ruby
rippa has quit [Read error: Connection reset by peer]
teardown_ has joined #ruby
teardown_ has quit [Client Quit]
rippa has joined #ruby
teardown has quit [Quit: leaving]
teardown has joined #ruby
teardown has quit [Client Quit]
teardown has joined #ruby
teardown has quit [Client Quit]
teardown has joined #ruby
rippa has quit [Quit: {#`%${%&`+'${`%&NO CARRIER]
clinth has quit []
clinth has joined #ruby
noizex has quit [Remote host closed the connection]
noizex has joined #ruby
teardown has quit [Quit: leaving]
m_antis has joined #ruby
m_antis has quit [Client Quit]
teardown has joined #ruby
teardown has quit [Client Quit]
s2013 has quit [Ping timeout: 264 seconds]
teardown has joined #ruby
sarmiena_ has joined #ruby
<sarmiena_> I'm a pretty big newb when it comes to compression... but I'd like to use it for large text and store it in elasticsearch. However, i notice when i use ActiveSupport::Gzip.compress('compress me!').bytesize it's actually larger than the original text
<sarmiena_> how should i go about thinking about this?
chouhoulis has joined #ruby
<leftylink> indeed, it is a common thing we see with compression algorithms, where compressing a small thing will not compress it at all but instead make it larger
<leftylink> to find the smallest size at which compression can be achieved, we might write some code that looks like this
<leftylink> &>> require 'zlib'; (1..).lazy.map { |sz| [sz, Zlib::Deflate.deflate(?a * sz).size] }.
<leftylink> .... come on, finish the line correctly
<rubydoc> stderr: playpen: timeout triggered! (https://carc.in/#/r/9w3x)
<havenwood> sarmiena_: The bytesize for your example should be smaller, not larger.
<leftylink> &>> require 'zlib'; (1..).lazy.map { |sz| [sz, Zlib::Deflate.deflate(?a * sz).size] }.take_while { |orig, comp| comp >= orig }.to_a
<havenwood> sarmiena_: 'compress me!'.bytesize #=> 12
<rubydoc> # => [[1, 9], [2, 10], [3, 11], [4, 12], [5, 11], [6, 11], [7, 11], [8, 11], [9, 11], [10, 11], [11, 11]] (https://carc.in/#/r/9w3y)
<havenwood> oh
<havenwood> sarmiena_: never mind
<sarmiena_> hah
<leftylink> so now we see that for all sizes <= 11, the compressed size of that many a will be at least as large as the original
teardown has quit [Ping timeout: 240 seconds]
hiroaki has quit [Ping timeout: 272 seconds]
<leftylink> we can also try some degenerate cases where the input is incompressible but that should be obvious anyway... but we can try it regardless of the obviousness
<leftylink> &>> require 'zlib'; (1..1000).map { |sz| [sz, Zlib::Deflate.deflate((?a..).take(sz).join).size] }.select { |orig, comp| comp < orig }
<rubydoc> # => [] (https://carc.in/#/r/9w3z)
<leftylink> we can do it regardless of the obviousness just to make sure we have not accidentally messed something up
<havenwood> sarmiena_: If you really have strings that short you'd like to compress, you might look at a compression algo meant for small strings like Shoco or Smaz.
<havenwood> sarmiena_: This gem, for example. https://rubygems.org/gems/rsmaz
<sarmiena_> leftylink i see...
<sarmiena_> havenwood i'm compressing emails, which may or may not be small
<sarmiena_> also the emails might have embedded images in them, so not sure how that's going to play out either
DTZUZU has quit [Read error: Connection reset by peer]
<havenwood> sarmiena_: I'd guess emails are roughly 500 bytes of text, on average.
DTZUZU has joined #ruby
<havenwood> sarmiena_: I'm seeing about a 50% compression with gzip and 90 random words.
<sarmiena_> ok that's good
<havenwood> words.bytesize #=> 600
<havenwood> gzip.bytesize #=> 306
<sarmiena_> running this on my db instance select id, length(email_content) from sent_mails where email_content is not null order by length(email_content) desc limit 500;
<sarmiena_> 40MM records worth 250GB
<sarmiena_> gonna take a while hah
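leftylink's demonstration can be reproduced offline with nothing but the stdlib: deflate carries fixed header and checksum overhead, so tiny inputs grow, while longer or repetitive text compresses well. Exact byte counts depend on zlib's defaults:

```ruby
require "zlib"

short = "compress me!"
long  = "compress me! " * 100

short_z = Zlib::Deflate.deflate(short)
long_z  = Zlib::Deflate.deflate(long)

puts "#{short.bytesize} B -> #{short_z.bytesize} B"  # larger: overhead wins
puts "#{long.bytesize} B -> #{long_z.bytesize} B"    # much smaller

# And it round-trips losslessly:
raise unless Zlib::Inflate.inflate(long_z) == long
```

This is why per-document compression of short fields can backfire while compressing whole emails pays off.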
stoffus_ has joined #ruby
<havenwood> sarmiena_: How about using gzip at the NGINX or equivalent layer and having Elastic Search use best_compression for index.codec?
<havenwood> I guess that deflates/inflates twice, but ¯\_(ツ)_/¯
<havenwood> Seems nice to let Elastic Search handle its own compression.
<sarmiena_> havenwood my man. haha i think we're on the same brainwave. so i already did it on ES with index: false on the email_content. and i'm seeing in my dev environment it went from 1.5MB to 3ishMB for 243 records
<sarmiena_> so i just got a little concerned
<sarmiena_> and my next venture was to do the NGINX thing and store it to disk and allow NGINX be the gatekeeper
burgestrand has quit [Quit: burgestrand]
stoffus has quit [Ping timeout: 260 seconds]
<sarmiena_> problem is that storing it in the DB is horrible because of the page size limit per row, and PG does this toasting stuff as well, which creates problems for IO
<havenwood> sarmiena_: Can you just use Accept-Encoding header? Then let Elastic Search deflate?
<sarmiena_> havenwood hmm not sure i follow. i read that ES automatically deflates the data without having to do anything else?
<sarmiena_> i
<havenwood> sarmiena_: It compresses with LZ4, but `best_compression` using deflate isn't standard.
<havenwood> sarmiena_: "The default value compresses stored data with LZ4 compression, but this can be set to best_compression which uses DEFLATE for a higher compression ratio, at the expense of slower stored fields performance. If you are updating the compression type, the new one will be applied after segments are merged."
<sarmiena_> ah hah!
<havenwood> sarmiena_: I'm not quite sure I follow your case, but seemed to me like HTTP Accept-Encoding: deflate would handle over the wire and Elastic Search best_compression would handle on disk.
niceperl has joined #ruby
ellcs has quit [Ping timeout: 260 seconds]
<sarmiena_> for the accept-encoding, is that assuming i deflate at the ruby level, then post to ES with that encoding? then ES would expand it, then deflate it again?
<havenwood> sarmiena_: You can do deflation via Accept-Encoding at the Ruby level (Rack middleware, usually) but more often handle it at NGINX layer.
<havenwood> sarmiena_: Yeah, that'd be asking for a deflated version via HTTP, inflating it, and letting Elastic Search deflate it again.
<havenwood> sarmiena_: You could use gzip over the wire and deflate on disk or deflate for both.
<sarmiena_> i don't care too much about the traffic, tbh. it's all private network and i'm making requests directly to ES from the web node (which isn't exposed)
<sarmiena_> just want it deflated on the disk so it doesn't use so much space
<sarmiena_> and out of the PG instance
<havenwood> yeah, then i'd try using elastic search configuration to have it deflate it itself
<sarmiena_> right. makes sense
<havenwood> sarmiena_: often a good idea to just have NGINX: gzip on;
<sarmiena_> gotcha
<sarmiena_> gonna see about allowing best_compression
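The setting under discussion is applied per index. A hedged curl sketch with a hypothetical index name, combining the `best_compression` codec with the `index: false` mapping sarmiena_ mentioned (`index.codec` must be set at index creation time or on a closed index):

```shell
curl -X PUT 'localhost:9200/sent_mails' \
  -H 'Content-Type: application/json' \
  -d '{
    "settings": { "index": { "codec": "best_compression" } },
    "mappings": {
      "properties": {
        "email_content": { "type": "text", "index": false }
      }
    }
  }'
```

Per the docs quoted above, an existing index only picks up the new codec as segments are merged.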
RickHull has joined #ruby
<RickHull> what's a good way to iterate over an array that consumes the array? just use #each and then reassign to empty array?
<RickHull> for example, doing a one-time operation that returns the most common element
teardown has joined #ruby
<RickHull> also, maybe thanks to MS, the github gist link goes to /discover, which doesn't make it clear or obvious how to post a gist
<RickHull> I'm struggling to find the place to actually post a gist
<RickHull> it might be time to rethink paste hosts
<RickHull> oh, now it says dpaste.org -- maybe I had some cached version :D
<RickHull> https://dpaste.org/SCSX -- is this still close to optimal?
TCZ has joined #ruby
<RickHull> oh, it's ChanServ still saying to paste to gist.github.com -- let me reiterate my disdain for the new gist interface
niceperl has quit [Quit: Leaving...]
ur5us has quit [Ping timeout: 260 seconds]
<havenwood> RickHull: Do you want to modify the receiver then?
<havenwood> RickHull: I guess you could: elements.reject { |element| do_things(element); true }
ur5us has joined #ruby
<havenwood> RickHull: I mean: elements.reject! { |element| do_things(element); true }
<havenwood> RickHull: I mean: elements.select! { |element| do_things(element); false }
<havenwood> RickHull: Or #each and call #clear at the end.
<havenwood> &>> a = [1, 2, 3]; a.each { }.clear; a
<RickHull> havenwood: ok, thanks, this is interesting
<rubydoc> # => [] (https://carc.in/#/r/9w44)
bmurt has joined #ruby
<RickHull> my thinking is: go through the array one at a time and get rid of concerns as you go
<RickHull> summarize the array and get rid of it
<havenwood> &>> a = [1, 2, 3]; a.reject! { |n| break if n > 2; true }; a
<rubydoc> # => [3] (https://carc.in/#/r/9w45)
<havenwood> RickHull: Yeah, #reject! or #select! would remove as it iterates. A #clear at the end seems a bit cleaner but it's all at once.
<havenwood> &>> a = [1, 2, 3]; a.keep_if { |n| break if n > 2; false }; a
<rubydoc> # => [3] (https://carc.in/#/r/9w46)
<havenwood> &>> a = [1, 2, 3]; a.keep_if { false }; a
<rubydoc> # => [] (https://carc.in/#/r/9w47)
<RickHull> lots of new subtlety; I'm trying to get used to the new best practices
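Summing up the options havenwood showed, all stdlib (case 2 reproduces the bot-verified example above):

```ruby
# 1. Iterate, summarize, then clear: simplest when the whole array
#    can be discarded at once, e.g. finding the most common element.
counts = Hash.new(0)
a = %w[x y x z x]
a.each { |el| counts[el] += 1 }.clear
most_common = counts.max_by { |_, n| n }.first
p most_common  # "x"
p a            # []

# 2. reject! with an always-true block removes elements as it iterates,
#    so breaking out mid-way leaves the unvisited tail intact:
b = [1, 2, 3]
b.reject! { |n| break if n > 2; true }
p b  # [3]
```

For the "summarize and discard" case, option 1 is the cleaner choice; option 2 only earns its subtlety when iteration may stop early and the remainder must survive.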
bmurt has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
orbyt_ has joined #ruby