ChanServ changed the topic of #crystal-lang to: The Crystal programming language | | Crystal 0.35.0 | Fund Crystal's development: | GH: | Docs: | Gitter:
deavmi has quit [Ping timeout: 258 seconds]
deavmi has joined #crystal-lang
<FromGitter> <watzon> Idk if anyone will find this useful
postmodern has joined #crystal-lang
<postmodern> what is the crystal equivalent of the $0 global variable?
<FromGitter> <Blacksmoke16> i.e. whatever file is running?
<FromGitter> <Blacksmoke16> *FILE* references the full path to the currently executing crystal file.
<FromGitter> <Blacksmoke16> > *FILE* references the full path to the currently executing crystal file.
<FromGitter> <Blacksmoke16> `__FILE__`
<postmodern> Blacksmoke16, looking for the executable program name, not the source file name
<FromGitter> <watzon> I thought ARGC existed, but I guess not
<postmodern> looks like $0 was removed. Guessing there's some kind of constant or way of getting at the argv
<FromGitter> <Blacksmoke16> `PROGRAM_NAME` constant
<postmodern> bingo thanks
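For reference, a minimal sketch of the constants mentioned above (the file name `demo.cr` is hypothetical):

```crystal
# demo.cr — hypothetical file name.
# __FILE__ is the compile-time path of this source file;
# PROGRAM_NAME is the name the executable was invoked with,
# roughly Ruby's $0. ARGV holds the remaining arguments.
puts __FILE__
puts PROGRAM_NAME
puts ARGV.inspect
```

Compiled with `crystal build demo.cr` and run as `./demo a b`, this would print the source path, `./demo`, and `["a", "b"]`.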
<FromGitter> <Blacksmoke16> any ideas on how to handle this case
<FromGitter> <Blacksmoke16> ```code paste, see link``` []
<FromGitter> <Blacksmoke16> since the serializer ivar is optional, it's possible that the else block would be called on the data
<FromGitter> <Blacksmoke16> which doesn't include `JSON::Serializable`, so it doesn't compile
<FromGitter> <Blacksmoke16> `responds_to?` doesn't work since everything has that method
<FromGitter> <Blacksmoke16> and it doesn't allow for checking for overloads yet
<FromGitter> <dscottboggs_gitlab> ☝️ June 18, 2020 6:42 PM ( ⏎ ⏎ You think that's bad? Take a look at this (! I've been trying to translate it into Crystal and it's not easy haha
<FromGitter> <dscottboggs_gitlab> @ImAHopelessDev_gitlab
<FromGitter> <Blacksmoke16> hm, id imagine id have to do something at compile time
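One compile-time approach, sketched under assumptions since the pasted code isn't visible here: branch on the type with a macro `if` inside a generic method, so the non-serializable path is never compiled for types that include `JSON::Serializable`. The method name `serialize` and the `Point` type are made up for illustration.

```crystal
require "json"

# Hypothetical method: the branch is chosen at compile time, per instantiation,
# so the to_json call only exists for types that actually include the module.
def serialize(obj : T) : String forall T
  {% if T <= JSON::Serializable %}
    obj.to_json
  {% else %}
    obj.to_s
  {% end %}
end

struct Point
  include JSON::Serializable
  getter x : Int32

  def initialize(@x : Int32)
  end
end

puts serialize(Point.new(1)) # JSON branch
puts serialize(42)           # to_s fallback branch
```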
<FromGitter> <dscottboggs_gitlab> I thought it would be full of macros but turns out Generics took care of most of it.
<postmodern> does crystal have an equivalent of rack or wsgi for a common http server middleware API?
oddp has quit [Ping timeout: 258 seconds]
<FromGitter> <thelinuxlich> hey guys, help me believe in Crystal's potential: I'm converting a heavy scraper implemented in Node.js, but I found Crystal consumes the same amount of resources and is much slower... I tried optimizing everything according to the official docs, but now I feel like I'm fighting the language to get the holy grail... if you have 5 minutes to spare, please take a look at my code:
<FromGitter> <watzon> @thelinuxlich the first advice I can give is not to use Halite if you want speed
<FromGitter> <watzon> I was using it initially for my Telegram bot library. As soon as I refactored it out I saw a major speed boost.
<FromGitter> <watzon> Another issue I can see is that you're spawning a new client every time a new worker is created. Actually I see you calling `Halite.get` 3 times in each loop, which is going to create a new `Halite::Client` instance each time. It's much better to use something like to create a connection pool. Then you can check out clients and check them back in when you're done with them.
<FromGitter> <ImAHopelessDev_gitlab> @dscottboggs_gitlab wow yeah
<FromGitter> <watzon> Lastly (for now anyway), you should take any regular expressions/db queries and separate them out into constants so you're not reinitializing them every time. Probably won't make the biggest difference, but it will definitely make some difference.
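The regex-as-constant advice, sketched (the pattern and names here are made up for illustration):

```crystal
# Compiled once at program start, not once per call or loop iteration.
HREF_RE = /href="([^"]+)"/

def extract_links(body : String) : Array(String)
  body.scan(HREF_RE).map { |match| match[1] }
end

puts extract_links(%(<a href="/a">x</a> <a href="/b">y</a>)) # ["/a", "/b"]
```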
<FromGitter> <watzon> Oh also `Logger` is deprecated
<FromGitter> <watzon> 2 other things actually 😅. Why are you sleeping for 10 seconds here ( And why initialize your channel ( with a capacity of 100000?
<FromGitter> <ImAHopelessDev_gitlab> that's a lot of damage, wait, channels
<FromGitter> <didactic-drunk> @thelinuxlich I think most of your time is in the HTTP requests or possibly pg. How does changing WORKERS affect runtime? What does your profiler say?
<FromGitter> <didactic-drunk> Also possibly redis or rabbit if they're not local.
<FromGitter> <watzon> Yeah my guess is probably the HTTP requests. Halite is convenient, but it's slow. And they're creating 3 new client instances per loop.
<FromGitter> <thelinuxlich> @watzon I thought buffering the response would help, but it doesn't; the 10 seconds are for waiting for the rabbitMQ container to come up
<FromGitter> <thelinuxlich> @watzon thanks for the advice, I will replace Halite with raw HTTP::Client; if I reuse the HTTP::Client instance between fibers, will I have trouble?
<FromGitter> <watzon> Ahh gotcha. Well yeah, the other suggestions will probably improve performance a lot.
<FromGitter> <watzon> In general you need to be careful with allocations
<FromGitter> <thelinuxlich> @didactic-drunk If I raise the worker count any further I get a lot of SSL I/O errors, HTTP EOF errors, SSL connection reset by peer, etc.
<FromGitter> <thelinuxlich> I think PG is not the problem because if I strip everything and just insert data it will go fast
<FromGitter> <thelinuxlich> the body I'm running .scan on is big (some MB)
<FromGitter> <thelinuxlich> but anyway in Node.js this isn't a problem so it shouldn't be in Crystal, right?
<FromGitter> <watzon> Yeah, try and do what I suggested with a connection pool. You can do something like this ⏎ ⏎ ```code paste, see link``` []
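Since the paste above is only a link, here's a rough sketch of the check-out/check-in idea using a buffered `Channel`; the class name and all details are assumptions, not the linked code:

```crystal
require "http/client"

# A tiny client pool: a buffered channel holds idle clients; fibers check
# one out, use it, and always return it.
class ClientPool
  def initialize(@host : String, size : Int32)
    @clients = Channel(HTTP::Client).new(size)
    size.times { @clients.send HTTP::Client.new(@host) }
  end

  # Check a client out, yield it, and check it back in even on error.
  def with_client
    client = @clients.receive
    begin
      yield client
    ensure
      @clients.send client
    end
  end
end

# pool = ClientPool.new("example.com", 10)
# 1000.times { spawn { pool.with_client { |c| c.get("/") } } }
```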
<FromGitter> <thelinuxlich> ok, actually I was using Halite just because of the follow redirect
<FromGitter> <watzon> @thelinuxlich the issue is likely a difference in underlying architecture. I'd be willing to bet that nodejs reuses requests, whereas with Halite you're creating a new instance every time.
<FromGitter> <thelinuxlich> Node.js uses keepalive too, I don't see how to do that in Crystal
<FromGitter> <watzon> Unfortunately not possible with the current `HTTP::Client` I don't think
<FromGitter> <watzon> I believe there's an issue for it. Don't know what's holding it up.
<FromGitter> <thelinuxlich> but even with Connection: close I don't think it should be *that* slower, my code is like 10x slower than Node.js
<FromGitter> <watzon> Like I said, it's the recreation of clients in each loop more than likely. Each time you invoke `Halite.get` it spins up a new `Halite::Client` instance in the background, which really slows things down and increases memory usage.
<FromGitter> <watzon> You're potentially creating up to 3000 `Halite::Client` instances simultaneously with your 1000 workers
<FromGitter> <watzon> Another thing that will speed things up is adding the `-Dpreview_mt` flag if you're not already
<FromGitter> <watzon> That will turn on multithreading
<FromGitter> <wyhaines> Node.js is quite a lot slower than Crystal, so my expectation is that once you fix the architectural problems (everyone else has covered them pretty well) with your Crystal code, you will see significantly improved performance.
<FromGitter> <didactic-drunk> More workers means more concurrent requests and more open file descriptors. You can probably hack `.close` into the HTTP requests and increase WORKERS.
<FromGitter> <didactic-drunk> Someone had worse performance with msgpack in crystal vs Ruby (I think). Maybe large JSON is causing a problem? Can you `curl` a file and benchmark the parsing without network calls?
<FromGitter> <thelinuxlich> JSONs are small, the http body used for regex is big
<FromGitter> <didactic-drunk> Are you compiling with --release -Dpreview_mt?
zorp_ has joined #crystal-lang
<FromGitter> <didactic-drunk> I think @watzon probably solved your problems except for potential network/file descriptor issues.
<FromGitter> <didactic-drunk> If it's still slow use a profiler.
<FromGitter> <watzon> Yeah that could definitely do it too
<FromGitter> <watzon> Speaking of queues though, I'm trying to figure out my own. This question inspired me to fix arachnid and get it current and there's a whole slew of things to update, including the way I implemented the url queue.
<FromGitter> <watzon> I was using `future` and putting them into a pool, then pulling them out when the pool reached a certain size, but tbh I don't know how this code ever worked lol.
<FromGitter> <thelinuxlich> @watzon will turning on mt without doing anything else add perf?
<FromGitter> <watzon> Perf? I can guarantee that just turning on mt won't fix your problems.
<FromGitter> <thelinuxlich> sure, but turning on mt I need to do something to make the code multithreaded right?
<FromGitter> <naqvis> you are doing concurrent executions via Fibers, `-Dpreview_mt` will enable parallelism.
<FromGitter> <naqvis> without this flag, all Fibers are run on single thread
<FromGitter> <naqvis> `mt` enables multi-threaded execution of Fibers
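A tiny sketch of what naqvis describes: `spawn` gives concurrency on a single thread by default, and the same code can run in parallel when compiled with `-Dpreview_mt` (worker count set via the `CRYSTAL_WORKERS` env var):

```crystal
# Four fibers computing squares; results are collected over a channel.
# Without -Dpreview_mt these fibers interleave on one thread; with it,
# they may run on multiple threads in parallel.
results = Channel(Int32).new

4.times do |i|
  spawn { results.send(i * i) }
end

sum = 0
4.times { sum += results.receive }
puts sum # 0 + 1 + 4 + 9 = 14
```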
<FromGitter> <wyhaines> @didactic-drunk It was me who was having performance problems with msgpack. They are 100% due to how the msgpack shard implements its IO reads. So, I just implemented some code myself to handle it, and zoom! Back to the races.
<FromGitter> <wyhaines> So I use the msgpack shard for ONLY packing and unpacking of data, and leave putting that stuff out to a socket, and getting it back from the socket to my own code.
<FromGitter> <naqvis> @wyhaines interesting topic. Is that due to the msgpack protocol itself, or was it the implementation that was the bottleneck?
<FromGitter> <wyhaines> Well, maybe a little of both. The problem with msgpack is that until you read the message, you don't know how much you have to read. So, there are times when one has to advance a single byte at a time. ⏎ ⏎ I haven't dug into the Ruby library (which uses a C extension) to see if they do anything clever to optimize for this, that the crystal library isn't doing, but since I am using msgpack simply for an
<FromGitter> ... efficient serializer, I just wrote a little framing code that sends the length of the packed message, followed by that message. So, it is reduced to two reads. One very small one of a fixed size to get the length, and then a second longer one to read the packed message.
<FromGitter> <naqvis> thanks for the details
<FromGitter> <wyhaines> I experimented with monkey patching the crystal library to read from the socket in big chunks and operate from an in-memory buffer as much as possible, but in the end it was too much work for too little gain compared to just adjusting my protocol a little bit, and that worked great.
postmodern has quit [Quit: Leaving]
sorcus has quit [Quit: WeeChat 2.8]
zorp_ has quit [Ping timeout: 258 seconds]
sorcus has joined #crystal-lang
<sorcus> Hi. :-)
<sorcus> - what wrong here?
<jhass> sorcus: the compiler is not smart enough for this. _value could be nil in the else branch, so it's nilable when being closured as something might change it until the proc is invoked
<jhass> just move the if check into the proc
<sorcus> jhass: Hmmm... - here in examples similar code says that variable can not be nil...
<sorcus> jhass: Oh, i missed this `or variables bound in a closure`...
<jhass> yeah :)
<sorcus> jhass: Thank you :-)
<jhass> yw
oddp has joined #crystal-lang
<mps> jhass: last night I rebuilt crystal with 0.35.0 static version and with your patches as they are, everything went ok
<mps> no need to cut part about windows in one of them
<mps> will these patches integrated in 0.35.1, do you know
<mps> will these patches be integrated in 0.35.1, do you know*
<jhass> great :)
<jhass> I'm afraid that's not in my hands, other core members gotta agree to that
<jhass> at the moment it doesn't look like there's great interest in doing so unfortunately
<mps> huh :(
<mps> reading your posts in last few days about Manas policy about push rights, I'm negatively surprised
<mps> I thought that crystal was more open
<jhass> well, fwiw I got them now :) #9508
<mps> nice, hope that this will be merged soon
<frojnd> I would like to create a function which has 2 arguments, the first mandatory, the second optional. 1) What kind of function should I create? Can you point me to docs? 2) Inside the function I would like to create a logic: if no second param then ... else ....
<frojnd> I know crystal has some neat definitions for functions just can't remember how are they called
<frojnd> Thank you!
<jhass> sure, keep asking if something is unclear :)
<FromGitter> <rishavs> I am trying to nest string interpolations using a foreach loop in the parent one. But I am not getting any output here. What am I doing wrong? ⏎
<jhass> yes that doesn't work, "each" does not return the block body value in any way. The minimal change to make that work would be: Better yet:
<FromGitter> <rishavs> Thanks @jhass!
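jhass's point in code form (a minimal sketch): `each` returns the receiver, so the interpolation gets the array itself, while `map` + `join` builds the string you want:

```crystal
items = ["a", "b"]

# `each` returns `items`, so this interpolates the array, not the <li> strings:
broken = "<ul>#{items.each { |i| "<li>#{i}</li>" }}</ul>"

# `map` collects the block results; `join` turns them into one string:
works = "<ul>#{items.map { |i| "<li>#{i}</li>" }.join}</ul>"

puts broken # <ul>["a", "b"]</ul>
puts works  # <ul><li>a</li><li>b</li></ul>
```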
<FromGitter> <rishavs> So, I was actually trying to benchmark nested ECRs vs nested string interpolations. I am used to vanilla js where I use nested interpolations often for creating web app views. ⏎ Anyway, pretty happy to say that ECR and string interpolations have roughly the same performance. ⏎ ⏎ ```code paste, see link``` []
<jhass> that's not too surprising considering they're both implemented in essentially the same way :)
<FromGitter> <rishavs> For some reason I was biased against ECRs, thinking that they must have some perf overhead. So I decided to check. This is the code I used.
<FromGitter> <rishavs> I love how I can benchmark things so quickly in Crystal whenever I have any doubts
<jhass> we can also just compare implementation :)
<jhass> for the latter you can trust the doc comment right above or dig deeper into what embed does
<jhass> to_s without argument comes from Object:
<jhass> so yeah, it's really the same thing
<frojnd> Error: $global_variables are not supported, use @@class_variables instead
<frojnd> Is this just out of date doc?
<frojnd> Then I change $my_var to @@my_var and I get: Error: can't use class variables at the top level
<frojnd> I'm just trying to define global variable
<oprypin> frojnd: no everything is correct
<frojnd> $urls = YAML.parse("./urls.yml")["urls"]
<frojnd> And I get Error: $global_variables are not supported, use @@class_variables instead
<oprypin> yea
<frojnd> Then again maybe I'm reading wrong docs :/
<frojnd> Also worth mentioning,... this is just a file with a couple of functions,... no classes
<frojnd> oprypin: so how do I define global variable?
<oprypin> you can't
<frojnd> That I can call from function
<frojnd> Ohh :)
<oprypin> module Foo; @@bar = 6
<oprypin> is the closest
<jhass> or, if you don't even plan on reassigning, just use a constant
<jhass> the doc you linked is talking about binding external global variables, defined in a library you link against
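Both suggestions together, as a sketch (the YAML content is inlined here so the example runs standalone; the module name `Store` is made up):

```crystal
require "yaml"

# A constant: computed once at startup, cannot be reassigned.
URLS = YAML.parse("urls:\n  - https://example.com\n")["urls"]

# A module class variable: the closest thing to a reassignable global.
module Store
  @@counter = 0

  def self.bump
    @@counter += 1
  end

  def self.counter
    @@counter
  end
end

puts URLS[0]       # https://example.com
Store.bump
puts Store.counter # 1
```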
<frojnd> jhass: Jeah thank you... Gonna read now a bit about modules
<frojnd> Managed to put everything inside one file
<raz> hmm is crystal supposed to work with libssl1.1?
<raz> (getting weird linker errors)
<jhass> if not we should fix it!
<jhass> so
<raz> yea, i'll test some more first
* raz is balls deep in dependency hell
<raz> could be something else is causing it
<raz> uff. aaaand it builds. took only a full day of chasing my tail.
* raz goes to grab a 2pm beer
<jhass> haha, do we want to know the story? :D
<raz> well, i'll tell it anyway :P the symlinked-shards part may actually be mildly interesting to some. sec
<raz> that's the gist basically. doesn't amount to much but took an eternity to find a combination that has recent enough library versions (zstd/sodium) because the auto-build in the shards failed in strange ways that i didn't feel like debugging.
<raz> plus relative refs in shard.yml are a bit of a headache (maybe some kind of vendoring mechanism would be good in the future)
<raz> anyway... next stop, turning this into a static build on alpine. but i'll need a few more beers for that. and aspirin.
<jhass> mmh
<jhass> /home/jhass/crystal/src/ undefined reference to `__sync_fetch_and_max_4'
<jhass> wanna trade?
<jhass> :P
<raz> pffft. just add __def_fetch_and_max_4, duh
<raz> :p
<jhass> the fun part is there's no mention of that in the entire codebase :P
<raz> how many times have we told you to stop randomly pulling other peoples branches :p
<raz> jk. that looks and sounds nasty. working on MT stuff?
<jhass> nah
<jhass> 32 bit arm
<jhass> I guess it's just an instruction unavailable there
<jhass> and we still didn't figure out a nice way to include compiler-rt
<raz> hm yea, that's beyond my area of expertise by at least 32 bits
<raz> looks easy enough to me. just include that
<jhass> well its macros all the way down
<raz> phew. well, at least the comment may give a hint; Different instantiations will generate appropriate assembly for ARM and Thumb-2 versions of the functions.
<jhass> ooor I link /usr/lib/clang/10.0.0/lib/linux/libclang_rt.builtins-armhf.a for now and worry later
<raz> sounds good. meanwhile i'll just pray to never bump into that type of error message
<FromGitter> <Blacksmoke16> FWIW, i handled it by making my extension reopen the type and make the serializer required, then redefine a `serialize` method to use the serializer
<FromGitter> <Blacksmoke16> so that if you dont require the serializer extension, it doesnt use it, once you do it does
<frojnd> crystal init app my-super-name-with-minuses is not so great :) I get module My::Super::Name::With::Minuses
<FromGitter> <Blacksmoke16> use `_`
<frojnd> Yeah, just did :)
travis-ci has joined #crystal-lang
<travis-ci> crystal-lang/crystal#20c6811 (ci/update - Temp: Use nightly in CI): The build passed.
travis-ci has left #crystal-lang [#crystal-lang]
<FromGitter> <ImAHopelessDev_gitlab> hi
<sorcus> ImAHopelessDev_gitlab hi :-)
zorp_ has joined #crystal-lang
<Elouin> Hi, I installed crystal on opensuse tumbleweed as described on the website and now i get on every zypper run: "Access to '' denied."
<frojnd> Interesting
<frojnd> I included module
<frojnd> At the top of the file
<frojnd> I then issued user_input = gets
<frojnd> puts get_verse user_input
<frojnd> And I get error: no overload matches 'get_verse' with type (String | Nil)
<FromGitter> <Blacksmoke16> `gets` can return `nil`
<frojnd> I mean I haven't even entered the id yet
<frojnd> It didn't even ask me for input and it's already complaining..
<FromGitter> <Blacksmoke16> prob fine to just do `gets.not_nil!`
<FromGitter> <Blacksmoke16> right, that was a compile time check
<frojnd> Ok,.. not sure why appending `.not_nil!` helped
<FromGitter> <Blacksmoke16> because it tells the compiler that you know it wont return `nil`
<FromGitter> <Blacksmoke16> i.e. removes `Nil` from the union
<frojnd> Aa
<frojnd> Smart
<FromGitter> <Blacksmoke16> usually `.not_nil!` is a bit of a smell, but in this case its fine, since you know it wont
<FromGitter> <Blacksmoke16> be nil*
<FromGitter> <watzon> Yeah antipattern for sure unless you're 100% sure it's not going to be nil, and even then it can be nice (for safety sake) to use a guard clause. But this should be fine.
<FromGitter> <watzon> Just don't make a habit of it 😉
<raz> sadly in some situations there is no way to avoid it
<FromGitter> <watzon> Sadly
<yxhuvud> There are cases, but as you get more experienced the number of times that happens gets rarer.
deavmi has quit [Quit: Eish! Load shedding.]
<FromGitter> <watzon> Definitely
<sorcus> If i increment a counter `@counter += 1` from multiple fibers / threads, will it be thread-safe by default? :-D
deavmi has joined #crystal-lang
<FromGitter> <Blacksmoke16> i doubt it
<sorcus> Blacksmoke16: X-)
<oprypin> why does nobody ever know about `read_line`
<oprypin> that's just strictly better, it can't return nil
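The options discussed above, side by side, as a sketch using an `IO::Memory` stand-in for STDIN so it's self-contained:

```crystal
io = IO::Memory.new("42\n")

line = io.gets        # typed String | Nil
puts line.not_nil!    # fine here, but raises if the value were nil

io.rewind
if l = io.gets        # flow typing: `l` is String inside the branch
  puts l
end

io.rewind
puts io.read_line     # raises IO::EOFError at end of stream instead of returning nil
```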
<FromGitter> <watzon> sorcus: probably better to use an `Atom`
<sorcus> watzon: You mean a code editor?
<FromGitter> <watzon> 😂
<sorcus> watzon: :-D
<FromGitter> <watzon> Sorry, `Atomic`
<sorcus> watzon: Ok, thanks :-)
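What watzon is suggesting, roughly (a sketch; the channel is only there to wait for all fibers to finish):

```crystal
# Plain `counter += 1` is a read-modify-write and is NOT safe under
# -Dpreview_mt; Atomic#add makes the increment indivisible.
counter = Atomic(Int32).new(0)
done = Channel(Nil).new

100.times do
  spawn do
    counter.add(1)
    done.send(nil)
  end
end

100.times { done.receive }
puts counter.get # 100
```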
deavmi has quit [Read error: Connection reset by peer]
deavmi has joined #crystal-lang
travis-ci has joined #crystal-lang
travis-ci has left #crystal-lang [#crystal-lang]
<travis-ci> crystal-lang/crystal#e1e2bbe (ci/update - Temp: Check distribution-scripts update): The build passed.
jhass has quit [*.net *.split]
jhass has joined #crystal-lang
jhass has left #crystal-lang [#crystal-lang]
jhass has joined #crystal-lang
olbat[m] has quit [Ping timeout: 246 seconds]
beepdog has quit [Ping timeout: 246 seconds]
hamoko[m] has quit [Ping timeout: 256 seconds]
ryanprior has quit [Ping timeout: 256 seconds]
return0e[m] has quit [Ping timeout: 256 seconds]
mistergibson has quit [Quit: Leaving]
<oprypin> nice, a new internet protocol. like https but more fun
<jhass> :P
<jhass> idk, my space key is hanging in case you didn't notice yet :D
HumanG33k has quit [Quit: Leaving]
travis-ci has joined #crystal-lang
<travis-ci> crystal-lang/crystal#5999ae2 (master - Release 0.35.1 (#9503)): The build passed.
travis-ci has left #crystal-lang [#crystal-lang]
<FromGitter> <watzon> I'm getting fun errors too lol
<FromGitter> <watzon> ```code paste, see link``` []
<FromGitter> <watzon> So that's nice
<FromGitter> <watzon> I'm thinking I might just rebuild arachnid from scratch
<oprypin> jhass: do you have libgc built with LARGE_CONFIG
<oprypin> or whatever the flag was, check win.yml
<jhass> probably not, it's whatever alarm's package is
<jhass> do you remember the tradeoffs?
<jhass> like why is that not a default setting?
<jhass> mmh, it just says "optimize for large (> 100 MB) heap or root set", not that it would change any limits?
<raz> brrr, speaking of packages, my sympathy for alpine is fading
<raz> pkg's don't stick around. you can basically only ever pin to "latest". which will constantly change under your feet
<jhass> as somebody running archlinux even servers, I can tell you that's much less of a problem than people make it
<jhass> *even on
<raz> well, for me it is one right now ¯\_(ツ)_/¯
<raz> ERROR: unsatisfiable constraints: crystal-0.34.0-r0:
<raz> breaks: world[crystal=0.34.0]
<raz> not sure if there ever was a 0.34.0 apk, but if there was, it's now gone
<jhass> IME debians "backport the patch but don't upgrade" hell causes way more trouble
<raz> well, at least on there i get to choose when i want to change pkg versions
<raz> on alpine the version you pinned today might be gone tomorrow
<jhass> actually no, because they just don't have the newer version you want to use :P
<raz> sarcasm detected :p
<raz> well in my case right now they don't have the *older* version i want to use :D
return0e[m] has joined #crystal-lang
<raz> cuts both ways
<raz> guess i'll have to compile my own crystal... sigh
<jhass> archlinux has an community run archive somewhere
<jhass> maybe alpine too
<raz> or is there perhaps a static crystal build somewhere? (is that even possible hmm)
<jhass> but it's just the binary, no proper distribution
<raz> oh nice! well, i only need the binary, i just want to compile my stuff
<jhass> yeah you won't be able to
<raz> thanks for shattering my dreams
<jhass> because it has no stdlib or libcrystal.a
<raz> why are computers still so hard.
<jhass> you can stick it into .build/crystal of a 0.34 git checkout and run make deps
<raz> yea nah. i really just want the "crystal build" that works on my mac to also work in a docker image
<raz> well ok, that part works. but now i want --static
<raz> i'm greedy
<raz> gonna give the crystal alpine image another shot, but last time it gave me weird issues when i tried to install the libs that i also need
<FromGitter> <Blacksmoke16> xml?
<oprypin> jhass: the flag just makes things work. definitely did on Windows
<raz> zstd and sodium
<oprypin> i think it's just smaller limits depending on int size
<jhass> not sure I feel like that's "just working" :P
<jhass> raz: I'd be surprised if really nobody is archiving old alpine packages
<raz> jhass: perhaps someone does, but how does that help when your docker images randomly break
<jhass> don't rebuild them every other day? :D
<raz> it's a known problem in alpine land anyway
<jhass> do a multistage build and only rebuild the application layer?
<jhass> or if there's an archive, just pull the packages from there
<jhass> but of course that gets messy as the dependencies might get ABI incompatible
<raz> aren't these the types of problems docker was meant to solve? :P
<raz> they keep promising us ponies and all we get is more horse apples (not sure how well that translates to english)
<jhass> I knew docker is a hoax from the start :P
<raz> yea. but the next layer of abstraction will finally solve it!
<raz> my faith is strong
<FromGitter> <j8r> And people are starting to put self-contained binaries in docker images
<FromGitter> <j8r> Not against it, it's just that it defeats the first main point of containers
* FromGitter * j8r feeling the container existential crysis
<jhass> I ran across the other day
<FromGitter> <j8r> jhass noise
<FromGitter> <j8r> I was seeking something like this for python, I found pyoxidizer
<oz> re: Alpine package disappearing, I heard that nix was about solving these kind of issues 🤔
<oz> (or Guix if you're less into haskell and more into schemes)
<oprypin> nice
<jhass> it's essentially a minimal snap/flatpak I guess
<jhass> raz: btw, 0.35 is only in edge, 3.12 still has 0.34
<sorcus> Can `spawn` be used for parallel jobs? X-)
<jhass> concurrent or parallel?
<sorcus> jhass: parallel.
<jhass> with -Dpreview_mt yes
<jhass> but note that's still sorta experimental
<sorcus> jhass: But this limited to number of cores, right?
<jhass> it runs on a thread pool, there's an ENV var to set the size of that iirc, but the default is core count, yes
<sorcus> jhass: So i can't run hundreds of jobs? :-(
<jhass> where did I say so?
<jhass> what's your payload anyways?
<jhass> if it's IO bound it'll block and Crystal will run something else meanwhile, even without -Dpreview_mt
<sorcus> jhass: i assumed :-D
<jhass> if it's CPU bound, what's the point of running more than number of cores at the same time anyways? They'll just compete with each other for the CPU
<sorcus> jhass: "what's your payload anyways?" - thousands of hash sums for strings.
<jhass> so CPU bound
<jhass> see above
<jhass> running a thousand of them in parallel will make things slower, not faster
<sorcus> jhass: Hmmm... I didn't think about this.
<jhass> parallelization is by no means a magic solution, it's a trade off
<jhass> you can easily make things slower compared to smart concurrent execution
<jhass> because parallelization has a higher synchronization overhead
<jhass> for most workloads
<sorcus> jhass: Ok, thanks for the explanation. :-)
zorp_ has quit [Ping timeout: 264 seconds]
<FromGitter> <wyhaines> @sorcus: If you are CPU bound, your limit is ultimately the number of cores, in any language. Depending on the source of your strings, you might be able to just run multiple processes -- 1 per core -- if you aren't using -Dpreview_mt, and get good results. For IO bound stuff, multiple threads can actually cost you performance. ⏎ ⏎ I have a project that I am working on that, in its heart of hearts, just
<FromGitter> ... receives data packets from myriad clients, does some mild magic to those data packets, and shoves them somewhere else. ⏎ ⏎ In my crude benchmarks, I can run 4 clients that each hammer a million messages to the server as fast as they can (with everything running under Ubuntu/WSL1 on a Windows 10 laptop), and because the major ... []
<FromGitter> <wyhaines> If I run it multithreaded, the performance drops.
<FromGitter> <wyhaines> It takes about 11 seconds to handle 4 million records sent by 4 clients.
<FromGitter> <wyhaines> In case that wasn't clear. Single threaded -- server handles at least 4000000 records in 8 seconds. Multithreaded, it handles them in about 11 seconds.
<sorcus> wyhaines: Wow, really cool. Thanks you. :thumbs_up:
<yxhuvud> wyhaines: depends a bit on what kind of io though. Disk io doesn't do async after all..
jhass changed the topic of #crystal-lang to: The Crystal programming language | | Crystal 0.35.0 | Fund Crystal's development: | GH: | Docs: | Gitter:
<jhass> duh
jhass changed the topic of #crystal-lang to: The Crystal programming language | | Crystal 0.35.1 | Fund Crystal's development: | GH: | Docs: | Gitter:
<yxhuvud> oh, .1 is out? Nice.
<FromGitter> <wyhaines> Sweet. *downloads 0.35.1*
<FromGitter> <watzon> Oh yay
<FromGitter> <watzon> I really do love how easy this is
bcardiff has joined #crystal-lang
bcardiff has quit [Client Quit]
<FromGitter> <dscottboggs_gitlab> meh, it's just patches. think I'll wait for it to hit the repos
<yxhuvud> what do you mean? I just got it through apt update.
<FromGitter> <dscottboggs_gitlab> I'm on manjaro. Repo maintainers give packages a couple weeks of no bug reports on the arch bug tracker before passing it on. Woes of having Crystal *actually* in your stable distro's repos
<FromGitter> <Blacksmoke16> snap ftw
<FromGitter> <Blacksmoke16> 😉
<FromGitter> <dscottboggs_gitlab> (rather than added as a PPA/snap)
<FromGitter> <dscottboggs_gitlab> not a big snap fan
<FromGitter> <dscottboggs_gitlab> Had too much trouble with it. Especially on non-ubuntu distros
<FromGitter> <Blacksmoke16> oh?
<FromGitter> <dscottboggs_gitlab> yeah I used to run ubuntu and had TG and FF installed from snap. Fonts kept breaking. I'd have to do some weird stuff that took several minutes and I hated it. Plus running into permissions weirdness. I get that that's a good thing because security and all but I still don't like to have to think about it.
<FromGitter> <Blacksmoke16> i really only use it for crystal and some other small stuf
<FromGitter> <Blacksmoke16> mostly cli stuff actually
<FromGitter> <dscottboggs_gitlab> When I tried snapd on manjaro I tried starting it before rebooting and it broke snapd (I think permanently for the installation)
<FromGitter> <Blacksmoke16> 😬
<FromGitter> <dscottboggs_gitlab> > mostly cli stuff actually ⏎ ⏎ Yeah I just use docker for that
<FromGitter> <dscottboggs_gitlab> I have to use docker all the time anyway so it's just easier that way
<FromGitter> <dscottboggs_gitlab> might not be so for people who aren't used to docker though
ua_ has quit [Ping timeout: 260 seconds]
ua_ has joined #crystal-lang
ua_ is now known as ua
<FromGitter> <thelinuxlich> @watzon tried using -Dpreview_mt, but I'm getting a lot of invalid pointers
<FromGitter> <watzon> Happens if things aren't thread safe
<FromGitter> <watzon> Sometimes even if they are. It's still in preview for a reason 😄
<FromGitter> <watzon> You actually inspired me to rework my own web crawler framework. Working on that right now.
<FromGitter> <thelinuxlich> @watzon I followed all your tips and some from this topic I opened in the forum:
<travis-ci> crystal-lang/crystal#262e090 (ci/update - Update CI to use 0.35.1): The build passed.
travis-ci has joined #crystal-lang
travis-ci has left #crystal-lang [#crystal-lang]
<FromGitter> <thelinuxlich> @watzon it's consuming just ~130mb RAM and the speed seems very good, when it ends I will tell you
<FromGitter> <watzon> Have you replaced Halite yet? With some more refactoring I can see no reason why you wouldn't be in the 10-20MB range.
<FromGitter> <thelinuxlich> yeah, only using HTTP::Client now, the updated gist, if you are curious:
<FromGitter> <thelinuxlich> I think it can't save more RAM due to the pools
<FromGitter> <watzon> Ahh nice, you took my advice and used `pool`
<FromGitter> <watzon> Yeah I didn't realize you were working with so many urls
<FromGitter> <watzon> You could save some RAM if you're willing to sacrifice a little speed. May not even need to sacrifice anything. I'd take the number of initial pools down to something like 10.
<FromGitter> <thelinuxlich> no, I actually want to trade RAM for speed
<FromGitter> <thelinuxlich> Oh, I forgot to use the new Log
<FromGitter> <watzon> What I'd actually do is try and see what the maximum size of your pools ends up being. Put the initial size really low, let it run, and then at the end `pp http_pools` and check what size they all end up being.
<FromGitter> <watzon> It's possible that you don't even end up using all that pool space and that you have unnecessarily allocated clients just sitting in there
<FromGitter> <thelinuxlich> but that's at bootstrap, so it won't affect performance, right?
travis-ci has joined #crystal-lang
travis-ci has left #crystal-lang [#crystal-lang]
<travis-ci> crystal-lang/crystal#b52e125 (master - Win CI: Avoid looking for globally installed libs (#9507)): The build passed.
<DeBot> (Win CI: Avoid looking for globally installed libs)
travis-ci has joined #crystal-lang
travis-ci has left #crystal-lang [#crystal-lang]
<travis-ci> crystal-lang/crystal#5f9c848 (ci/update - Update CI to use 0.35.1): The build passed.
<FromGitter> <watzon> It still will affect memory usage since those are being allocated at runtime
<FromGitter> <watzon> Granted if you're ok with high RAM usage it's fine. It won't kill anything, but I like to squeeze performance out of my apps where I can haha.
travis-ci has joined #crystal-lang
<travis-ci> crystal-lang/crystal#2b963f7 (ci/update - Perform TODO): The build passed.
travis-ci has left #crystal-lang [#crystal-lang]
bcardiff has joined #crystal-lang
<FromGitter> <bcardiff> @dscottboggs_gitlab your manjaro snap experience was prior May 2019? If so, I recall that there were some updates at that time regarding manjaro distro and its integration with snap.
bcardiff has quit [Client Quit]
<FromGitter> <dscottboggs_gitlab> > Sometimes even if they are. It's still in preview for a reason ⏎ No, seriously `-Dpreview_mt` is unsound and unsafe. Don't use it unless you're hoping to create more bug reports
<FromGitter> <dscottboggs_gitlab> @bcardiff not sure TBH. perhaps
<FromGitter> <watzon> Hopefully that will be fixed soon? I mean we are almost to v1.0.
<FromGitter> <dscottboggs_gitlab> @watzon the move to 1.0 is largely due to semantic stability. I haven't been around too much lately but AFAIK MT is still unsound for the forseeable future
<FromGitter> <watzon> Sad
<FromGitter> <dscottboggs_gitlab> Yeah, I was hoping that by reimplementing libcsp (a CSP-style MT lib written in C) I could help a bit, but I got stuck on getting tests to pass on my thread-safe RBQ implementation
<FromGitter> <dscottboggs_gitlab> (turns out channels are just a `RingBufferQueue(Atomic(T))`)
<FromGitter> <dscottboggs_gitlab> why doesn't this work?
<FromGitter> <Blacksmoke16>
<FromGitter> <Blacksmoke16> diff scope
<FromGitter> <dscottboggs_gitlab> funny, this works
<FromGitter> <dscottboggs_gitlab> ah thanks I was having a hard time figuring out where the scope was. that clears it up
oddp has left #crystal-lang ["part"]