ChanServ changed the topic of #crystal-lang to: The Crystal programming language | | Crystal 0.22.0 | Fund Crystal's development: | Paste > 3 lines of text to | GH: | Docs: | API: | Logs:
<ryanf> @bew presumably something like ?
<FromGitter> <bew> according to his implementation of the non-bang version, he can change the type of key or value; here you're not changing the type of the value
<ryanf> that seems kind of inevitable though? how could you mutate the value type of a hash without messing up existing references to it?
<ryanf> it seems like a Crystal version of transform_values! would always have to preserve the type of the values
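ryanf's point can be sketched in Crystal (illustrative only; `transform_values` was not in the 0.22 stdlib): building a new Hash is free to change the value type, while an in-place update must preserve it, because other references still see the original `Hash(String, Int32)`.

```crystal
h = {"a" => 1, "b" => 2}

# Non-bang style: build a fresh Hash, so the value type may change
# (Int32 -> String here) without affecting anyone holding `h`.
strings = {} of String => String
h.each { |k, v| strings[k] = (v * 10).to_s }

# Bang style: mutate in place, which must keep the value type --
# anything else would break code already typed against Hash(String, Int32).
h.keys.each { |k| h[k] = h[k] * 10 }

puts strings # => {"a" => "10", "b" => "20"}
puts h       # => {"a" => 10, "b" => 20}
```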
Yxhvd has quit [Remote host closed the connection]
Yxhuvud has joined #crystal-lang
zipR4ND has quit [Ping timeout: 268 seconds]
bjz_ has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
LastWhisper____ has quit [Read error: Connection reset by peer]
bjz has joined #crystal-lang
bjz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
_whitelogger has joined #crystal-lang
bjz has joined #crystal-lang
_whitelogger has joined #crystal-lang
<FromGitter> <elorest> It seems that I can set a variable to a constant then call methods on it or initialize it. I can even pass it into a function and initialize and call it. For some reason I can’t seem to save it as an instance variable though. Any ideas?
bazaar has quit [Quit: leaving]
bazaar has joined #crystal-lang
_whitelogger has joined #crystal-lang
bjz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
bjz has joined #crystal-lang
bjz has quit [Remote host closed the connection]
bjz has joined #crystal-lang
sz0 has joined #crystal-lang
Raimondii has joined #crystal-lang
Raimondi has quit [Ping timeout: 268 seconds]
Raimondii is now known as Raimondi
<FromGitter> <chronium> Does anybody know if there's something like this for Crystal?
elia has joined #crystal-lang
elia has quit [Client Quit]
elia has joined #crystal-lang
<FromGitter> <bew> @elorest you need to specify the types ?
yogg-saron has joined #crystal-lang
<FromGitter> <deepj> @bew That was incorrect information there. I’ve changed the description in the PR.
<oprypin> chronium, something like what? what's your use case?
<oprypin> sfml, sdl seem to have a superset of this functionality
elia has quit [Quit: Computer has gone to sleep.]
unshadow has joined #crystal-lang
<unshadow> Working with Fibers, should I "sleep x" or "Fiber.current.sleep(x)" ?
<oprypin> unshadow, sleep x
<unshadow> K, thanks :)
<unshadow> sleep will tell the Thread to move into a different Fiber right ?
yogg-saron has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<FromGitter> <chuckremes> unshadow, yes, it will suspend the current fiber and schedule another one to run.
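chuckremes's answer, sketched as runnable Crystal (assuming the 0.22-era `spawn`/`Channel` API): each `sleep` suspends the calling fiber and the scheduler resumes another runnable one.

```crystal
ch = Channel(Int32).new

3.times do |i|
  spawn do
    sleep 0.01 * (3 - i) # suspends this fiber; other fibers run meanwhile
    ch.send i
  end
end

# Fibers finish in reverse spawn order because of the staggered sleeps.
results = [] of Int32
3.times { results << ch.receive }
puts results # => [2, 1, 0]
```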
<unshadow> btw, something cool, I managed to get to 160k Concurrent Connections to my Crystal made reverse proxy
bjz has quit [Remote host closed the connection]
zipR4ND has joined #crystal-lang
elia has joined #crystal-lang
elia has quit [Quit: Computer has gone to sleep.]
<unshadow> When I get "GC Warning: Repeated allocation of very large block " is there a way to know where in the code this allocation happens?
_whitelogger has joined #crystal-lang
<RX14> unshadow, 160,000 fibers?
<unshadow> RX14: Yeah
<RX14> nice
<RX14> need to do any tricks?
<RX14> I had to change some sysctl variables to get over 32k fibers
<unshadow> My MS (Latency) was through the roof, but it worked
<unshadow> Oh, I have a specialized sysctl and limits.conf for that
<unshadow> I can post it if you wanna see
<RX14> I needed to change max_maps or something
<RX14> apart from that it was good
tzekid has joined #crystal-lang
<RX14> vm.max_map_count
<RX14> that was the one
<RX14> that's all net stuff
<RX14> i needed to increase vm.max_maps or just *starting* more than 32k fibers would fail
<unshadow> Yeah, as I said, reverse proxy: each fiber holds a socket
<unshadow> RX14: really ? I didn't see this issue
<unshadow> what OS?
<RX14> sure I get the utility of those but i'm just surprised you don't need to tweak max_map_count
<RX14> linux
<unshadow> What dist ?
<RX14> arch
<TheGillies> arch linux is only linux
<RX14> it's really not
<TheGillies> AUR is valhalla
<unshadow> max_map_count => This file contains the maximum number of memory map areas a process
<unshadow> may have. Memory map areas are used as a side-effect of calling
<unshadow> malloc, directly by mmap, mprotect, and madvise, and also when loading
<unshadow> shared libraries.
<RX14> unshadow, what does `sysctl vm.max_map_count` do for you?
<RX14> what's the curent value on your testing machine?
<unshadow> I'm running my server on Ubuntu , it was set to 65536 as default
<RX14> same here...
<unshadow> Weird, no "Can't spawn fiber" issues
<RX14> yeah that is weird
<unshadow> maybe you changed other stuff ?
<RX14> i don't think so
<RX14> unshadow, try spawning fibers in a loop which `sleep`
<RX14> and count how many
<RX14> this is what I got
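RX14's suggested test might look like this sketch: spawn fibers that do nothing but sleep, each of which reserves a stack mapping (which is roughly what runs into `vm.max_map_count`). The target count here is arbitrary; in the real test you would loop until spawning fails.

```crystal
target = 10_000
count = 0
while count < target
  spawn { sleep } # `sleep` with no argument suspends the fiber forever
  count += 1
end
puts count # => 10000
```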
<unshadow> Maybe because I changed the stuff in limits.conf ?
<RX14> no I changed all that too
<RX14> didn't work for me
<RX14> weird
<unshadow> Maybe Ubuntu and Arch just handle this a little differently
<RX14> well if it works
<RX14> you should celebrate :)
<RX14> maybe it's a debian-specific patch
<RX14> although I do remember having to do the same when I was testing on my debian 8 server
<RX14> I started a scaleway server with 64GiB of ram and got to something like 5 million fibers
<RX14> was using something crazy like 100TiB of virtual memory
<RX14> half the memory usage was in the kernel's page tables
<unshadow> On my Arch I get to 32731 using default configs
<RX14> having fibers with only 1 page of stack is really inefficient
<RX14> unshadow, yeah that's the same for me
<unshadow> Testing On ubuntu now
<unshadow> RX14: Yep, made sure, I got 65530 at max_map_count on the server, and running the loop with array size I get to 500K before losing patience lol
<RX14> that's so strange
<RX14> a good surprise
<RX14> but a surprise nonetheless
<unshadow> It's interesting to see how libevent actually handles switching between so many fibers
<unshadow> when 500k actually do some work
<unshadow> I'll try to push my reverse proxy higher
<unshadow> Not sure why on my Arch I'm getting the fiber error though
<unshadow> how high did you set it ?
<RX14> just insanely high
<RX14> high enough you'll never reach it
<unshadow> RX14: cool, did 99999999 now I'm getting to 554012 fibers before OOM XD
<FromGitter> <bew> awesome stuff!
<unshadow> Now spawning sockets with the fibers
<unshadow> 8 machines trying to create 65500 sockets against a single server
<RX14> where are you trying this?
<unshadow> Amazon :)
<unshadow> all machines at 21277 right now
<RX14> 8 machines sounds like way too much lol
<unshadow> so * 8
<unshadow> Sockets: [Opened: 28231, Failed: 2339, Requested: 150000]
<unshadow> Maximum I got, so 225848 concurrent sockets
<unshadow> So it's 225848 fibers, each holding a socket
<unshadow> RX14: Too much depends on the customer's needs XD
<unshadow> My company works only with enterprise, so ... that's +/- the requirements
<RX14> how do you do a reverse proxy with 1 fiber per socket?
<unshadow> It's 2 Fibers per socket TBH XD , read loop and write loop
<unshadow> Oh...
<unshadow> so it's 225848 * 2
<RX14> yup
<unshadow> fibers
<RX14> i thought so
<RX14> so ~500k
<RX14> 450
<unshadow> Yeah I guess
<unshadow> nice
<RX14> what's the limitation?
<RX14> memory?
<unshadow> Not sure, I think libevent just can't hold so much juggling on one Thread ?
<unshadow> I don't see any OOM or Too many open file errors
<unshadow> I'll do more tests
<RX14> unshadow, now add reuse_port
<RX14> and open 1 process per core
<RX14> and then see how far you get then haha
<unshadow> RX14: I want to, but then I need to move the socket pool and handlers from a mem array to external DB like redis or something
<unshadow> I'll look at that
<RX14> it does more than a simple reverse proxy?
<unshadow> so all processes can share the cookies, auth status , etc..
<RX14> oh ok so this isn't just a basic TCP proxy for demo purposes
<unshadow> RX14: Oh yeah :) DPI, rewrite, cookie and session, authentication, and attack filtering
<RX14> oh that's cool
<RX14> really cool
<unshadow> RX14: Nope, full fledged product
<unshadow> yeah it's cool
<unshadow> And it's cooler we moved from Ruby to Crystal for it
<RX14> make sure you test it in production...
<RX14> i've had weird issues with crystal before
<unshadow> I'll leave that to the customers
<unshadow> XD
<RX14> ...
<unshadow> Kidding hahah
<RX14> yeha I thought so
<RX14> muscle memory always typos yeah -> yeha lol
<unshadow> It's battle tested, I had lots of issues, changes of architecture, new stuff to think about
<RX14> that's super cool that crystal has been so stable for you
<RX14> I deployed and needed to restart it every 2 hours :/
<unshadow> I use the msgpack-crystal gem too, It's really helping out
<RX14> shard*
<RX14> :)
<unshadow> LOL
<unshadow> true sorry
<unshadow> ruby dev
<unshadow> RX14: Did you manage to trace the issue for why restarts were needed?
<RX14> no
<unshadow> would it just get stuck ? or OOM ?
<RX14> it just seemed that after a while all outgoing connect()s would time out
<unshadow> I had massive issues with, and IO.peek
<RX14> yeah you should never use
<RX14> IO#peek is cool though
<unshadow> It blocked all my Fibers
<unshadow> not sure why
<RX14> because it's just a dumb wrapper around the select syscall
<RX14> doesn't use libevent
<unshadow> Oh ...
<RX14> I should really send in a PR to remove it
<RX14> or at the very least deprecate
<RX14> actually why don't I do that now
<unshadow> RX14: do you work at manas ?
<RX14> nope
unshadow has quit [Quit: leaving]
tzekid has quit []
<FromGitter> <drosehn> or maybe rename it to `select_when_not_using_fibers` ?
cyberarm has quit [Quit: ZNC -]
cyberarm has joined #crystal-lang
kochev has joined #crystal-lang
<FromGitter> <bew> How do I use the macro method `TypeNode#instance_vars`? I always get an empty array:
<RX14> hmm
<RX14> looks like you got the metaclass typenode
<RX14> or maybe not
<RX14> nope you probably need a macro finished then
<RX14> nope no idea
<FromGitter> <bew> I was looking for that `finished` name, but it's not better..
<RX14> yeah
<RX14> weird
<RX14> damnit
<RX14> here I was thinking that oprypin was wrong about the SSL thing being a flush issue, then I realised I was using crystal instead of bin/crystal
<RX14> yup adding a flush works
<FromGitter> <bew> @RX14 I still don't fully understand why, but using `macro def` worked:
<RX14> weird
<FromGitter> <bew> some "doc" here
<FromGitter> <bew> see the last comment
<FromGitter> <bew> this is... wow
<FromGitter> <bew> that what I was looking for :P
<FromGitter> <bew> I can get instance var names & types, but it doesn't seem possible to get the default value from a `MetaVar`...
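What bew ended up with can be sketched like this (on current Crystal a plain `def` containing a macro expression works; in the 0.22 era the same thing needed `macro def`). The `Point` class is a made-up example.

```crystal
class Point
  @x = 1
  @y = 2.0

  # The macro expression is expanded per concrete type, after the type is
  # complete, so @type.instance_vars is fully populated here.
  def ivar_names
    {{ @type.instance_vars.map(&.name.stringify) }}
  end
end

puts Point.new.ivar_names # => ["x", "y"]
```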
<RX14> does crystal really have incompatible defaults between SSL::Socket::Server and SSL::Socket::Client
<RX14> nope, just me
<crystal-gh> [crystal] RX14 opened pull request #4386: Fix WebSocket negotiation (master...bugfix/ssl-socket)
<RX14> fixed websockets finally
<FromGitter> <sdogruyol> what
<FromGitter> <bew> :clap:
<FromGitter> <sdogruyol> how?
<RX14> how?
<RX14> I debugged it for an hour
<RX14> then oprypin gave me the solution in 2 minutes over irc
kochev has quit [Ping timeout: 240 seconds]
<FromGitter> <bararchy> haha
<FromGitter> <sdogruyol> i mean what's broken :P
<FromGitter> <sdogruyol> or how :D
<RX14> it's a 1-line fix
<RX14> read the PR
<RX14> it's hilariously simple
<FromGitter> <sdogruyol> simple is always complex
<FromGitter> <bew> well, here simple is really simple x)
<FromGitter> <bararchy> Is there a way to set `IO.sync = true` or something to always flush ?
<RX14> yes but you don't want to
<FromGitter> <bararchy> why ?
<RX14> not if you want performance
<FromGitter> <bararchy> Oh
<FromGitter> <bararchy> because it will block ?
<RX14> if you always wanted sync = true why would it be an option :)
<RX14> we buffer writes to sockets to reduce syscalls
<RX14> every syscall has a large overhead
<RX14> they require an interrupt, the kernel has to do a lot, the CPU cache gets busted a bit
<RX14> in general you want to minimise them
<FromGitter> <bararchy> Better yet then, can I decide on a timeout for flush? As in, if no more data was written by X time, then flush?
<RX14> not currently
<RX14> not in the stdlib
<RX14> you could spawn a fiber which did it every 10ms or so
<RX14> but that'd just mask issues
<RX14> instead of doing the correct thing
<RX14> which is to flush properly
<FromGitter> <bararchy> btw, when a Fiber flushes, as it is an IO action, it will switch to working on another Fiber, right?
<Papierkorb> maybe
<Papierkorb> it does when it would block
<FromGitter> <bararchy> So as long as I multi-fiber it's not that big of a performance hit
<RX14> flush can't be a performance hit because all it does is do the write calls that would be happening anyway
<RX14> if the buffers weren't there
<RX14> it could be a performance hit if you flush after every write but it still wouldn't be too bad
<RX14> just an extra memcopy
<FromGitter> <bararchy> I'll need to experiment with that, I have lots of small writes, and I see that sometimes it will give a little latency to the client
<RX14> well you need to figure out the points where you actually want to be sending data to the client
<RX14> it's usually just before you wait for a response from the other end
<RX14> or at the end of sending (but you should use .close then anyway, which would call flush)
<RX14> flushing is all about making sure the client gets the data you've just written *sometime* in the future
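The pattern RX14 describes can be sketched with a pipe standing in for a socket (both are buffered `IO::FileDescriptor`s, so the behavior is the same): several small buffered writes, one flush right before you expect the peer to act on them.

```crystal
reader, writer = IO.pipe

# Several small buffered writes...
writer << "HELLO "
writer << "from "
writer << "crystal"
# ...then one flush just before waiting on the other end.
writer.flush
writer.close # close also flushes anything still buffered

data = reader.gets_to_end
puts data # => "HELLO from crystal"
```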
<Papierkorb> bararchy, if you have something like many small writes in a loop or so, you can try writing first into a IO::Memory and then send off its buffer as a whole. Sometimes it helps.
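Papierkorb's suggestion as a sketch: gather many tiny writes into an `IO::Memory`, then hand the whole buffer over in a single `write` call (the second `IO::Memory` below just stands in for the real socket).

```crystal
buffer = IO::Memory.new
1_000.times { |i| buffer << i << ',' }

sink = IO::Memory.new      # stands in for the real socket
sink.write buffer.to_slice # one big write instead of ~2000 tiny ones

puts sink.size # => 3890 (2890 digit chars + 1000 commas)
```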
<RX14> Papierkorb, honestly IO::Buffered should be pretty much the same as IO::Memory there
<Papierkorb> In practice this was not the case for me.
<RX14> how much?
<FromGitter> <bararchy> Thanks RX14, I'll go over the project and profile behavior with regards to flush/placement etc.
<Papierkorb> no numbers, months ago. it also helped untangle some other code paths, but it was a noticeable improvement
<RX14> hmm
<RX14> interesting
<RX14> Papierkorb, how much of these small writes were there
<FromGitter> <bararchy> Papierkorb Interesting idea
<RX14> was it more or less than the IO::Buffered buffer size?
<Papierkorb> many many writes of a few bytes
<FromGitter> <bararchy> What's the IO::Buffered buffer size?
<RX14> 8KiB
<FromGitter> <bararchy> Can I change that ?
<RX14> not currently
<FromGitter> <bararchy> Hm.... I would love 32k
<RX14> why?
<RX14> that's half the L1 cache!
<FromGitter> <bararchy> Because in other cases I got file uploads and it's a waste to send small packets instead of big ones
<Papierkorb> copying 1x32K is most likely faster than copying 4x8K
<FromGitter> <bararchy> or maybe I'll just use Nagle
<FromGitter> <bararchy> and let kernel handle it
<RX14> well the ideal would be to have it configurable
<FromGitter> <bararchy> True
<RX14> as the size would be highly dependent on the application
<RX14> but 32K is probably quite a big default
<Papierkorb> too large, 8KiB sounds fine as default without more hard data
<FromGitter> <bararchy> Yeah it's not a one-size-fits-all scenario
<Papierkorb> Maybe writing into a IO::Memory first could help you in this case then
<FromGitter> <bararchy> Maybe as `IO.buffer_size` or something
<RX14> i'd say it'd be microseconds at best
<FromGitter> <bararchy> Would it? The socket will still send it 8K at a time
<Papierkorb> for low latency applications that's plenty
<Papierkorb> Nope, when you're writing a larger slice you sidestep the buffer
<RX14> sure but if you know you need to save microseconds on your latency you likely wouldn't be asking here
<RX14> or using a GCed language for that matter
<Papierkorb> GC isn't the issue, more often than not you have free time in between requests where you can free stuff easily
<FromGitter> <bararchy> Not true :) ⏎ Sleek as Ruby, fast as C no ? ;)
<Papierkorb> Can be faster in result than not having a GC approach to it for this case
<RX14> for throughput sure
<RX14> but latency? people with latency-sensitive apps would be much better using something like rust
<RX14> and traditionally use either C++/C
<FromGitter> <mverzilli> or think of their particular context of use, like here:
<FromGitter> <bew> lool
<FromGitter> <sdogruyol> What's up
pduncan has quit [Ping timeout: 240 seconds]
<RX14> that's a particularly hilarious message
<crystal-gh> [crystal] mverzilli closed pull request #4369: Pass Severity instead of String to Logger::Formatter (master...fix-logger-formatter-severity)
kochev has joined #crystal-lang
<FromGitter> <deepj> I’d like to implement this plugin system in Crystal -> But I don’t know how I can store a module in a hash like it's done there.
<FromGitter> <deepj> I’ve tried this but without any success
<FromGitter> <deepj> ```code paste, see link``` []
kochev has quit [Remote host closed the connection]
<FromGitter> <deepj> But it causes this error
<FromGitter> <deepj> ```code paste, see link``` []
<oprypin> deepj, why `Class`? Can't it be more specific? `Module` might just work
yogg-saron has joined #crystal-lang
<FromGitter> <bew> `Module` doesn't exist
<oprypin> mkay
<oprypin> deepj, let's think of it this way: if you're making a plugin system, you probably expect these modules to implement some methods
<oprypin> so why not make a concrete module with abstract methods and accept those
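oprypin's suggestion, sketched (the `Plugin`/`Upcase`/`Reverse` names are hypothetical): a module with abstract methods acts as the interface, and instances of including classes go into a Hash typed by that module.

```crystal
module Plugin
  abstract def call(input : String) : String
end

class Upcase
  include Plugin

  def call(input : String) : String
    input.upcase
  end
end

class Reverse
  include Plugin

  def call(input : String) : String
    input.reverse
  end
end

# A module is a valid Hash value type; any includer fits.
PLUGINS = {} of String => Plugin
PLUGINS["upcase"] = Upcase.new
PLUGINS["reverse"] = Reverse.new

puts PLUGINS["upcase"].call("roda")  # => "RODA"
puts PLUGINS["reverse"].call("roda") # => "ador"
```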
<FromGitter> <deepj> @oprypin Yes, `Module` doesn’t exist. It was my first surprise. I’d expect there to be a Module object in Crystal, like in Ruby
<FromGitter> <bew> @deepj after reading your linked post on Roda, it seems that it relies heavily on dynamic code evaluation (method overrides, etc..) which is not supported by Crystal.. Plus, as Crystal allows you to re-open any class/module and redefine everything, I don't see where/how something like Roda could be useful in Crystal
Yxhuvud has quit [Remote host closed the connection]
Yxhuvud has joined #crystal-lang
<Papierkorb> bew, because Roda allows you to specifically override stuff in certain instances only
<Papierkorb> if your "plugin system" were to simply override stuff, that'd make it application global
<FromGitter> <bew> "Roda allows you to specifically override stuff in certain instances only" hmmm ok, then I have no idea how it could possibly be done in crystal (if it's even possible!)
<Papierkorb> inheritance and interface models
<Papierkorb> to some extent. However, Roda tries hard to use something like that so you still can use `super` and the like
elia has joined #crystal-lang
yogg-saron has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
bmcginty_ has quit [Ping timeout: 260 seconds]
yogg-saron has joined #crystal-lang
yogg-saron has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
sjaustirni has joined #crystal-lang
elia has quit [Quit: (IRC Client:]
zipR4ND1 has joined #crystal-lang
zipR4ND has quit [Ping timeout: 240 seconds]
Raimondii has joined #crystal-lang
Raimondi has quit [Ping timeout: 268 seconds]
Raimondii is now known as Raimondi
sjaustirni has quit [Remote host closed the connection]
bmcginty has joined #crystal-lang
asterite_ has joined #crystal-lang
RX14- has joined #crystal-lang
mjblack- has joined #crystal-lang
miketheman_ has joined #crystal-lang
jhass|off has joined #crystal-lang
RX14 has quit [*.net *.split]
Lex[m]1 has quit [*.net *.split]
jhass has quit [*.net *.split]
asterite has quit [*.net *.split]
mjblack has quit [*.net *.split]
thelonelyghost has quit [*.net *.split]
TheGillies has quit [*.net *.split]
miketheman has quit [*.net *.split]
asterite_ is now known as asterite
mjblack- is now known as mjblack
miketheman_ is now known as miketheman
jhass|off is now known as jhass
onec has joined #crystal-lang
TheGillies has joined #crystal-lang
Lex[m]1 has joined #crystal-lang
thelonelyghost has joined #crystal-lang
onec has quit [Remote host closed the connection]
onec has joined #crystal-lang
onec has quit [Ping timeout: 260 seconds]
pduncan has joined #crystal-lang
pduncan has quit [Ping timeout: 258 seconds]
onec has joined #crystal-lang
zipR4ND1 has quit [Ping timeout: 240 seconds]