<crystal-gh>
[crystal] MakeNowJust opened pull request #5354: Fix the scope to look-up the constant as the receiver type of the macro (master...fix/crystal/lookup-scope-in-macro) https://git.io/vbnsz
DTZUZO has joined #crystal-lang
<crystal-gh>
[crystal] faustinoaq opened pull request #5355: Avoid empty author and email (master...patch-1) https://git.io/vbnGX
<FromGitter>
<Lispre> I love crystal
<FromGitter>
<cyclecraze_twitter> Can anyone provide a status update on 1.0?
aroaminggeek[awa is now known as aroaminggeek
aroaminggeek is now known as aroaminggeek[awa
DTZUZO has quit [Ping timeout: 260 seconds]
aroaminggeek[awa is now known as aroaminggeek
illyohs has quit [Ping timeout: 240 seconds]
DTZUZO has joined #crystal-lang
snsei has joined #crystal-lang
snsei has quit [Remote host closed the connection]
snsei has joined #crystal-lang
rohitpaulk has joined #crystal-lang
snsei has quit [Ping timeout: 240 seconds]
<FromGitter>
<jwaldrip> Been optimizing my router implementation. The following benchmarks compare kemal <-> orion (mine), why would the req/s be higher in orion, but the latency be higher too? ⏎ ⏎ `````` [https://gitter.im/crystal-lang/crystal?at=5a27813387680e6230d82c7c]
aroaminggeek is now known as aroaminggeek[awa
rohitpaulk has quit [Ping timeout: 260 seconds]
snsei has joined #crystal-lang
rohitpaulk has joined #crystal-lang
aroaminggeek[awa is now known as aroaminggeek
rohitpaulk has quit [Ping timeout: 255 seconds]
alex`` has joined #crystal-lang
rohitpaulk has joined #crystal-lang
snsei has quit [Remote host closed the connection]
<FromGitter>
<yxhuvud> my guess is that you create more objects that invoke the GC more often
DTZUZO has quit [Ping timeout: 248 seconds]
<FromGitter>
<yxhuvud> It would be nice if that thing showed the median latency too - averages aren't that interesting for latency, as the distributions tend to make averages irrelevant
rohitpaulk has quit [Ping timeout: 248 seconds]
rohitpaulk has joined #crystal-lang
claudiuinberlin has joined #crystal-lang
mark_66 has joined #crystal-lang
Ven`` has joined #crystal-lang
Ven`` has quit [Client Quit]
<FromGitter>
<yxhuvud> and measuring stdev for latency is utterly useless, as latency basically never has a normal distribution.
dragonkh has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
aroaminggeek is now known as aroaminggeek[awa
rohitpaulk has quit [Ping timeout: 240 seconds]
rohitpaulk has joined #crystal-lang
Papierkorb_ has joined #crystal-lang
snsei has quit [Ping timeout: 255 seconds]
<FromGitter>
<marin117> Hi everyone
<FromGitter>
<marin117> can anybody help with creating JSON of JSONs
<FromGitter>
<marin117> to be precise
<FromGitter>
<marin117> JSON with array of JSONs
<FromGitter>
<crisward> Just found this issue when passing bcrypt a non-bcrypt hash - https://play.crystal-lang.org/#/r/37f7 I'm guessing this is a bug WDYT?
<Papierkorb_>
crisward, though that's not a bug: it expects a correctly formatted string. Hashing a password is what `.create` is for. It can't reasonably distinguish a hash string from a password string
<FromGitter>
<marin117> and this is not working on client
<Papierkorb_>
Please always attach the error message(s) you receive
<Papierkorb_>
In any case, please use JSON.mapping
<Papierkorb_>
It'll make that *much* easier to use, while giving you free type safety.
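The JSON.mapping advice above can be sketched for the "JSON with an array of JSONs" question; this is Crystal-only, and the type and field names (`Order`, `Item`, `qty`) are hypothetical, invented for illustration:

```crystal
require "json"

# A hypothetical nested mapping: an Order whose "items" field is an
# array of Item objects (i.e. a JSON object containing an array of
# JSON objects), with type safety for free.
class Item
  JSON.mapping(
    name: String,
    qty: Int32,
  )
end

class Order
  JSON.mapping(
    id: Int32,
    items: Array(Item),
  )
end

order = Order.from_json(%({"id": 1, "items": [{"name": "a", "qty": 2}]}))
order.items.first.name # => "a"
```

Parsing errors then surface as typed exceptions instead of nil surprises at the call site.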
<oprypin>
marin117, supposed to be `response.body_io.not_nil!.gets_to_end`
<oprypin>
(what a terrible construct tho)
<Papierkorb_>
Or just use `response.body`
<FromGitter>
<marin117> @oprypin yeah I know, but hopefully this is just for now, it is just prototype version (kinda)
<FromGitter>
<marin117> @Papierkorb thank you! I will try something like that
<oprypin>
what, you want to get an error just for now?
<crystal-gh>
[crystal] ysbaddaden opened pull request #5356: Fix bcrypt hard limit on passwords to 71 bytes (master...std-fix-bcrypt-password-size-to-maximum-71-bytes) https://git.io/vbnPY
<FromGitter>
<Sija> hi @all! anyone know when Crystal v0.24 is finally planned?
<FromGitter>
<crisward> @Papierkorb I've added this to my code, to check the string before passing it. It's probably not perfect, but works for my usecase ⏎ ⏎ ``` def self.is_bcrypt(str) ⏎ str[0,4]=="$2a$" && str.size > 40 ⏎ end``` [https://gitter.im/crystal-lang/crystal?at=5a27c6e73ae2aa6b3f91f7e0]
<oprypin>
:|
<FromGitter>
<crisward> I'll raise an issue
encrypted has joined #crystal-lang
encrypted has quit [Client Quit]
encryptedf has joined #crystal-lang
<Papierkorb_>
crisward, that doesn't belong in the BCrypt module at all
<Papierkorb_>
Please don't use it on your end either.
<Papierkorb_>
Guessing and Security do NOT go together
<Papierkorb_>
Simply use the correct method for each use-case. You know in your code what you have anyway. If not, fix that so you do know.
<oprypin>
agree
alex`` has quit [Quit: WeeChat 1.9.1]
<FromGitter>
<yxhuvud> It could be a good idea to *verify* that it is indeed having the correct format somewhere though.
<Papierkorb_>
Yeah that's what I complained about above
<FromGitter>
<yxhuvud> ie it is good for generating good error messages, not for figuring out what to do
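yxhuvud's point - verify the format up front purely to produce a good error message - can be sketched like this. `looks_like_bcrypt?` and the regex are hypothetical, not a stdlib API, and the snippet is written in the Ruby-compatible subset of Crystal:

```crystal
# Hypothetical sanity check (not part of the stdlib): verifies the
# modular-crypt layout of a bcrypt digest ($2a$ / cost / 53 chars of
# salt+digest), so a malformed value from the DB yields a clear error
# instead of a crash deeper inside the library.
BCRYPT_DIGEST = /\A\$2[aby]\$\d{2}\$[.\/A-Za-z0-9]{53}\z/

def looks_like_bcrypt?(str)
  !(str =~ BCRYPT_DIGEST).nil?
end

looks_like_bcrypt?("$2a$10$" + "a" * 53) # => true
looks_like_bcrypt?("plaintext-password") # => false
```

Per the discussion above, this belongs in application-side error reporting, never as a branch deciding which security path to take.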
alex`` has joined #crystal-lang
rohitpaulk has quit [Ping timeout: 240 seconds]
<FromGitter>
<crisward> It's just a sanity check, to make sure I don't get an uncaught error when using the bcrypt lib. A bit like checking an email is valid by whether it has an '@' in it. It just results in a 'your username and password have not been found'. The hashes are coming from the database so should be ok. Some had been 'imported' from elsewhere, hence the issue. The db has like 5 accounts in it, so not too concerned.
<Papierkorb_>
I still wouldn't want that to even exist in the stdlibs BCrypt. There's simply no reasonable use-case outside of a one-time transfer from a legacy system.
<Papierkorb_>
Those who "know security" wouldn't use it, as they view it as insecure and not worthwhile. Those who "don't know security" don't understand the inherent risks of carelessly using it, which makes it easier for them to create security vulns in the authentication systems they'll inadvertently write at some point
<Papierkorb_>
Even if documented, the users in the latter category tend to simply not read it. For many things I say "Their loss", but that doesn't apply if the 'reasonable' use-cases are already slim
<FromGitter>
<crisward> sorry, we're talking at cross purposes. I'm not proposing we add that crap to standard lib. It's just a local sanity check I'm doing.
<FromGitter>
<crisward> in the standard lib, just checking the parts.size before examining them would be enough
<Papierkorb_>
Yeah for that it's fine; I was also referring to the general case
rohitpaulk has joined #crystal-lang
encryptedf has quit [Ping timeout: 260 seconds]
rohitpaulk has quit [Ping timeout: 264 seconds]
rohitpaulk has joined #crystal-lang
DTZUZO has joined #crystal-lang
foxxx0 has joined #crystal-lang
<foxxx0>
hey, what do i need to do in order to pass an Array(String) to a function?
<foxxx0>
i receive the following error: error| instantiating 'part1(Array(String), Hash(String, Int32))'
<Papierkorb_>
foxxx0: Your code?
<foxxx0>
do i need to change something at my def part1(input, seen)?
<foxxx0>
do you participate at the advent of code? it might be a spoiler! :D
<Papierkorb_>
You need to handle that case when it is
<foxxx0>
i need to find the max value in the Array and its index
<Papierkorb_>
How depends on your need. You can `raise if i.nil?`, or `return if i.nil?` .. or default `i = foo.index{ .. } || 0`
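The three options Papierkorb lists for a nilable `Array#index` result can be shown concretely; the snippet is in the Ruby-compatible subset of Crystal, with made-up values:

```crystal
values = [3, 7, 1]

# Default when nothing matches: `|| 0` collapses nil to a fallback index.
i = values.index { |v| v > 100 } || 0 # => 0

# Raise explicitly when a match is required; after the guard the
# compiler knows j can no longer be nil.
j = values.index { |v| v > 5 }
raise "no element above 5" if j.nil?
j # => 1
```

Which option fits depends on whether "not found" is an error, an early return, or a sensible default in the surrounding algorithm.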
<Papierkorb_>
Yes, but what if you don't find anything?
<Papierkorb_>
"Impossible with that algorithm" doesn't exist, you have to prove it to the compiler
<foxxx0>
hm hm
<RX14>
wow thats inefficient lol
<RX14>
iterating for the max
<RX14>
then iterating again to find its index
<foxxx0>
in ruby with a hash it's actually pretty fast
<Papierkorb_>
RX14: I already pointed out the potential for refactoring :P
<foxxx0>
surprisingly fast even
<Papierkorb_>
I mean it being slow isn't even the real problem with the code
<foxxx0>
please throw feedback at me
<foxxx0>
always happy to learn
<Papierkorb_>
DRY would be a start
<Papierkorb_>
Using enumerators instead of hand-rolled while loops as a second step
<Papierkorb_>
as in doing stuff like `blocks.downto(1){ .. }`
<Papierkorb_>
And then recheck if a standard #each, or similar, doesn't suffice
<Papierkorb_>
In other words: if you're about to type `while`, stop and check your algorithm - do you actually need full control over the execution of the loop, or does one of the various #each-esque methods do the trick more easily?
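The `downto` shape suggested above can be sketched in the Ruby-compatible subset of Crystal; no hand-rolled `while` with a manual counter is needed:

```crystal
blocks = 4

# Block-less #downto returns an iterator; #to_a materializes it.
countdown = blocks.downto(1).to_a # => [4, 3, 2, 1]

# A plain #each where no index bookkeeping is needed:
total = 0
countdown.each { |n| total += n } # total => 10
```

The point is that the loop's termination and stepping live in the method name, not in mutable counter state scattered through the body.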
<RX14>
oh wow you use += 1 in the right hand side of a ternary expression
<RX14>
thats just horrible
<Papierkorb_>
And it's also duplicated few lines down
<Vexatos>
does the += function return the old or the new value in crystal
<RX14>
new
<Vexatos>
good
<Vexatos>
>_>
<RX14>
well duh
<Vexatos>
I've seen worse
<RX14>
because it's foo = foo + x
<Papierkorb_>
`a += b` == `a = a + b`, thus the new one is the reasonable consequence
<RX14>
and = returns the rhs
<Vexatos>
actually it's foo = foo + (x)
<RX14>
ok Vexatos
<RX14>
you can go back to #WAMM now
<Papierkorb_>
It's not, it's the AST.
<RX14>
also this
<Vexatos>
huh.
<foxxx0>
you would rather use: i = ( i == (banks.size - 1) ? 0 : (i + 1) )?
<RX14>
i'd rather you used an if
<Vexatos>
what would that be? Like 0 if i == banks.size - 1 else i + 1
<RX14>
ternary statements with mutations AND ternary statements which don't have their expression value used are terrible
<RX14>
no Vexatos
<Vexatos>
I don't even know crystal syntax :3
<RX14>
if i == (banks.size - 1)
<RX14>
i = 0
<RX14>
else
<RX14>
i += 1
<RX14>
end
<foxxx0>
yeah
<Vexatos>
that looks sane
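RX14's if/else above, plus the modulo idiom that expresses the same wrap-around in one line, in the Ruby-compatible subset of Crystal (`banks` is a made-up example array):

```crystal
banks = [0, 2, 7, 0]
i = 3

# RX14's form: explicit branch, no ternary, no mutation buried in an expression.
if i == banks.size - 1
  i = 0
else
  i += 1
end
i # => 0

# Equivalent wrap-around via modulo:
j = 3
j = (j + 1) % banks.size # => 0
```

The modulo form is terser; the if form makes the wrap condition explicit. Either beats a ternary whose value is discarded.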
<Papierkorb_>
foxxx0: I stated at the beginning, that I don't participate in the Advent Of Code, remember? Now, looking at your code, I have *no* idea what's going on. I can't infer what your algorithm is solving.
<Vexatos>
RX14, selene would just have i += i == banks.size - 1 and 0 or 1
<Vexatos>
:3
<foxxx0>
publicly visible
<foxxx0>
that's the task
<RX14>
yes but Vexatos thats also unreadable
<RX14>
thats just bad
<RX14>
lua is bad
<Vexatos>
foxxx0, that task was super slow for me because I store a new copy of the array every single iteration :D
<foxxx0>
Papierkorb_: and in this case the endless loop is desired until an abort condition is met
<RX14>
at least you're not using bogosort to find the max Vexatos
<foxxx0>
Vexatos: my ruby solution perform surprisingly well
<Papierkorb_>
And that I can't at all might either be because 1) I don't know the underlying algorithm principles or 2) the code is hard to read. 1) is always a possibility, but I guess 2) is the case. "quick & dirty" code doesn't teach nor show anything, for you or anyone. It says "I don't really care", in which case I wonder why one would write code in the first place. Sounds harsh, but I really dislike "quick & dirty" :)
<Vexatos>
the max is
<foxxx0>
performs*
<Vexatos>
uuh
<Vexatos>
foldl((ac, x) -> if input[x] > input[ac] x else ac end, eachindex(input))
<RX14>
i hope thats in a function Vexatos
<Vexatos>
hm?
<RX14>
also why using index
<RX14>
instead of you know
<RX14>
folding over input
<RX14>
like a normal fold
<Vexatos>
because I needed the index of the max, not the value
<RX14>
oh yeah duh
<Vexatos>
literally the next line is >stack = input[pos]
<foxxx0>
Papierkorb_: i hear you
<Vexatos>
My solution is slow and ugly, it takes like seven seconds :I
<Papierkorb_>
In what? Haskell?
<Papierkorb_>
Haskell is slow
<Vexatos>
Julia
<Vexatos>
And julia is really fast
<RX14>
lies
<Vexatos>
I am pretty sure 99.9% of that is allocating thousands of arrays
<Vexatos>
But I might be wrong >_>
<Vexatos>
actually no, that takes almost no time at all
<Vexatos>
I wonder which is the slow part
<RX14>
banks.each_index.reduce(0) { |acc, x| banks[acc] > banks[x] ? acc : x } right Vexatos
<Vexatos>
ooooh
<Vexatos>
it's the array comparison
<Vexatos>
RX14, no
<Vexatos>
must be >=
<Vexatos>
it always prefers acc
<RX14>
why?
<RX14>
like
<Vexatos>
because it prefers the left-most value
<Vexatos>
if it is equal
<RX14>
do you need the first
<Vexatos>
which is why I fold left
<RX14>
max
<Vexatos>
yes
<RX14>
o-okay
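Vexatos's tie-breaking point about `>=` vs `>` in RX14's fold can be demonstrated directly; written in the Ruby-compatible subset of Crystal with a made-up `banks`:

```crystal
banks = [0, 5, 10, 10]

# `>=` keeps the earlier index on ties, so the *first* maximum wins:
first_max = banks.each_index.reduce(0) { |acc, x| banks[acc] >= banks[x] ? acc : x }
# => 2

# With `>` the accumulator is replaced on ties, so the *last* maximum wins:
last_max = banks.each_index.reduce(0) { |acc, x| banks[acc] > banks[x] ? acc : x }
# => 3
```

So RX14's original `>` version silently picks the right-most maximum, which is exactly why Vexatos objected for a left-fold that must prefer the left-most value.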
<Vexatos>
Okay, the slow part is find(stored -> stored == input, storage)
<Vexatos>
i.e. going through an array of thousands of arrays to find out whether the current array already has been encountered before
<Vexatos>
which is the condition for breaking the loop
<Vexatos>
I wonder if I could make that call any faster
hightower2 has joined #crystal-lang
<RX14>
but at that point
<RX14>
just use a loop
<Vexatos>
aaand there we go
<Vexatos>
reduced it from 7 seconds to 0.7 seconds
rohitpaulk has joined #crystal-lang
<RX14>
def max_index(array); max = array.first; index = 0; 1.upto(array.size - 1) do |candidate_index|; if array[candidate_index] > max; index = candidate_index; max = array[candidate_index]; end; end; index; end
<Papierkorb_>
Mh, an `Iterable#keep` would be neat, which would only pass on the elements its block is truthy (or falsy) for, to do `foos.keep{|x| x > 5}.map{|x| x * x}`
<Papierkorb_>
Err, pass on if match, and directly return if no match
<Vexatos>
Apparently find is a piece of garbage... for some reason? I replaced it with a for loop and it's an order of magnitude faster
<Vexatos>
what
<RX14>
Papierkorb_, you mean
<RX14>
select
<Papierkorb_>
No
<RX14>
uhh
<Vexatos>
does crystal have a macro for timing calls?
<RX14>
what do you mean then Papierkorb_
<Papierkorb_>
`[1,2,3,4].keep{|x| x > 2}.map{|x| x * x}` -> [1,2,9,16]
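Papierkorb's hypothetical `keep` (match: pass to the chain; no match: return unchanged) is expressible today as a single map with a conditional; Ruby-compatible subset of Crystal:

```crystal
# Elements failing the predicate pass through untouched;
# only matches get the transformation applied.
result = [1, 2, 3, 4].map { |x| x > 2 ? x * x : x }
# => [1, 2, 9, 16]
```

This also clarifies why `select` is not what was meant: `select` drops the non-matching elements instead of passing them through.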
<RX14>
Vexatos, Time.measure in 0.24.0
<RX14>
before that you need to import benchmark
<RX14>
and use Benchmark.measure
<Vexatos>
like I can just throw @time in front of any block or expr
<oprypin>
Papierkorb_, are you tired? lol
<Vexatos>
or call
<Papierkorb_>
oprypin: ?
<RX14>
Vexatos, that's an optimization for a really rare thing tbh
<RX14>
and - you can always write one
<Vexatos>
@time is one of the most useful things in julia
<Vexatos>
Makes it super easy to time individual calls to find out which one is the slow one
<RX14>
Vexatos, macro time(expr); time = Benchmark.realtime{ {{expr}} }; puts {{expr.stringify}} + " took #{time}"; end
<RX14>
just add the word time before an expression
<RX14>
done
<Vexatos>
does it also produce the number and size of allocations done?
<RX14>
oh and that macro needs to capture the return value of expr
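A sketch of the macro with the return value captured, as RX14 notes it needs; this is Crystal-only, assumes a pre-0.24 stdlib with `Time.now`, and uses macro "fresh variables" (`%var`) so the temporaries can't clash with user code:

```crystal
# Hypothetical timing macro: times the expression, prints the elapsed
# time, and still yields the expression's value to the caller.
macro time(expr)
  begin
    %start = Time.now
    %result = {{expr}}
    puts "{{expr}} took #{Time.now - %start}"
    %result
  end
end

x = time(40 + 2) # prints the timing line; x == 42
```

The `begin ... end` wrapper makes the expansion a single expression whose value is `%result`, so the macro call can sit on the right-hand side of an assignment.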
<Vexatos>
rather, is there a way to do that
<RX14>
no Vexatos
<RX14>
you have to look at GC.stats
<Vexatos>
>0.765151 seconds (56.18 k allocations: 3.444 MiB)
<RX14>
cool
<Vexatos>
it actually is
<Vexatos>
but then again
<Papierkorb_>
Didn't follow, is the same algorithm 10x faster on Cr than Julia?
<Papierkorb_>
*as fast
<Vexatos>
julia is 230MB in size
<foxxx0>
okay
<Vexatos>
Papierkorb_, no, I just replaced a find() with a for loop and it became ten times faster, who knows why
<Papierkorb_>
foxxx0: Common pitfall: Hash#includes? checks for the key-value-pair, not a key. You meant Hash#has_key?
<RX14>
Hash is enumerable og {k, v}
<RX14>
on*
<foxxx0>
Papierkorb_: ooh
<foxxx0>
Papierkorb_: aha! thanks
<Papierkorb_>
foxxx0: Another solution is using `Hash#[]?`, which is semantically the same if you don't have falsy values. If you're unsure, sue has_key
<oprypin>
Enumerable({K, V})
<Papierkorb_>
use has_key? *
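The pitfall and both suggested alternatives side by side; this is Crystal-only (`seen` is a made-up hash):

```crystal
seen = {"abc" => 1}

seen.has_key?("abc")       # => true: asks about the key only
seen["missing"]?           # => nil: nilable lookup, same intent if values can't be falsy
seen.includes?({"abc", 1}) # => true: Hash is Enumerable({K, V}), so this checks the *pair*
```

So `includes?("abc")` would not do what the key check intends, which is the common pitfall Papierkorb points out.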
<foxxx0>
it works
<foxxx0>
now i need to figure out why it dislikes my part2
<Papierkorb_>
> no overload matches 'part2' with types Array(Int32), Hash(Array(Int32), Int32)
<Papierkorb_>
You read this error like this:
<Papierkorb_>
"no overload" = you provide one (or more) methods named "part2", but none of them works for this call
<Papierkorb_>
"with types Array(Int32), Hash(Array(Int32), Int32)" = you want to pass an `Array(Int32)` as the first argument, and a `Hash(Array(Int32), Int32)` as the second
<Papierkorb_>
Under that, it shows you all overloads that are available
<Papierkorb_>
Your job is now to either provide a new part2 method, or fix the call site.
<Papierkorb_>
The first argument you want to pass is a Array(Int32). The only overload you provide expects a String as first argument. Clearly, those types are not the same. Hence it complaining
<Papierkorb_>
In your case, you probably want to fix the part2 function to accept that array instead of a string, as the string stuff was just an artifact of your String#join from before
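The fix Papierkorb describes, as a Crystal-only sketch; the signatures mirror the error message, and the body is a placeholder:

```crystal
# Before: the only overload takes a String, so a call passing an
# Array(Int32) cannot match it.
#   def part2(input : String, seen : Hash(Array(Int32), Int32))

# After: accept the array directly at the call site's types.
def part2(input : Array(Int32), seen : Hash(Array(Int32), Int32))
  input.size + seen.size # placeholder body
end

part2([1, 2, 3], {} of Array(Int32) => Int32) # compiles; returns 3
```

Reading the listed overloads against the reported argument types, as described above, is usually enough to decide whether to fix the definition or the call site.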
rohitpaulk has joined #crystal-lang
<foxxx0>
ah derp
<foxxx0>
yes
<foxxx0>
this is actually pretty nice
<foxxx0>
the compiler already prevents you from using wrong argument datatypes
<foxxx0>
last question
<foxxx0>
in ruby i can do something like: my_duration = Benchmark.realtime do { sleep(5) }
<foxxx0>
to have a measurement with even more precision than Time.now.epoch_ms
<foxxx0>
i was unable to get that to work in crystal
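foxxx0's snippet mixes the `do` and `{ }` block forms (`Benchmark.realtime do { sleep(5) }`), which doesn't parse; the working shape is below. It runs as-is in Ruby, and per RX14 earlier, `Benchmark.realtime` exists in Crystal too after `require "benchmark"` (with `Time.measure` arriving in 0.24):

```crystal
require "benchmark"

# Braces alone (or do ... end alone), never both.
# In Ruby this returns a Float of seconds; Crystal returns an
# elapsed-time value you can print or compare the same way.
elapsed = Benchmark.realtime { 1000.times { |i| i * i } }
```

For whole-program hot-spot hunting a profiler is still the better tool; this is for quick one-off measurements.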
<FromGitter>
<bew> ah yes, that would be good too!
<FromGitter>
<bararchy> RX14, wouldn't having a 0.24 milestone in the git with all relevant issues/PRs tagged against it be a better way to "track" the status of a new version?
<RX14>
you can't have conversations on milestones
<RX14>
we have the next milestone which tracks the release
<RX14>
this is an issue which happens *right before* release
<RX14>
to discuss it and make sure we all agree on releasing it
<FromGitter>
<bararchy> Oh, I see
<FromGitter>
<bararchy> ok, makes sense. So this issue is more of an "RFC" on the new release?
<RX14>
well
<RX14>
we'd have to PR in the changelog diff *anyway*
<RX14>
and we have to do that before the release anyway
<RX14>
so why not make it formally the place to discuss the new release
<RX14>
instead of the inevitable informal discussion which would happen anyway
<FromGitter>
<bararchy> ^ on multiple untracked issues
<FromGitter>
<bararchy> yeha
<FromGitter>
<bararchy> hahha
<oprypin>
RX14, you dont need to put links to crystal/issues/\1 like you said
<RX14>
yes we do
<oprypin>
sdogruyol did it for whatever reason in his PR but that was never needed before
<RX14>
I don't want to rely on GFM
<oprypin>
thats stupid
<RX14>
it's not on github in terms of being a issue or PR
<oprypin>
you would be making markdown unreadable for the purpose of making links explicit but nobody clicks links in raw markdown
<RX14>
I do
<FromGitter>
<bew> what is "GFM"?
<RX14>
and i have been
<RX14>
for the last
<RX14>
idk
<RX14>
15 mins
<oprypin>
how is that
<RX14>
by opening it in emacs
<RX14>
and clicking the links?
<oprypin>
there aren't even any links, what were you clicking?
<oprypin>
RX14, then what's the point of the text "#5096" ?
<oprypin>
might as well wrap the whole line in the link
<RX14>
so that it reads nicely when it IS rendered
<RX14>
some lines have multiple PRs oprypin
<RX14>
can't do that
<oprypin>
it already reads nicely when rendered, you just insist on making it read horribly when not rendered
<oprypin>
not to mention a ton of unnecessary work maintaining this
<FromGitter>
<bew> which work? the only work is when adding, there's nothing to maintain imo
<FromGitter>
<bew> @oprypin can you disable text wrapping in your editor/text-viewer/pager? That way you see the text properly, and the long line with the link is hidden or off to the right, not cluttering your view
<RX14>
none of them wrap on 1080p
<oprypin>
it is cluttering the view because of varying line widths
<RX14>
unless you split
<FromGitter>
<bew> removing links just to make it uncluttered when you're looking at the raw markdown is nonsense; if it's really an issue for you, configure your viewer to hide links or something
<RX14>
oprypin, would you prefer "Compiler bugfixes"
<FromGitter>
<jwaldrip> Is there a way to see the objects created in crystal? i.e. something equivalent to the ruby ObjectSpace object?
<oprypin>
RX14, i dont know. doesn't sound so good, now that you mention it
<FromGitter>
<jwaldrip> bummer
<RX14>
exactly
<FromGitter>
<jwaldrip> any LLVM tools?
<RX14>
for what @jwaldrip
<FromGitter>
<jwaldrip> Just looking at what objects are being created
<RX14>
oprypin, "compiler bugs fixed" and "documentation fixes" are in the same tense
<FromGitter>
<jwaldrip> and how many there are
<RX14>
it's just english being weird
<oprypin>
RX14, there's no tense actually, just the object
<RX14>
@jwaldrip not really
<oprypin>
i see your point
<RX14>
oprypin, there is, "bugs fixed" "bugs being fixed" "bugs to be fixed"
<FromGitter>
<jwaldrip> Im trying to figure out why my HTTP router can do more req/s than kemal... but at the same time the response times are higher...
<RX14>
there is an implicit past tense you need more words to break out of
<FromGitter>
<jwaldrip> Which seems counterintuitive...
<FromGitter>
<jwaldrip> Someone suggested that it could be the GC collecting objects...
<RX14>
@jwaldrip i assume you benchmarked on the same machine
<FromGitter>
<jwaldrip> I will have to benchmark that as well
<FromGitter>
<jwaldrip> @unreadable I was actually thinking about using the method in the tree lookup as well, but implemented the HTTP constraints instead.
<FromGitter>
<jwaldrip> It's a tradeoff for speed for sure, but it gives you far more flexibility to do multi-method actions/routes without duplicating an item in the tree
<RX14>
i think radix.cr is poorly optimized
<RX14>
it's so much code
<FromGitter>
<jwaldrip> RX14, I would love for you to take a look at my implementation
<FromGitter>
<eliasjpr> @RX14 More code does not translate to poorly optimized
<FromGitter>
<eliasjpr> IMO
<FromGitter>
<jwaldrip> What makes the radix implementation complex is the fact that there are dynamic parts to the tree
<RX14>
It very very often does
<Papierkorb>
No opinion needed: Ask a profiler.
<RX14>
I don't see why having dynamic parts is hard
<FromGitter>
<jwaldrip> @eliasjpr @RX14, I did borrow quite a bit of code from the original radix implementation. I just moved some things into additional classes to make things more readable.
<FromGitter>
<jwaldrip> i.e. Walker && Analyzer
<RX14>
If you end up in a node which is dynamic, look up how it's dynamic, execute that against the route, and after that there's another radix tree describing the remainder of the route
<RX14>
Does anyone know any real world router benchmark
<RX14>
With real routes and real loads
<Papierkorb>
Your router needs to be really crappy for it to actually slow your application down
<FromGitter>
<jwaldrip> The techempower ones?
<FromGitter>
<elorest> @jwaldrip Would it be possible to add the improvements back to radix, and then extend it for your purposes in a different project?
<RX14>
That is nowhere near real world lol
<FromGitter>
<jwaldrip> Maybe...
<FromGitter>
<elorest> :)
<RX14>
Papierkorb: you'd be surprised
<FromGitter>
<eliasjpr> @elorest what's your take on the routing? you had a branch somewhere with a case statement and regex
<FromGitter>
<eliasjpr> Maybe you can share it here to see what people's take on that implementation is
<FromGitter>
<eliasjpr> it was super simple and it performs
<Papierkorb>
RX14: Then those routers are crap
<Papierkorb>
unnecessarily being fully dynamic or whatever
<FromGitter>
<elorest> I built this into the benchmark on amber_router WIP.
<FromGitter>
<elorest> Using case statements works well but I assume it will break down as more routes are added.
<Papierkorb>
with one case per level, why would it be that bad under real world loads?
<FromGitter>
<eliasjpr> Is there any pattern matching capabilities like the ones you will find in functional languages built into crystal?
<Papierkorb>
within case there is
<Papierkorb>
so the most interesting part is there
<FromGitter>
<eliasjpr> Gotcha
Hates_ has joined #crystal-lang
<FromGitter>
<elorest> Papierkorb: More case statements to choose from will result in slower lookup times, correct?
<FromGitter>
<eliasjpr> I wonder if generating a case statement file based on the route definitions would do well
<Papierkorb>
elorest, as I said, one case per level. For the normal case of having just a handful of paths there, it will be faster than doing a dynamic lookup
<FromGitter>
<elorest> I’m worried that with my solution, a project with 200 routes might be slower.
<Papierkorb>
.. you have 200 routes in a single sub-path level?
<FromGitter>
<eliasjpr> Real world, that's very possible
<FromGitter>
<eliasjpr> generating dynamic routes for instance
<Papierkorb>
what do you mean with "dynamic"?
<FromGitter>
<eliasjpr> generating routes from a DB for instance
<Papierkorb>
Then that's no business for a compile-time static structure anyway
<Papierkorb>
But those are local to a single level, other levels can still make use of static structures
<FromGitter>
<eliasjpr> I get your point
csk157 has quit [Ping timeout: 260 seconds]
<Papierkorb>
And for that, a Hash(String, ..) gets as good as it will get without doing weird things.
<Papierkorb>
I'm not sure if a Trie would be better for that. Even if it were, memory access may easily kill any time benefit from it
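Papierkorb's "a Hash(String, ..) gets as good as it will get" can be sketched as one static lookup level of a router; the handler names are hypothetical, and the snippet is in the Ruby-compatible subset of Crystal:

```crystal
# One static routing level: first path segment -> handler name.
# Deeper levels would each hold their own hash (or a dynamic matcher).
routes = {
  "users" => :users_handler,
  "posts" => :posts_handler,
}

segments = "/users/42".split("/") # => ["", "users", "42"]
handler  = routes.fetch(segments[1], :not_found) # => :users_handler
```

For a handful of entries per level this is a single hash probe, which is the baseline any fancier trie or radix structure has to beat after paying its extra memory accesses.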
aroaminggeek has quit [*.net *.split]
DTZUZO has quit [*.net *.split]
kosmonaut has quit [*.net *.split]
oprypin has quit [*.net *.split]
Liothen has quit [*.net *.split]
dom96 has quit [*.net *.split]
pabs has quit [*.net *.split]
jsn- has quit [*.net *.split]
Guest34371 has quit [*.net *.split]
DTZUZO has joined #crystal-lang
dom96 has joined #crystal-lang
dom96 has quit [Changing host]
dom96 has joined #crystal-lang
oprypin has joined #crystal-lang
faustinoaq has joined #crystal-lang
<Yxhvd>
I can definitely see the first or possibly second level down having a lot of entries, but that is it. It could be worth having a solution that adjusts its strategy to how many alternatives there are
aroaminggeek has joined #crystal-lang
kosmonaut has joined #crystal-lang
Liothen has joined #crystal-lang
jsn- has joined #crystal-lang
Guest34371 has joined #crystal-lang
pabs has joined #crystal-lang
faustinoaq has quit [Quit: IRC client terminated!]
<FromGitter>
<paulcsmith> I think optimizing the router will give you very little in terms of real world performance gains once you are generating actual responses. In my tests it's been <1% of the total response time when actually generating a response or querying the database
<RX14>
and they do it right: an array of indices and an array of children - using SoA instead of AoS - and they use bytes
<FromGitter>
<paulcsmith> Not to say it shouldn't be optimized, just that it will probably result in minimal real world performance gains
<RX14>
@paulcsmith yeah I think so too
<RX14>
but yet they do seem to knock off a fair bit of performance
<RX14>
idk how kemal seems to do like 50% of raw HTTP::Server though lol
<FromGitter>
<paulcsmith> I wonder if that's just from the router. I'll try some benchmarks and see if it is that or some of the middleware. Could be the method override stuff or one of the other middlewares causing the slowdown
<FromGitter>
<paulcsmith> I would bet that the 50% slowdown would turn into a <10% slowdown once you start making some db requests, rendering more complex JSON or HTML, etc. But I'm not sure. I'll try to run some numbers
aroaminggeek is now known as aroaminggeek[awa
<Yxhvd>
accounting for and minimizing allocations during the request lifetime could also be worth investigating
<Papierkorb>
^
<Papierkorb>
You could try using trashman (I never used it myself though) to check it without a fullblown profiler
<Yxhvd>
nothing like GC to kill latency percentiles and throughput.
<RX14>
if it kills throughput, why are we within like 30% of these hyperoptimized web frameworks on frameworkbenchmarks, which have had man-years put into them
<RX14>
when our HTTP server is "lol use string.split who cares about allocating this extra array"
<RX14>
it's a myth
<RX14>
latency sure
<RX14>
throughput, nah
<Yxhvd>
depends on heap size. Perhaps not relevant on these small heap sizes.
<Yxhvd>
at some heap size point you need generational GC to not suck, but I've seen 30s pauses even with that (with a long lived heap that was huge, and a zillion threads).
<RX14>
30s pauses?
<RX14>
thats ridiculous
<Papierkorb>
Is there a stresstest for boehm?
<Yxhvd>
That is what you can get if you have a huge heap and really long lived objects and are running really close to the memory limits on your system.
<Yxhvd>
(this was java btw)
csk157 has joined #crystal-lang
csk157 has quit [Ping timeout: 240 seconds]
aroaminggeek[awa is now known as aroaminggeek
<FromGitter>
<unreadable> would there be a significant performance difference between GC'd crystal and self-managed-memory crystal?
<Papierkorb>
what it's written in doesn't matter much. What does is its algorithm/data structure in practical terms, and what it "knows"/can work with in theoretical ones
<oprypin>
that would not be crystal
<FromGitter>
<unreadable> that would not answer my question though...oO
<Yxhvd>
unreadable: depends on how much memory you have. GCs tend to have less overhead when they are allowed to use a little more memory than absolutely needed.
<FromGitter>
<unreadable> how may I PM you oprypin?
<oprypin>
irc, email, i dunno
<oprypin>
why though
<FromGitter>
<unreadable> well, sfml based stuffs
<FromGitter>
<unreadable> but don't wanna make offtopic
<FromGitter>
<unreadable> there's actually one thing that's messing me up
<FromGitter>
<unreadable> applying a shader to a shape doesn't use the shape as the canvas, it takes the window instead
<FromGitter>
<unreadable> not sure if it's because of gl_position or the way I pass the resolution uniform oO
<oprypin>
but i've never used shaders other than copying the official example
<FromGitter>
<unreadable> I like them because it gives me a lot of control and they're actually hard
<FromGitter>
<unreadable> not too**
<FromGitter>
<paulcsmith> Just benchmarked raw crystal and got about 111k requests. Added a router with 1 route (to keep it even) and got 108k. Not super scientific, but my assumption is the router doesn't have that big of an impact.
<FromGitter>
<paulcsmith> I did another test for just the router. Added 3000 routes. Matched against 7000 routes. Finished in 7.1ms
<RX14>
@paulcsmith yeah honestly i didnt think it was that high
<RX14>
but somehow the frameworks SUCK
aroaminggeek is now known as aroaminggeek[awa
<FromGitter>
<unreadable> raze seems to do pretty good