<lf94>
I know we are in a Racket channel but I'm not sure where else is theory focused :(
<lf94>
I guess the benefit of the output function is, it 'understands' the state
<lf94>
i.e. if it returns 1, it means we are at the final state
<lf94>
of course it can do anything though
<jcowan>
In a practical sense the representation of state is generally some sort of enumeration, whereas you want the output to be useful to somebody who probably doesn't care about your internal states.
<lf94>
yea yeah
<lf94>
output function allows the user to not have to interpret the state themselves
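A minimal sketch of that idea (hypothetical machine and names): states are plain numbers, since JS has no native enum, and a separate output function maps the internal state to something meaningful for the caller — here, 1 when we are in the final state.

```javascript
const FINAL = 3;

// transition: (state, input) -> next state
function transition(state, input) {
  const table = {
    1: { a: 2 },
    2: { a: 3 },
    3: { a: 3 },
  };
  // stay put on inputs the table doesn't mention
  return (table[state] && table[state][input]) ?? state;
}

// output: the caller never sees raw states, only 1 (final) or 0
function output(state) {
  return state === FINAL ? 1 : 0;
}

let state = 1;
for (const ch of "aa") state = transition(state, ch);
console.log(output(state)); // 1: two 'a's reach the final state
```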
<lf94>
> enumeration
<lf94>
unfortunately js doesn't have this. What I did is the closest thing :)
<lf94>
jcowan, and then I discovered something else the other day
<lf94>
Why not make the state itself a transition function?
<lf94>
q1(a) -> q2
<lf94>
q2(a) -> q3
<lf94>
q3(a) -> q3 (final state)
<lf94>
I guess it could be done but introduces some limitations
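The q1/q2/q3 example above can be sketched directly in JS (hypothetical names): each state *is* a function that consumes one input symbol and returns the next state function, so running the machine is just repeated application.

```javascript
const q3 = (input) => q3;                          // final state: loops on any input
const q2 = (input) => (input === "a" ? q3 : q2);
const q1 = (input) => (input === "a" ? q2 : q1);

// drive the machine: apply the current state to each input symbol
function run(start, inputs) {
  let state = start;
  for (const ch of inputs) state = state(ch);
  return state;
}

console.log(run(q1, "aa") === q3); // true: two 'a's reach the final state
```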
<jcowan>
In properly tail recursive languages it's common not to encode the state at all: you have one function per state, and to transition to another state, just call its function.
<lf94>
Oh, exactly what I just mentioned? XD
<lf94>
That is pretty funny
<lf94>
There must be some downfall to this
<jcowan>
That won't work in JS because you keep calling and never returning.
<jcowan>
But there's a trick to solve that: a trampoline.
<lf94>
js has tail call opt
<jcowan>
Guaranteed?
<lf94>
As long as the function call is the last statement in another function...yea
<lf94>
afaik anyway
<lf94>
I think it's supposed to be pretty predictable when and where it happens
<jcowan>
Looks like it's guaranteed only in Safari, except in simple cases.
<lf94>
function a() { ... return b(); } <- should tail call?
<jcowan>
Racket, like all Schemes, does proper tail calling, guaranteed.
<jcowan>
Anyway, here's how a trampoline works: "while (f !== null) { f = f() }"
<lf94>
Oh yeah, I have seen it explained. So far that is the most succinct version :) nice!
<jcowan>
So each state procedure, rather than calling its successor, returns its successor instead. There is more overhead, but the stack doesn't get wound up.
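Putting jcowan's two snippets together (hypothetical example): each state function returns its successor, or null when done, and the `while (f !== null) { f = f() }` driver keeps the stack flat no matter how many transitions happen.

```javascript
// Count down from n via a trampoline rather than direct recursion.
function countdown(n) {
  // step(k) returns a thunk: calling it yields the next thunk, or null at 0
  const step = (k) => () => (k === 0 ? null : step(k - 1));
  let f = step(n);
  let bounces = 0;
  while (f !== null) {
    f = f(); // each bounce returns, so the stack never winds up
    bounces++;
  }
  return bounces;
}

console.log(countdown(1000000)); // fine; recursion this deep would overflow
```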
<lf94>
very cool
* jcowan
nods
<lf94>
Do you know off-hand what the limitations of state-as-transition-functions vs a transition function are?
<lf94>
To me the obvious one is: you can swap out transition functions to construct a new machine
<lf94>
that can behave entirely different
<lf94>
With state-as-transition-functions you need to redefine both your transition function and states
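The swap-out point can be sketched like this (hypothetical example): with an explicit transition function, one generic driver builds entirely different machines just by being handed a different function.

```javascript
// makeMachine: same driver, behavior determined by the transition function
function makeMachine(transition, start) {
  return (inputs) => {
    let state = start;
    for (const ch of inputs) state = transition(state, ch);
    return state;
  };
}

const countAs = (s, ch) => (ch === "a" ? s + 1 : s);
const countBs = (s, ch) => (ch === "b" ? s + 1 : s);

const m1 = makeMachine(countAs, 0);
const m2 = makeMachine(countBs, 0); // swapped transition, new machine
console.log(m1("abab"), m2("abb")); // 2 2
```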
<jcowan>
True
<lf94>
BUT, I see they can be very closely tied.
<jcowan>
Your transition function will just be a big case statement, however, and it probably makes more sense to set the successor state next to whatever else you do.
<jcowan>
from the human viewpoint, that is
<lf94>
Yea.
<lf94>
So I guess it boils down to: how far do we want to abstract
<jcowan>
The very first Scheme interpreter used such a trampoline, as it was written in non-tail-recursive Maclisp.
<lf94>
You must be an older user :)
<lf94>
I have written...nearly zero lisp/scheme...
<lf94>
But I am extremely inspired lately.
* jcowan
chuckles
<lf94>
This is probably something you've heard 100000001 times
<jcowan>
New bloooooooood, we need new bloooooooooooood
<lf94>
I was studying lambda calculus and came to the conclusion a lisp/scheme would be worth learning, because they are /close/ to this ultimate abstraction of computation.
<lf94>
I could probably do the same things in JS but it would not be as nice
<lf94>
i.e. applicative programming
<lf94>
(λx.λy.x)(2) -> (λy.2)
<lf94>
"applicative programming", sorry I don't know the proper term for what this actually is
<lf94>
I know it's "applicative", that's all X)
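That application step (beta reduction of the K combinator, λx.λy.x) can be mimicked with curried arrow functions in JS:

```javascript
const K = (x) => (y) => x; // λx.λy.x
const always2 = K(2);      // corresponds to λy.2: ignores its argument
console.log(always2(99));  // 2
```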
<lf94>
And I got into state machines because they seem to be the weakest / most strict form of computation
<lf94>
So now I'm exposed to both ends of the spectrum: super strict computation, and super abstract non-strict
<lf94>
I guess typed lambda calculus can be super strict also
<lf94>
There is probably some formal equivalence there
<lf94>
i.e. the type checker is a state machine
<lf94>
All this has been a recent development in my head and I'm kind of hooked now
<lf94>
It is crazy how everything is connected
<lf94>
And a lot of it .... just boils back to lambda calculus
<NeoHamled>
If I remove the sequence->stream call in that thread, then I get an error from cycle that its contract failed, it expected sequence? but got #<sequence>
<NeoHamled>
Not sure where to go to start understanding the difference there, especially because sequence? applied to the result of (in-lines in) returns true.
kakadu has joined #racket
<lf94>
dzoe, racket is a lisp descendant, so, imo it counts :)
<lf94>
obviously really the universe is just "lambda calculus"
<kakadu>
nisstyre: I heard in one of Yaron Minsky's talks that (+) in Racket is so complex that the developers of Typed Racket decided to make it impossible to observe the type of (+) with your eyes in Typed Racket
nisstyre: Is it actually true?
<nisstyre>
kakadu: it uses subtyping
<nisstyre>
kakadu: are you thinking of type inference?