<siraben>
remexre: was the factorial program provable?
<remexre>
Got distracted trying to package plf for nix :p didn't put much time into it
boru` has joined #forth
boru has quit [Disconnected by services]
boru` is now known as boru
actuallybatman has quit [*.net *.split]
remexre has quit [*.net *.split]
remexre has joined #forth
actuallybatman has joined #forth
<siraben>
remexre: oh nice, didn't know you also use Nix :)
<MrMobius>
KipIngram, thought it was kind of cool but didn't like the keyboard stickers
<MrMobius>
one old guy showed a picture at the conference of how he had worn the ink completely off most of his stickers
<KipIngram>
Wow. I guess I haven't used mine that much - they still look good.
<MrMobius>
the cool thing about the 34S is that an HP engineer got permission to give skeleton code to the community so they had a starting point
<KipIngram>
That is cool.
dave0 has joined #forth
LispSporks has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<KipIngram>
Ok, compiler is switching back and forth between interpret and compile, with ] and [ now.
<KipIngram>
I have it set up so when interpreting the prompt is the usual ok
<KipIngram>
When compiling it's ...
<KipIngram>
"more please..."
<KipIngram>
This works:
<KipIngram>
65 ] [ emit
<KipIngram>
But something's off with
<KipIngram>
65 ] 10 [ emit
<KipIngram>
probably have something left on the stack that shouldn't be.
<KipIngram>
Shouldn't be too hard to find.
Zarutian_HTC has quit [Remote host closed the connection]
sts-q has quit [Ping timeout: 240 seconds]
<KipIngram>
It's responding to the immediate bit properly; it executes the null even in compile mode.
<KipIngram>
Yeah - the number compile path is leaving one item on the stack (per number "compiled")
<KipIngram>
Found it.
sts-q has joined #forth
<KipIngram>
Ok. One remaining problem - it's at least moving through "number compile" and leaving the stack right. Don't know if it's properly compiling numbers yet, but... later. The problem though is trying to compile a defined word. Gets a segfault.
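The `[` / `]` state-toggling behavior KipIngram is debugging can be modeled in a few lines. This is a toy Python sketch of a minimal outer interpreter (not his actual Forth), where a number seen in compile state is laid down as a literal cell instead of being pushed:

```python
# Minimal sketch of an outer interpreter with STATE and immediate words.
# All names and representations here are illustrative, not KipIngram's code.
stack, code, state = [], [], [False]

def _emit(): print(chr(stack.pop()))          # prints "A" for 65
def _rbrack(): state[0] = True                # ] enters compile state
def _lbrack(): state[0] = False               # [ returns to interpreting

words = {"emit": _emit, "]": _rbrack, "[": _lbrack}
immediate = {"[", "]"}                        # executed even while compiling

def interpret(line):
    for tok in line.split():
        xt = words.get(tok)
        if state[0] and not (xt and tok in immediate):
            # compile state: lay down the word's xt, or a literal cell
            code.append(xt if xt else ("lit", int(tok)))
        elif xt:
            xt()                              # interpret state: execute
        else:
            stack.append(int(tok))            # interpret state: push number

interpret("65 ] 10 [ emit")                   # compiles the 10, emits the 65
```

The bug described above corresponds to the number-compile branch accidentally leaving `10` on `stack` as well as in `code`; done correctly, only the 65 is consumed by `emit` and the stack ends up empty.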
proteus-guy has quit [Ping timeout: 240 seconds]
proteus-guy has joined #forth
gravicappa has joined #forth
andrei-n has joined #forth
mtsd has joined #forth
gravicappa has quit [Ping timeout: 252 seconds]
mtsd has quit [Ping timeout: 265 seconds]
actuallybatman has quit [Ping timeout: 246 seconds]
<cheater>
is there a forth compiler to cuda?
<cheater>
corth
<cheater>
fuda?
<mark4>
omg i was asking the same question earlier :)
<mark4>
looking for one
<mark4>
there does not seem to be one
<mark4>
and im not sure how viable it is, not sure if you can do a cuda stack machine
<mark4>
not asked in here tho
<cheater>
mark4: i was asking because i was working with the haskell library accelerate and it seems like it would enable something like this
<cheater>
gpu forth. gorth?
<mark4>
gforth }:)
<mark4>
ugh
<cheater>
mark4: you should check out accelerate
<mark4>
i dont code haskell :)
<mark4>
its almost 5am and ive not slept yet
<veltas>
Sleep is for the weak
dave0 has quit [Ping timeout: 245 seconds]
dave0 has joined #forth
gravicappa has joined #forth
f-a has joined #forth
dave0 has quit [Quit: dave's not here]
<cheater>
mark4: maybe you should start. it's a great tool for building compilers and interpreters.
proteusguy has quit [Ping timeout: 252 seconds]
andrei-n has quit [Ping timeout: 240 seconds]
andrei-n has joined #forth
gravicappa has quit [Ping timeout: 260 seconds]
gravicappa has joined #forth
<siraben>
cheater: heh, haskellers lurking in the Forth channel! (am also a big fan of it for compilers and interpreters)
<cheater>
siraben: nice :)
<cheater>
i'm thinking of writing a simple language that can run on different processors at the same time
<cheater>
it comes from two backgrounds...
<cheater>
first of all, modern pcs have a cpu but also a very strong gp gpu. and even in a normal cpu, cores aren't exactly equal, due to things like locality etc. and then, now we're getting big/little combos, some cores are more powerful than others.
<cheater>
second of all, in vintage computers, you'd often have several different cpus, like a 6502 and a z80, or a 68000 and a z80, stuff like that
<cheater>
so i'm interested in figuring out how this could be intelligently managed by the programmer.
<siraben>
Ah, interesting.
<cheater>
you'd most likely want to be able to specify things explicitly, but then maybe you just don't care
<siraben>
I'd imagine something like, 4 Z80s in parallel would be quite powerful, relatively speaking.
<siraben>
Depending on the language of course, sounds like you could automatically parallelize certain blocks of code
<cheater>
i don't think they would be!
<cheater>
the sync would probably eat up all the time
<cheater>
i'm not exactly interested in uniform capabilities tbh
<cheater>
it's an explored space
<siraben>
Ah, right.
<cheater>
non-uniform is interesting to me in this regard
<cheater>
also because in reality you might end up realizing that a lot of things that are supposed to be uniform aren't
<cheater>
different power or temp throttling, different data locality, etc
<siraben>
Yeah, seems tricky.
<cheater>
i mean, is it?
<cheater>
all you really want is a language that can compile to two different targets
<cheater>
and then specify which target you want this to be in, inline
<cheater>
via a type
<siraben>
Oh, if you hand over control to the programmer, then it'd be much easier
<siraben>
I was thinking of automatic solutions
<cheater>
something like plus :: (Monad a, Target a) => a Int -> a Int -> a Int
<siraben>
looks like a tagless-final representation
<cheater>
and then you specify the a to either be TargetZ80 or Target6502 or Targetx64 or TargetCUDA or whatever
<cheater>
i never figured out what that is tbh
<cheater>
can you come up with a simple example that shows it's similar?
<siraben>
it's a bit contrived in that example, but the way you wrote `plus` means you could write something like, `plus (lit 3) (lit 5)`
f-a has left #forth [#forth]
<siraben>
then when you specify the typeclass at a call-site, it gets replaced with concrete constructors/function calls for that typeclass, e.g. it could generate Z80 or 6502 code or whatever
<siraben>
with that encoding you're also getting type checking for free, because you could encode `if_ :: m Bool → m a → m a → m a`
<cheater>
yeah that's what i mean
<siraben>
In my blog post, just by reinterpreting the assembly code in a different monad, I was able to pretty-print something that looked like C
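The tagless-final idea being discussed — write `plus`/`lit` once against an abstract backend, then get an evaluator or a code generator just by swapping the backend — can be rendered roughly in Python. This is a hypothetical sketch, not siraben's actual Haskell:

```python
# Tagless-final style: the program is written against an abstract
# interface; each backend class gives the operations a concrete meaning.

class Eval:                          # backend 1: interpret directly
    def lit(self, n): return n
    def plus(self, a, b): return a + b

class PseudoAsm:                     # backend 2: emit pseudo-assembly text
    def lit(self, n): return f"push {n}"
    def plus(self, a, b): return f"{a}\n{b}\nadd"

def program(t):                      # written once, backend-agnostic
    return t.plus(t.lit(3), t.lit(5))

print(program(Eval()))               # 8
print(program(PseudoAsm()))          # push 3 / push 5 / add, one per line
```

Swapping `PseudoAsm` for a class that emits Z80 or 6502 mnemonics is exactly the "reinterpret the same code in a different monad" trick, minus Haskell's type-level guarantees.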
<cheater>
does accelerate do final tagless too, in order to specify the backend? the same code you write can target the cpu or cuda.
<siraben>
What's accelerate?
<cheater>
the haskell library. look it up.
<cheater>
we were talking about it earlier here.
tech_exorcist has joined #forth
<siraben>
whoa, that's great.
<cheater>
yeah it's pretty good
<siraben>
looking at the examples, it really looks like you get parallelism for free
<cheater>
i mean
<cheater>
bear in mind the data structures start out parallel
cheater has quit [Ping timeout: 240 seconds]
cheater has joined #forth
rpcope has quit [Ping timeout: 252 seconds]
rpcope has joined #forth
cheater has quit [Ping timeout: 260 seconds]
cheater has joined #forth
logand`` has quit [Quit: ERC (IRC client for Emacs 27.1)]
actuallybatman has joined #forth
Zarutian_HTC has joined #forth
cheater has quit [Ping timeout: 260 seconds]
cheater has joined #forth
<cheater>
siraben: sorry i got disconnected, maybe you said something as i was gone
proteusguy has joined #forth
<Zarutian_HTC>
has anyone implemented a forth core like excamera j1 or such using vector instructions?
<Zarutian_HTC>
lets say we have 16 bit cell and fit x cells into a vector register
<Zarutian_HTC>
this means we could have x many 'virtual' forth cores
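Zarutian_HTC's idea — each lane of a vector register acting as an independent 16-bit Forth core, all executing the same primitive in lockstep — can be modeled with plain lists standing in for SIMD lanes. A toy sketch, purely illustrative:

```python
# Toy model: LANES 'virtual' 16-bit Forth cores, one data stack each,
# with every primitive applied to all lanes at once (SIMD-style).
LANES = 8
MASK = 0xFFFF                          # 16-bit cells wrap modulo 2^16

stacks = [[] for _ in range(LANES)]    # one data stack per virtual core

def vpush(vals):                       # push one cell into every lane
    for s, v in zip(stacks, vals):
        s.append(v & MASK)

def vadd():                            # '+' across all lanes in lockstep
    for s in stacks:
        b, a = s.pop(), s.pop()
        s.append((a + b) & MASK)

vpush(range(LANES))                    # lane i holds i
vpush([100] * LANES)
vadd()
print([s[-1] for s in stacks])         # [100, 101, ..., 107]
```

The hard part a real implementation faces is divergent control flow (lanes taking different branches), which this lockstep model sidesteps entirely.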
<siraben>
cheater: Oh I didn't, last thing you said was the data structures start out parallel
<cheater>
yep
tech_exorcist has quit [Remote host closed the connection]
tech_exorcist has joined #forth
f-a has joined #forth
<siraben>
cheater: but then what?
<siraben>
is their use serial?
gravicappa has quit [Ping timeout: 260 seconds]
f-a has quit [Ping timeout: 240 seconds]
<KipIngram>
Hey guys - my daughter's in labor! First grandson on the way!
f-a has joined #forth
<MrMobius>
KipIngram, the TI-84 daughter of the nspire one?
<veltas>
Congrats KipIngram
<MrMobius>
kidding obviously. congrats!
<cheater>
KipIngram: nice dude! well done.
<cheater>
siraben: idk man read the docs and see :)
<cheater>
not sure how to answer your question
<KipIngram>
:-)
<siraben>
cheater: Oh I thought something was going to be said after, so they just start out parallels?
<siraben>
parallel
<cheater>
yes
<cheater>
it's a matrix computation engine
<cheater>
like numpy
<siraben>
yeah
<siraben>
do you get things like fusion carried out?
<cheater>
yes
<cheater>
it's pretty good
<KipIngram>
Very elusive bug in my code. Actually having to roll up my sleeves...
<KipIngram>
AH HA!
<KipIngram>
Now it makes sense.
<KipIngram>
When I compile a word, I'm not properly getting a flag onto the stack to tell the interpret loop to loop instead of executing. There's nothing there to execute, and I'm calling EXECUTE anyway.
<KipIngram>
Ok, just had left out a required DROP.
f-a has left #forth [#forth]
<KipIngram>
Annoyingly, that drop makes that the longest definition of the whole set. Grrr...
<KipIngram>
But I don't see a way to do it better.
andrei-n has quit [Quit: Leaving]
<KipIngram>
Well, shortened it a bit. A legitimate optimization, and then a bit of a "cheat"; I had a place where I was dropping a value known to be zero from the stack - I replaced the drop with a + to save three characters. ;-)
<KipIngram>
That feels a little mischievous to me, but oh well.
Zarutian_HTC has quit [Ping timeout: 240 seconds]
<KipIngram>
Of course this still isn't fully vetted. I don't know that the CFAs and literals are being compiled CORRECTLY into memory - i.e., being put in the right place the right way. I just have verified the data stack is behaving in all the right ways as I toggle STATE on and off.
<KipIngram>
Everything inside ] ... [ is being "appropriately ignored" as far as the stack goes.
<KipIngram>
I'll have to check later, when I have more tools, to see if that part's going right.
<cmtptr>
so this is maybe a weird discussion topic, but i need somebody to rationalize the traditional forth : ; syntax for me
<cmtptr>
part of the problem with every custom forth i've tried creating is that i've become attached to the idea that compiled words should be syntactically similar to anything else, so i redefine : to mean "create a new word", then you follow that with some constructor (more on this in a minute), and then ; is basically just a reveal
<cmtptr>
e.g., instead of variable x you'd use : x variable ;
<KipIngram>
Yes, I think it's important that : words "behave like" primitives.
<KipIngram>
That uniform behavior is a "characteristic" of the language.
<cmtptr>
and instead of : foo bar baz ; you'd use : foo { bar baz } does ;
<KipIngram>
Well, you're having : play the role of your CREATE.
<cmtptr>
the problem with my style is that it's really noisy
<cmtptr>
yeah
<KipIngram>
I don't see anything particularly wrong with that approach.
<cmtptr>
i like that my style is syntactically consistent and involves combining more fundamental primitives, but i dislike the noise that comes with it
<KipIngram>
As you say, it's a bit noisy. But that noise is removing "conventions" that you have to remember, like : makes words that use (:), variable makes words that use (var), etc.
<KipIngram>
You're bringing those default actions out into the light.
<KipIngram>
CREATE is really the only defining word in traditional Forth.
<KipIngram>
The others use CREATE and then bundle some "changes" into the rest of their own definition.
<cmtptr>
yeah, true
<KipIngram>
Honestly Forth could have just offered CREATE and nothing else, and left it up to the programmer to decide if they wanted to wrapper those extra functions.
<cmtptr>
i would guess history is involved - that he came up with : before he factored out create
<KipIngram>
And now : is just part of "Forth," fundamental or not.
<KipIngram>
In the very early days Forth (I don't know if it was even named, yet) didn't compile definitions. : name ...stuff... ; just stored ...stuff... in the dictionary, and if you used name it would just go replace it with the string.
<KipIngram>
And would nest that.
<KipIngram>
So it wouldn't have been very fast at all.
<KipIngram>
I'm not sure when the notion of compiled definitions got added.
dave0 has joined #forth
<dave0>
maw
<KipIngram>
Afternoon dave.
<KipIngram>
dave0
<dave0>
hey KipIngram
actuallybatman has quit [Ping timeout: 246 seconds]
<KipIngram>
Ok, I got rid of that bit using + to "disappear" a zero from the stack. I replaced that bit of code with a bit using ?dup, which I find much more "professionally appealing."
<KipIngram>
I was using a value preserving conditional return. I need the value if I take the return, but if I don't it's zero and it's clutter.
<KipIngram>
So ?dup followed by a non-value preserving conditional return fixed things up.
<KipIngram>
Now there's nothing in it that I might scratch my head over a few years from now.
tech_exorcist has quit [Quit: tech_exorcist]
Zarutian_HTC has quit [Ping timeout: 240 seconds]
wineroots has quit [Remote host closed the connection]
Zarutian_HTC has joined #forth
proteus-guy has quit [Ping timeout: 265 seconds]
<KipIngram>
Ah, I did figure out a way to test some more things. I did this:
<KipIngram>
here ] + [ here swap - 64 + emit
<KipIngram>
and learned that compiling + advanced HERE by 4. And compiling a loop word, like 0=me, advanced it by eight, as it should. Four for the primitive and four for the offset.
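The measurement technique here — snapshot HERE, compile something, subtract — is easy to model. A sketch with made-up execution tokens (the 4-byte cell size and the extra offset cell for branching words follow the description above):

```python
# Sketch of the HERE-delta test: cells are 4 bytes; compiling a plain
# primitive lays one cell, a branching/looping word lays its xt plus an
# inline branch offset. Token values are illustrative, not real addresses.
CELL = 4
dictionary = bytearray()

def here(): return len(dictionary)

def compile_cell(n):
    dictionary.extend(n.to_bytes(CELL, "little"))

PLUS_XT, ZBRANCH_XT = 0x1000, 0x1004       # made-up execution tokens

before = here()
compile_cell(PLUS_XT)                      # compiling '+' lays one cell
print(here() - before)                     # 4

before = here()
compile_cell(ZBRANCH_XT)                   # a looping word lays its xt...
compile_cell(-2 * CELL & 0xFFFFFFFF)       # ...plus a branch offset cell
print(here() - before)                     # 8
```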
<dave0>
KipIngram: i ran into an interesting bug this morning... i changed NEXT from post-increment to pre-increment, and all my variables broke (it took a while to track it down and i was surprised when i did)
proteus-guy has joined #forth
<KipIngram>
I don't immediately see the connection - you mean you pre-increment the IP?
<KipIngram>
So that most of the time the IP points to the cell that's executing now?
<dave0>
KipIngram: ah yes yes pre-increment the IP
<dave0>
KipIngram: exactly
<KipIngram>
I have IP always pointing to the next cell to be executed - not the one I'm doing now.
<KipIngram>
When you say "broke your variables" what do you mean exactly?
<dave0>
KipIngram: yes, i had always done that without ever thinking about why
<KipIngram>
I think the main thing I like about it is that it causes the IP to point at "parameter cells."
<dave0>
KipIngram: say it was the BASE variable.. my threaded code was header BASE cell DOCOL cell doVar cell 10
<dave0>
KipIngram: doVar did a r> to get a pointer to 10
<KipIngram>
So (lit) for instance scoops up the literal value from [IP]
<dave0>
KipIngram: but the IP pushed onto the return stack still pointed to the doVar cell *not* the 10 cell
<KipIngram>
Oh - right, you are direct threaded.
<KipIngram>
I forgot momentarily.
<KipIngram>
Yep yep - I got it now.
<dave0>
KipIngram: literal is no problem because it was a primitive and i already accounted for the different NEXT
<dave0>
but using r> to get a pointer to the parameter field broke!
<KipIngram>
Right.
<KipIngram>
Understand.
<dave0>
took me a while to find that bug :-p
<KipIngram>
See in my system the IP never points anywhere near the parameter field.
<dave0>
KipIngram: so i changed back to post-incrementing IP
<KipIngram>
I have the extra level of indirection.
<dave0>
KipIngram: aahh
<KipIngram>
It points to a cell that POINTS to the parameter field in my system.
<dave0>
KipIngram: so NEXT has no effect on where the parameters are?
<KipIngram>
Right.
<dave0>
cool
<KipIngram>
I have my header - my link field, name field. Then immediately after the name field, aligned 4 bytes, is the CFA field followed by the PFA field.
<dave0>
pre-increment vs. post-increment has an effect on my threaded code ... i am gonna have to think about how to fix it
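dave0's bug has a simple shape: in a direct-threaded system, doVar recovers its parameter field from the saved IP via `r>`. With post-increment NEXT the saved IP has already moved past the doVar cell, landing on the data; with pre-increment it still points at the doVar cell itself. A toy model (not dave0's code):

```python
# Toy model of doVar under post- vs pre-increment NEXT in a direct-
# threaded Forth. memory maps cell addresses to their contents.
memory = {}

def saved_ip_at_dovar(body_addr, post_increment):
    # After NEXT fetches the doVar cell at body_addr, the IP that docol
    # pushed (and that doVar's r> retrieves) is:
    return body_addr + 1 if post_increment else body_addr

BASE = 100                        # body of BASE: [doVar, 10]
memory[BASE] = "doVar"
memory[BASE + 1] = 10

ok = saved_ip_at_dovar(BASE, post_increment=True)
broken = saved_ip_at_dovar(BASE, post_increment=False)
print(memory[ok])                 # 10 -- r> points at the parameter
print(memory[broken])             # 'doVar' -- r> lands one cell short
```

Primitives like LIT were unaffected because dave0 adjusted them for the new NEXT directly; only the `r>`-based parameter-field trick silently inherited the old convention.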
<KipIngram>
Each of those is four bytes. The CFA field points to code, and the PFA field points to... well, something else.
<dave0>
KipIngram: ooh
<KipIngram>
For : definitions the PFA field points to the definition list.
<dave0>
KipIngram: i just inlined the parameters
<KipIngram>
Yes, in traditional Forth the parameter DATA immediately follows the CFA.
<KipIngram>
But not in mine.
<KipIngram>
What I get for that is this:
<KipIngram>
: foo ...codeA...
<KipIngram>
: bar ...codeB... ;
<KipIngram>
Notice no ; on the first one.
<KipIngram>
foo will just execute right on into bar.
<dave0>
ah interesting
<KipIngram>
There's nothing "in the way" related to the header of bar.
<KipIngram>
The definition lists go in the same region of RAM as primitive code.
<KipIngram>
I could intermingle primitive code and definition lists - it's just two different kinds of "implementation."
<KipIngram>
After that second pointer in the header is the next header, immediately.
<dave0>
does it let you do interesting tricks?
<KipIngram>
Well, the one I just described.
<KipIngram>
Mainly.
<KipIngram>
But another thing it would let me do is redefine a word, point the old header to the new list, and that would change it everywhere, even in existing definitions.
<KipIngram>
I can "vector" every word in the dictionary; in most Forth's a def you can do that to is special.
<dave0>
like redefining the 'ok' prompt!
<KipIngram>
Correct.
<dave0>
ehehe
<dave0>
a nice feature!
<KipIngram>
It's just a "built in native ability" in my architecture.
<KipIngram>
Of course, I'm wasting space.
<KipIngram>
I have that extra pointer that a traditional system wouldn't have, in every definition.
<KipIngram>
On the other hand, I don't have to include that extra pointer in primitive headers.
<KipIngram>
The CFA points to the code, and that's that - I don't even give them a PFA.
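The re-vectoring ability KipIngram describes falls out of that extra level of indirection: since compiled references go through the header's body pointer rather than straight to the body, repointing one header changes the word everywhere, even inside already-compiled definitions. A loose Python sketch with illustrative names:

```python
# Sketch of header indirection: each header stores a POINTER to the
# word's body (the PFA field). Execution always resolves through the
# header, so repointing 'pfa' re-vectors the word everywhere.
headers = {}

def define(name, body):
    headers[name] = {"pfa": body}

def execute(name):
    for xt in headers[name]["pfa"]:    # follow the PFA pointer each call
        xt()

out = []
define("prompt", [lambda: out.append("ok")])
define("greet",  [lambda: execute("prompt")])   # compiled ref via header

execute("greet")                                # -> "ok"
headers["prompt"]["pfa"] = [lambda: out.append("more please...")]
execute("greet")                                # -> "more please..."
print(out)                                      # ['ok', 'more please...']
```

In a traditional Forth, a compiled reference points directly at the old body, so only words explicitly built as deferred/vectored words get this behavior.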
<KipIngram>
I have different macros, "code" and "def" for creating the two different header styles.
<KipIngram>
Here's an example that shows some of it in action: