mark4 changed the topic of #forth to: Forth Programming | do drop >in | logged by clog at http://bit.ly/91toWN backup at http://forthworks.com/forth/irc-logs/ | If you have two (or more) stacks and speak RPN then you're welcome here! | https://github.com/mark4th
Zarutian_HTC1 is now known as Zarutian_HTC
<tabemann> hey guys
<KipIngram> Evening. Got my number conversion stuff plugged in today.
lispmacs[work] has quit [Remote host closed the connection]
<KipIngram> I rewrote it - it's even more highly factored than before.
<KipIngram> I used a little trick to stitch it into the interpreter loop, but that won't extend to compiling; I'll have to redo that loop.
<KipIngram> But it was good enough for pretty solid testing.
Kumool has quit [Ping timeout: 252 seconds]
Kumool has joined #forth
boru` has joined #forth
boru has quit [Disconnected by services]
boru` is now known as boru
APic has joined #forth
Rakko^ has joined #forth
Rakko^ is now known as Rakko
dave0 has joined #forth
<dave0> maw
<tabemann> hey
<dave0> hi tabemann
<tabemann> KipIngram: what forth are you using? for zeptoforth I added a hook to allow external number parsers and the like to be added to the interpreter
<KipIngram> Oh, this is one I'm writing, in assembly with nasm.
<KipIngram> I had it pretty far along, but decided to rework and tidy it up, so I'm working through the pieces step by step.
<KipIngram> It won't be long before I can load source from disk.
<tabemann> how are you working around the fact that x86 machines are a royal pain to code for on the bare metal, when it comes to device support issues and the like?
<KipIngram> Haven't really encountered any problems yet. But I haven't interfaced any devices yet either.
<KipIngram> I used system calls for the console, and will use it for disk i/o.
<KipIngram> I might port it to "real" bare metal someday - I've always figured I'll do some gadget building in retirement - this will be the software.
<tabemann> that's why I opted to go embedded to work on the real bare metal
<tabemann> embedded is far, far more friendly to bare metal than PCs
<remexre> I basically did the jump to bare metal when I wanted to do "all IO is async + blocks the current coroutine, while running others"
<remexre> and ditched amd64 for aarch64
<remexre> and yeah, aarch64 bare-metal is infinitely nicer
<KipIngram> Most of my career was embedded. It's been a while, though.
<tabemann> I've been doing all my work for more than a year on Cortex-M targets
<tabemann> my work on my own that is
<tabemann> my day job is as a web developer :(
<KipIngram> The idea is for this Forth to be easy to port to ARM.
<KipIngram> The primitives are written using a set of macros that I call "portable instructions."
<KipIngram> The idea is that if I implement the pi macros in ARM, the rest will just work.
<KipIngram> We'll see how it turns out. :-)
<tabemann> I just decided to write directly for ARM from the getgo
<remexre> for running under linux, probably
<tabemann> I tried writing a portable Forth that could work under both linux and embedded, but then decided eventually that it just wasn't gonna work
<tabemann> so I decided to stop bothering with Linux and instead to specifically target ARM
<tabemann> on the bare metal
<remexre> if you haven't seen io_uring, it's a step towards making them closer, but you need a kernel from within the last ~2 years
<MrMobius> i wonder what the performance of a 64 bit x86 forth is like
<KipIngram> From what I can tell around 25-30 percent of C.
<MrMobius> with all the register renaming and other stuff, I wonder if the performance hit is a lot less
<KipIngram> Rough estimate based on a few measurements.
<tabemann> I've heard similar statistics
<MrMobius> thats amd64?
<KipIngram> I tested it on Intel.
<MrMobius> dunno if it would be much different on 32 bit but just curious
<tabemann> the biggest hit is probably all the stack interfacing
<tabemann> so it probably wouldn't make much of a difference whether it is on 32 bit or 64 bit
<remexre> yeah, I've been wondering about trying to do one where the stack is in vector registers
<MrMobius> thats one of the things I was curious about
<tabemann> probably the best way to increase performance is to find ways to intelligently put data in the stack in registers
<remexre> (and you only have 16 slots or w/e)
<KipIngram> remexre: Oh, that's interesting.
<MrMobius> I was asking in another channel about that when I read about swiftforth, I think it was, which ignores most registers and makes heavy use of the stack
<MrMobius> from what i heard, that might not actually slow you down like it would on embedded since x86 is so sophisticated
<KipIngram> The GForth guys did some research on caching more than just one stack item in registers. They got a small benefit, but not nearly as much as you get caching the top element.
<MrMobius> the problem with keeping things in registers is its too easy to write code that does unpredictable things to the stack which doesnt really happen in C code
<MrMobius> you have to spill all your registers with something as simple as IF 5 THEN
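A small illustration of MrMobius's point (my own example, not from the discussion): the two arms of IF ... THEN can leave different stack depths, so a compiler that caches stack cells in registers has to flush them to a canonical layout at the join.

    \ the two paths through this word leave different stack depths, so any
    \ register-cached stack items must be spilled to memory at THEN
    : maybe5 ( flag -- 5 | )  if 5 then ;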
<tabemann> zeptoforth only caches the top element
<tabemann> the main optimizations that zeptoforth does are A) caching the top element, B) inlining, and C) merging constants into instructions when possible
<KipIngram> Mine too. Just one item.
<KipIngram> It has a pair of address registers, though, of the sort Chuck suggested along the way. A and B.
<KipIngram> And they support post-increment and so on.
<MrMobius> ive thought about how you would optimize if you could recognize code that doesnt unbalance the stack and can therefore skip the stack and use registers
<MrMobius> so you give the user the option to write in a constrained way for good performance or in a convenient way if performance doesn't matter on that line
<tabemann> well, as part of merging constants, when it can't actually merge instructions it sometimes loads arguments into registers and uses them there rather than putting them in the TOS register, thus avoiding pushing the old TOS onto the stack
<tabemann> MrMobius: the problem with that is that it'd require multipass compilation, which would be hard on things like embedded systems
<KipIngram> That's an interesting idea. Keep a window of several instructions and look for combinations that allow performance enhancements.
<KipIngram> I'll have to give that some thought.
<tabemann> zeptoforth only delays constants from being applied immediately
<tabemann> so it can look for something like + - * / lshift rshift arshift etc. to merge it with afterwards
<tabemann> note that it is smart enough that it can treat a CONSTANT like a numeric literal here
<KipIngram> I don't yet have any optimizations beyond caching that one register and optimizing tail calls.
<tabemann> I don't do optimizing tail calls because that might break some cases of RDROP EXIT
<KipIngram> Yeah, I can manually suppress it when I want to.
<KipIngram> I don't so much care about that one return - I do it so I can actually use deep tail recursion without worrying about the stack.
<tabemann> also, I don't really make as significant use of recursion in Forth as I do in functional languages
<KipIngram> I do quite a bit of tail recursion. But I've never done a really deep one. I basically do most of my loops that way.
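A minimal sketch of the loop-as-tail-recursion style in standard Forth (my illustration, not KipIngram's code); with tail-call optimization the final RECURSE compiles to a jump, so even a deep count never grows the return stack.

    \ count n down to 0 by tail recursion; with TCO this runs in constant
    \ return-stack space
    : countdown ( n -- )
      dup .
      dup 0= if drop exit then
      1- recurse ;

    10 countdown   \ prints 10 9 8 7 6 5 4 3 2 1 0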
<MrMobius> tabemann, hmm ya youd have to compile to RAM temporarily and possibly edit
<MrMobius> tabemann, can it turn something like "3 lshift" into one instruction?
<tabemann> KipIngram: the problem with tail call optimization in Forth is that it's hard for the compiler to see : foo if bar else baz then ; such that it treats bar and baz as tail calls
<tabemann> MrMobius, yes
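For illustration (my reading of tabemann's description, not checked against zeptoforth's source), the kind of code where the literal or constant can be folded into the generated instruction:

    \ the 3 never reaches the stack at run time; the whole body can become
    \ a single shift instruction
    : 8* ( n -- n*8 )  3 lshift ;

    \ per tabemann, a CONSTANT is treated like a numeric literal here, so
    \ the 60 can be merged into the multiply the same way
    60 constant seconds/minute
    : minutes>seconds ( min -- sec )  seconds/minute * ;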
<KipIngram> Well, yeah - it takes a little doing.
<KipIngram> I've managed to make it work though.
<KipIngram> If the last thing in the word is THEN, it doesn't optimize that one.
<KipIngram> You're right that it takes care.
<KipIngram> I don't really use IF THEN much - I've gotten so I do most of my conditionals with conditional returns of one or two levels.
<KipIngram> It's an opportunity to factor.
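A hedged sketch of that conditional-return style (hypothetical ?exit, not KipIngram's actual primitives; it relies on the common R> DROP idiom, which works on most threaded Forths but is not strictly standard):

    \ ?exit returns from its caller when the flag is true
    : ?exit ( flag -- )  if r> drop then ;

    \ an early return instead of IF ... ELSE ... THEN; each conditional
    \ return is a natural place to factor
    : clamp0 ( n -- n|0 )  dup 0< 0= ?exit  drop 0 ;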
<tabemann> zeptoforth is a much more conventional forth in that regard
sts-q has quit [Ping timeout: 240 seconds]
<KipIngram> I'll have to look at my old one to get the precise details, but basically I look at the CFA of each word I compile. If it's the : handler, I set a state variable. Under certain conditions I clear the variable. Then I just use that flag to decide whether to optimize or not.
<KipIngram> I did require keeping that bit of state to get it right.
sts-q has joined #forth
<tabemann> to me at least, as I see it, TCO in the general case requires caching the whole of what is being compiled in RAM, so the compiler can figure out after the fact whether something is a tail call or not
<tabemann> anyways, g'night
<KipIngram> I couldn't guarantee you there are no cases that will break mine. I just know it's never misbehaved yet.
<KipIngram> Night, man. Rest well!
<KipIngram> When I wrote the number conversion stuff for my previous Forth, I wrote it to convert floating point numbers too. I didn't actually flesh that out this time, but there's an obvious place where a bit more code would need to go to handle it.
<KipIngram> I throw an error in this one, since floats are unsupported, but at the time I do that all the information to complete the conversion is on the stack.
<KipIngram> It supports arbitrary base up to 62.
sts-q has quit [Quit: ]
sts-q has joined #forth
mtsd has joined #forth
gravicappa has joined #forth
jedb_ has joined #forth
jedb has quit [Ping timeout: 240 seconds]
APic has quit [Ping timeout: 252 seconds]
APic has joined #forth
andrei-n has joined #forth
Rakko has quit [Quit: Leaving]
Keshl_ has joined #forth
Keshl has quit [Read error: Connection reset by peer]
nihilazo has quit [Ping timeout: 240 seconds]
rixard has quit [Read error: Connection reset by peer]
rixard has joined #forth
neuro_sys has joined #forth
<neuro_sys> In a typical threaded implementation, CREATE's default code field points to a routine that pushes the value of the first parameter cell onto the stack, right?
<neuro_sys> Just to confirm my understanding.
<KipIngram> Morning gents.
<KipIngram> neuro_sys: Yes - the address of the parameter area goes to the stack, just like VARIABLEs.
<KipIngram> That's needed for the DOES> part of CREATE / DOES> to know where to do its work.
<KipIngram> In traditional implementations the parameter area immediately follows the code field, so you can get the address by just incrementing the pointer you jumped to the code through.
<KipIngram> It doesn't have to be that way, though - anything that lets you find a parameter field will work.
<KipIngram> I have a pair of pointers for each word - one points to the code, one points to the parameter field.
<KipIngram> So primitives don't have that second pointer, since they don't have a parameter field that's separate from their code.
<KipIngram> In those traditional systems the code IS the parameter field.
<KipIngram> For primitives.
<neuro_sys> Ah right the address, not the value of the parameter field.
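A generic standard-Forth illustration of that default behavior (not KipIngram's two-pointer scheme):

    \ CREATE alone leaves the parameter field address, just as VARIABLE does
    create counter  0 ,
    counter @ .        \ prints 0
    7 counter !
    counter @ .        \ prints 7

    \ DOES> picks up that same address at run time to do its work
    : const ( n "name" -- )  create ,  does> ( -- n ) @ ;
    42 const answer
    answer .           \ prints 42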
<neuro_sys> KipIngram: You earlier mentioned that the minimum number of primitives you were able to reduce was around 16?
<neuro_sys> I am still curious what the smallest possible native/host implementation of a Forth would be, where everything is built with threaded code.
<neuro_sys> Few I checked had around 30+ primitives (native code).
<neuro_sys> I'm not including the inner/address interpreter in that, because it's trivially small and has to be native anyway.
<dave0> there's sectorforth from a couple of days ago that has 8 primitives and 3 variables... https://github.com/cesarblum/sectorforth
<veltas> Although it misses a number of features most people would consider to be crucial for Forth
<KipIngram> neuro_sys: No, I guessed that I might be able to get down to a dozen or 15 or so. I've never really pushed for that type of optimization. I'm happy to include more primitives if it means superior performance.
<KipIngram> I have some "fat families" of primitives, like conditional returns of many different flavors, so I usually wind up with a couple hundred primitives. That's why I'm trying to add this new layer built with macros - build primitives using "portable instructions." I hope to wind up with 50-60 portable instructions.
<KipIngram> Well, I think sectorforth is meant as a bootstrapping system.
<KipIngram> You'd nearly always build the remainder of your system on top of it.
<KipIngram> I read an article once that professed to be a "three instruction Forth."
<KipIngram> I didn't really consider it a Forth, but you could use it to make a Forth.
<KipIngram> It was essentially offering peek, poke, and execute as remotely tethered instructions.
<KipIngram> Sort of a "minimal bringup" system.
<KipIngram> To my eye, a Forth system has a data stack and a return stack and is threaded. Indirect and direct threading qualify - I've always had mixed feelings about subroutine threading.
<KipIngram> But I'm not going to deny anyone's claim that they've got a Forth just because it's subroutine threaded.
<KipIngram> I don't claim any particular philosophical elegance in how I look at it.
<KipIngram> Does anyone still experiment with >1 data stack Forths? I know for a few years back in the past that was kind of a hot arena.
<KipIngram> If anyone's interested, here's the code I wrote for NUMBER:
Zarutian_HTC has quit [Read error: Connection reset by peer]
<KipIngram> I may tidy up some of the final operations, but I'm happy with the bulk of it.
Zarutian_HTC has joined #forth
<KipIngram> No floating point support yet, but it does do arbitrary base up to 62 and offers "syntax sugar" for base 16 and base 2.
<KipIngram> For arbitrary base you say <base digits>:<number digits>, but for hex you can say x:<digits> and for binary b:<digits>.
<KipIngram> Up through base 36, upper and lower case letters are treated the same; starting with base 37, a-z become higher digits.
<KipIngram> This is fully debugged; as far as I can tell it completely works.
<KipIngram> The debugging was kind of fun - figuring out how to instrument it with only EMIT to work with was... interesting.
<KipIngram> It has some aspects of floating point support already in place. If I removed the error trap on . and e then it would get all the way to the end and have a mantissa value and an exponent value and a decimal place count. Only the final processing of that remains to be done, and there's a definite place to tie it in.
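A sketch of the digit mapping described above, as I read it (my own code with a hypothetical word name; it assumes the character has already been validated for the base):

    \ map one character to its digit value: below base 37 letters are
    \ case-insensitive, from base 37 up a-z become the digits 36..61
    : >digit ( char base -- value )
      >r
      dup [char] A < if [char] 0 -               \ 0-9
      else dup [char] a < if [char] A - 10 +     \ A-Z -> 10..35
      else r@ 36 > if [char] a - 36 +            \ a-z -> 36..61 in large bases
      else [char] a - 10 +                       \ otherwise fold case
      then then then
      r> drop ;

    char f 16 >digit .   \ 15
    char z 62 >digit .   \ 61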
<veltas> What is .:
<veltas> Or ;;
<KipIngram> .: is how I specify a temporary definition. It works the same as : except it marks the definition with a bit in the header.
<KipIngram> Later on when I'm through compiling with those words I'll have a word .wipe that unlinks them from the dictionary.
<KipIngram> ;; is a double return.
<veltas> Cool
<KipIngram> Yeah, I did that for the first time on my last one, and found it pretty pleasing.
<KipIngram> In that one I truly recovered all the space the names took, too, but I'm not doing that this time - I wound up with too much memory management complexity in the old one.
<KipIngram> RAM is cheap - I don't need to do backflips to recover a few bytes.
<KipIngram> I'll probably have .wipe move those words to another list, so I still "have them around."
<KipIngram> Maybe.
<veltas> Do you not have wordlists?
<KipIngram> Yes, I will have that.
<KipIngram> And yes, that can accomplish the same goal.
<KipIngram> The initial setup has one vocabulary, but all the hooks are in place for more.
<KipIngram> I'll actually implement those words in Forth later.
<KipIngram> FIND already knows how to search a whole list of vocabularies.
<KipIngram> I'll have a variable PATH that heads a linked list of vocabularies to search.
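Roughly the shape I would imagine for that search (all names hypothetical - PATH, VOC-FIND, VOC>NEXT - and not KipIngram's code):

    \ walk the linked list of vocabularies headed by PATH; return the first
    \ match as an execution token, or 0 if no vocabulary has the word
    : path-find ( c-addr u -- xt | 0 )
      path @
      begin dup while
        >r 2dup r@ voc-find ?dup if
          nip nip r> drop exit
        then
        r> voc>next
      repeat
      drop 2drop 0 ;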
<veltas> I don't know why you'd want to hide many of those words
<veltas> Like 0-9? for instance
<KipIngram> Sure - I might change my mind on some of them.
<KipIngram> Particularly the flag management words right at the beginning - they seem useful.
<KipIngram> I debated about them.
<KipIngram> But I decided because they have a stack frame variable hard coded into them that I'd go ahead and hide them for now.
<KipIngram> That makes them slightly less generic.
<KipIngram> They'd be reusable only if I use the frame variable s1 for flags.
<KipIngram> Which I may decide to accept as a "convention."
<KipIngram> Might be better to change it to s0 if I'm going to do that.
<KipIngram> Those items that are loaded onto the stack in the word "frame." The { makes them accessible as s4, s3, s2, s1, s0.
<KipIngram> Regardless of what else I've done to the stack.
<KipIngram> The 4 } drops them.
<KipIngram> Well, drops four of them. The first one is the result slot.
<veltas> How does number refer to pre/digits/post if they haven't been defined yet?
<KipIngram> Right. When this becomes real Forth source I'll have to re-order.
<KipIngram> Nasm can find the symbols.
<veltas> Oh righty
<KipIngram> So order is irrelevant at that point.
<veltas> Do you support double numbers?
<KipIngram> No; I opted for floating point instead.
<KipIngram> I figure with 64 bit to start with, double isn't horribly compelling.
<KipIngram> I could tweak it a little and do that, though - it counts digits to the right of the decimal point.
Zarutian_HTC has quit [Remote host closed the connection]
<veltas> It's not; it's just a compatibility thing now
<KipIngram> Except all the math is 64 bit at the moment, so that would take a little work.
<veltas> But it is useful for writing code that will run on 64-bit as well as 16/32-bit
<KipIngram> Not too much though - that would just be in absorb.
<KipIngram> True.
<KipIngram> I thought about it. I could do it all if I required floats to have an exponent specified.
<veltas> Also calculations with more than one operation are more accurate if they're done with 128-bit intermediates
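A concrete standard-Forth illustration of veltas's point, assuming 64-bit cells: */ keeps its intermediate product in a double cell, so this scaling works even though the product alone would overflow a single cell.

    \ scale 10^18 by 355/113 (a pi approximation); the product needs ~69 bits
    1000000000000000000 355 113 */ .               \ 3141592920353982300

    \ the same thing with the double-cell intermediate spelled out
    1000000000000000000 355 m* 113 sm/rem nip .    \ 3141592920353982300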
<KipIngram> One of my goals is scientific computing, so floating point was of real interest to me.
<veltas> I do consider floating point necessary for any desktop forth
<KipIngram> I understand.
<KipIngram> I could still have primitives that maintained 128-bit format internally.
<veltas> The first proper programming language supported floating point, in 1957
<KipIngram> It was just a decision.
<KipIngram> It would be a not-so-hard step to change it.
<KipIngram> I figure I'll add the rest of floating point after I have the system further along and can debug interactively more effectively.
<KipIngram> I'll rip out the error and have it just leave the whole stack frame there, and I can inspect it and make sure it's got the exponent and the decimal count right and so on.
<KipIngram> Then use the fpu to make it a properly formatted float.
<KipIngram> I've got all that code in my other one, so I can use it to refresh my memory.
<KipIngram> What are your thoughts on where floats should reside?
<KipIngram> Would you leave a float on the main data stack? Or on the FPU stack?
<veltas> FPU stack
<KipIngram> That's what I did on my old one. But never worked with it enough to learn whether that was the right decision or not.
<KipIngram> Since NUMBER will need the FPU to do that work, I think I may save its state off and restore it when I'm done.
f-a has joined #forth
<KipIngram> I'd like to have the whole stack for "real" work.
<KipIngram> Although I managed to do a hell of a lot with my old HP calculator that only had a 4-deep stack.
<veltas> You can't just keep adding stacks to make things easier, but given the weird size/alignment concerns when a Forth supports different-sized architectures, it makes far more sense to keep them separate
<veltas> FPU stack doesn't need to be deep
<KipIngram> Right. Even 4 is very useful.
<veltas> If most arch's have a set of separate reg's for FPU work then we might as well have a separate stack
<veltas> Could have a float TOS register too
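For reference, the standard floating wordset already works that way: floats live on their own stack and never touch the data stack (a plain standard example, independent of either system discussed here).

    1e 2e f+ f.     \ prints 3.
    3e fdup f* f.   \ prints 9.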
<KipIngram> I'm interested in a degree of portability, but I don't obsess over the full gamut of architectures out there.
<KipIngram> I'm primarily trying to make sure I'm arranging things so an ARM implementation will be good.
<KipIngram> Beyond that, I haven't given it a lot of thought.
<KipIngram> On the one I did before this I *wanted* that as well (good portability to ARM), but I wound up doing things that made it not turn out so well.
<KipIngram> I really didn't know enough about ARM at the time.
<KipIngram> This time I've kept its RISC nature in mind better.
nihilazo has joined #forth
<KipIngram> I've used macros so that I can exploit the more powerful addressing of the x64 while still having a good pathway to ARM available.
<veltas> I think your code could do with stack comments at the very least
<KipIngram> I plan to have a shadow screen later.
<KipIngram> I'll put those in then.
<KipIngram> I agree with you,
<KipIngram> but I had the code thoroughly in mind and I find it easier to work with without comments spreading it out.
<KipIngram> I've tried inline comments, stack effect comments, etc. - somehow they just grate on me. Make the code seem uglier. So shadow screens offer a reasonable compromise.
<KipIngram> The editor I wrote for the last one could show me two side-by-side blocks anyway, so that part will work well.
<KipIngram> I'm using 4k blocks these days, so 64 lines of 64 characters.
<KipIngram> Looks nice on the screen. :-)
<KipIngram> You can get quite a lot of code in a 4k block (if you don't use up a lot of the space for comments). That editor fit in one block, and offered full screen navigation with vim-style cursor control and some other nice features.
<veltas> There is sort of an inverse relationship between readable forth code and 'beautiful' forth code
<veltas> I won't sacrifice maintainability/readability for beauty
<nihilazo> I want things to be readable even if it sacrifices the code looking nicer
<KipIngram> I know. I'm not really writing this code for anyone else, though, so as long as I can remember how it works I'm fine. I think shadow screens will get me there.
<KipIngram> Early in my career I held positions where I had a good bit of authority - engineering executive type stuff. And I functioned sort of as the company's "master architect." The owner would drop architecture stuff on me, and of course he had to get what he wanted. But he left the bulk of it to me. Those "glory days" are over and these days I'm sort of a small player in a huge corporation. This Forth is my
<KipIngram> sandbox to play in exactly how I please. :-)
<KipIngram> I absolutely agree that if my intent was for a team to work with this code it would need comments.
<KipIngram> Another thing I've thought about doing someday is implementing a "comments database." Comments that can be as large or as small as I like, that are "pinned" to specific places in my codebase. An editor keystroke would open the comment associated with any given spot in a separate window.
f-a has left #forth [#forth]
<KipIngram> I conceived that idea as a way of coping with my distaste for inline comments.
<KipIngram> Trust me, though - my *hope* for this system is that I use it for decades as the primary software I put on "gadgets" that I build. I don't want to look at some part of it years down the road and have no idea what it's doing. So it needs something. The shadow screens at the very least.
<KipIngram> I really like the comments database idea, because then I can just go hog wild and put in very detailed explanations of what's going on.
<KipIngram> The reasons for the decisions I made, etc.
<KipIngram> And it will let me do it without making the code look unpleasant to me.
<KipIngram> I'd probably have the editor be able to render the characters at comment locations differently, so I can see where the comments are. But that will be something I can turn on and off.
jedb__ has joined #forth
<KipIngram> And if I do that I'll want it to be smart enough to adjust the row and column positions of the comments when I make changes to the block.
<KipIngram> So there's a bit of work there to get something like that working well.
<KipIngram> But I don't wrap at the end of lines, so it won't be *too* intractable.
jedb_ has quit [Ping timeout: 268 seconds]
<KipIngram> It would also be possible to arrange for some comments to qualify as "inline" comments that I could toggle on and off in the editor.
<KipIngram> That would make me feel *totally* different about commenting.
dave0 has quit [Quit: dave's not here]
tech_exorcist has joined #forth
<KipIngram> Oh, veltas - here's something I'd like your opinion on.
<KipIngram> 1e6
<KipIngram> int or float?
<veltas> According to standard that's a float
<veltas> But if a forth made that an int I wouldn't mind
<veltas> I've initialised stuff like that before in C, where you can convert from a float constant to an int implicitly
<veltas> KipIngram: Hey look side-by-side comments and code http://gitlet.maryrosecook.com/docs/gitlet.html
<veltas> Reminded me of what you said about the shadow screen
<veltas> Actually I would mind because 1E is shorter than 1.0E or 1.E
<veltas> But if 1. is a float then it's fine
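For comparison, what a standard text interpreter does with those spellings (requires the FLOATING wordset and BASE set to decimal):

    1e6 f.     \ a float: prints 1000000.
    1e f.      \ also a float: prints 1.
    1. d.      \ a double-cell integer, not a float: prints 1
    123 .      \ a plain single-cell integer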
f-a has joined #forth
f-a has quit [Quit: leaving]
<veltas> siraben: I'm thinking about trying out Lisp again, what Lisp do you recommend?
<siraben> veltas: I'm partial to Scheme, which probably fits the Forther mentality well (roll your own, more minimal than Common Lisp, say)
<siraben> But Common Lisp is also a common choice, and more recently in the last 10 years, Racket
<siraben> I guess it also depends on what you want to do/if you want to implement a Lisp as well
Kumool has quit [Ping timeout: 252 seconds]
Kumool has joined #forth
f-a has joined #forth
<veltas> Not sure just window shopping, but thanks siraben that is a useful summary for me
<KipIngram> I've never played with Lisp, but just looking at it from the outside I've had the impression that it has a very interesting and elegant engine under the hood.
<siraben> KipIngram: depends on the specific Lisp of course, but generally yes
<siraben> the foundations of Lisp run all the way back to Alonzo Church's lambda calculus in the 1930s
<siraben> veltas: there's also the venerable SICP textbook (and excellent 1986 lecture series on youtube), but that doesn't really teach Scheme, instead using it as a vehicle to teach concepts re: computation, compilers, interpreters, streams, message passing, pattern matching etc.
<siraben> It's funny when they mention other programming languages which mostly are obsolete now (ALGOL, PL/I, COBOL)
<siraben> KipIngram: In fact for some time the original LISP didn't even have a compiler or interpreter, McCarthy regarded it as a formal system for computation
<KipIngram> I think I knew that. Then someone... had a bright idea. :-)
<KipIngram> Chuck wrote a book way long ago, in the early days of pre-Forth, that's eye-opening regarding its heritage.
<KipIngram> It didn't start out with : definitions.
<KipIngram> It operated by text substitution.
<KipIngram> Word names were just macros that triggered the insertion of the appropriate text.
f-a has left #forth [#forth]
<siraben> heh, computation is just substitution all the way, λ-calculus also proceeds by plain substitution (provided one renames under binders correctly)
<siraben> KipIngram: which book did Chuck write?
<crc> Programming a Problem Oriented Language
<KipIngram> Yes - that one.
<KipIngram> It's an informative read.
<KipIngram> You can see the "seeds of what was to come."
<KipIngram> And he talks about a number of techniques that faded into obscurity and never made it into what was ultimately Forth.
<KipIngram> Touches on what would have to be considered very primitive file systems, built on top of the disk blocks.
tech_exorcist has quit [Remote host closed the connection]
tech_exorcist has joined #forth
proteusguy has quit [Remote host closed the connection]
f-a_ has joined #forth
f-a_ has quit [Quit: leaving]
f-a has joined #forth
neuro_sys has quit [Remote host closed the connection]
neuro_sys has joined #forth
neuro_sys has quit [Changing host]
neuro_sys has joined #forth
tech_exorcist has quit [Remote host closed the connection]
tech_exorcist has joined #forth
<neuro_sys> I dabbled a bit in Lisp, ended up mostly writing Emacs Lisp. I am far from knowledgeable about the depths of Common Lisp. But my first read was the Lisp 1.5 Primer by Weissman, as it is very simple and builds from the primitives of Lisp (basically a cons tree), which, I guess, the modern/later Lisps tend to try to hide away.
<neuro_sys> It seemed insanely inefficient, but the simplicity/elegance of building a computational system from "cons pairs" and its implications in terms of treating code as data was a pretty exciting realization.
<neuro_sys> Later when I looked into Common Lisp, I tried out the SBCL implementation, which produces pretty efficient machine code, but otherwise the amount of symbols/packages in Common Lisp made me lose interest quickly.
<KipIngram> That sounds a bit like how I reacted to LaTeX. When I first sat down to learn it, I thought I was about to learn a "language." But what it really is is a massive toolbox of widgets to achieve various typesetting goals. And there's not really any sort of clean "architectural synthesis." No "principle" you can grasp that makes a large part of the thing apparent to you. You just have to know the tools in the
<KipIngram> toolbox. That type of learning is incredibly unsatisfying to me.
<KipIngram> Though I highly value what good LaTeX chops allows one to do.
<f-a> +1
<KipIngram> Part of the problem is that that toolbox probably has literally thousands of authors.
<KipIngram> neuro_sys: I'd probably enjoy exactly the same aspects of Lisp that you did.
lispmacs[work] has joined #forth
mark4 has quit [Remote host closed the connection]
mark4 has joined #forth
<KipIngram> I just found that Lisp 1.5 Primer here, for anyone who's interested:
<KipIngram> That is a very antique-looking document...
<lispmacs[work]> "LISP is not an easy language to learn because of the functional syntax and insidious parenthetical format"
<lispmacs[work]> page vii
<lispmacs[work]> hehe
<lispmacs[work]> lisp is evil
<lispmacs[work]> i use scheme for my desktop projects. but I've not found it to be practical for small microcontrollers
f-a has left #forth [#forth]
<lispmacs[work]> there is microscheme but it is non-interactive development
gravicappa has quit [Ping timeout: 240 seconds]
gravicappa has joined #forth
<KipIngram> I do have to admit those parentheses are off-putting to me.
<KipIngram> But I still suspect there's something underneath that I'd find quite lovely.
<KipIngram> Haha - I went to get coffee, read what you quoted above, and then went back to the doc and read the same thing again. I'm just right on your heels.
<lispmacs[work]> I think the main touted advantages of lisp are (1) ease of interactive development, and (2) the homoiconicity, i.e., code and data are the same structure, making it easier to add new syntax
<lispmacs[work]> but it is problematic in a very resource-limited MC environment, because of (1) the challenges of dealing with garbage collection and (2) the challenges of translating the pure lisp ideals into lightweight compiled code. Also, as Chuck pointed out, lisp doesn't really standardize anything on the level of I/O and memory access
<lispmacs[work]> I played around with a lisp that is loadable through the Arduino IDE. It was token based
<lispmacs[work]> no, wait, I don't think it was token based
<lispmacs[work]> uLisp is what I was thinking of
<lispmacs[work]> with 328P MC, you could fill up your ram with about a dozen procedures
andrei_n has joined #forth
andrei-n has quit [Read error: Connection reset by peer]
mtsd has quit [Quit: Leaving]
mtsd has joined #forth
neuro_sys has quit [Remote host closed the connection]
neuro_sys has joined #forth
neuro_sys has quit [Changing host]
neuro_sys has joined #forth
<neuro_sys> I think I was disconnected.
<neuro_sys> KipIngram: Once you get to the part (earlier in the book) where a "symbolic expression" is formed and implemented, you will probably understand the implications of data and code being the same in Lisp, which opens up things like meta-programming (macro expansion) and runtime code generation (not that I've seen or used this aspect at all). The book is very old, but so clutter-free, and the audience is computer-savvy. I
<neuro_sys> like that older books jump straight to the point in general.
<neuro_sys> But I believe Common Lisp has a world of cool stuff beyond the simple formulation of the language presented in 1.5.
<KipIngram> It seems like a good starting point.
<KipIngram> Get the key core aspects first.
Zarutian_HTC has joined #forth
<veltas> siraben: What is a good book for learning Lisp with Scheme (I already learned a bit of elisp but don't mind a slower or faster pace book, I can just read faster)
<Zarutian_HTC> veltas: Structure and Interpretation of Computer Programs
<Zarutian_HTC> a book I always recommend regarding Scheme
<veltas> Thanks but I'm looking for a book that's more of a tutorial for Scheme and not as CS101
<neuro_sys> I was planning to read R6RS?
<neuro_sys> At least it's on my reading list.
<veltas> And which implementation should I use, MIT/GNU Scheme, GNU Guile?
<lispmacs[work]> veltas: an argument for GNU Guile is you have a pretty large support base for it at #guile and at #guix, since GNU Guile is the basis for the Guix Gnu/Linux operating system
<lispmacs[work]> Gnu Guile is fairly easy to embed in C programs which is something you eventually want to do
<veltas> Yep
f-a has joined #forth
gravicappa has quit [Ping timeout: 252 seconds]
andrei_n has quit [Quit: Leaving]
chrisb has quit [Quit: leaving]
X-Scale has quit [Ping timeout: 252 seconds]
X-Scale has joined #forth
mtsd has quit [Ping timeout: 265 seconds]
Zarutian_HTC has quit [Read error: Connection reset by peer]
Zarutian_HTC has joined #forth
tech_exorcist has quit [Ping timeout: 265 seconds]
Zarutian_HTC has quit [Ping timeout: 252 seconds]
dave0 has joined #forth
<dave0> maw
mark4 has quit [Remote host closed the connection]
neuro_sys has quit [Remote host closed the connection]
f-a has quit [Quit: leaving]
jedb__ is now known as jedb
Zarutian_HTC has joined #forth