<veltas> Might call it set>msb or something like that (set up to most sig bit)
<veltas> I don't think I'd let that be a word though, unless it fell out during refactoring
<cmtptr> it's part of a larger algo
<cmtptr> so yeah it felt like a natural part to factor
<cmtptr> it's really rounding up to a power of 2, minus 1
astrid has joined #forth
TangentDelta has joined #forth
Gromboli has joined #forth
<TangentDelta> Hello
<veltas> Hello
<cmtptr> Hello
<TangentDelta> Working on anything fun and exciting?
<TangentDelta> I've been digging into this HP IDACOM PT-500 protocol tester. It runs a modified flavor of FIG-FORTH
<TangentDelta> It has 6 Motorola 68000's in it that all run FORTH concurrently. You can move the screen+keyboard to any of them, and send messages between them.
Zarutian_HTC has quit [Remote host closed the connection]
<tabemann_> back
<MrMobius> TangentDelta, that sounds cool
<MrMobius> are you using it as a protocol tester or just having fun?
<TangentDelta> Just having fun with it at the moment.
<TangentDelta> I was using it as a protocol tester in my telecom lab, using it to capture stuff on a T1.
<TangentDelta> It's sad how little attention FORTH gets these days.
<TangentDelta> I work for a friend that does all kinds of cool things with old computer hardware/software.
<TangentDelta> He designed, and sells, a single-board computer around the Rockwell R6501Q. It didn't get much attention at first (mostly due to a lack of software).
<TangentDelta> We found ROMs for Rockwell's RSC-FORTH for the R6501Q and got it going on the single board.
<MrMobius> nice!
<MrMobius> did you guys post about that recently on one of the forums? I saw something like that
<TangentDelta> and I got TinyBASIC working too, for those heathens :P
<TangentDelta> Yeah. He made a post about it on the 6502 forum and then the VCFed forum.
<TangentDelta> TinyBASIC uses an intermediate language. I wonder if a virtual machine for that language could be written in FORTH...
<MrMobius> just for fun? seems like that would be slower
<TangentDelta> Just for fun :P
<MrMobius> haha then the answer must be yes
<TangentDelta> TinyBASIC is really slow regardless
<MrMobius> iirc part of tinybasic is written in that intermediate language
<TangentDelta> A large portion of it is
<MrMobius> which is already a forth-like approach :P
<TangentDelta> Lol, I was thinking about extending it so that you could have TinyBASIC extend itself, but then you'd end up with a poor-man's FORTH.
<MrMobius> heh
<MrMobius> I thought about writing a 6502 simulator in a 6502 forth then running basic on that
<TangentDelta> Hahaha
<MrMobius> and if it's fast enough making another 6502 simulator in that basic to simulate something else
<MrMobius> where the innermost layer is just blinking an LED
<MrMobius> if you do enough layers, you could get the blink rate down to once a second just through simulator overhead :P
<TangentDelta> Someone wrote a Lisp for FORTH, and I found a FORTH written in Lisp.
<MrMobius> i cant even wrap my brain around that
<TangentDelta> Hahaha
<MrMobius> where would you stick all those list values in your forth? hopefully not on your stack
<TangentDelta> Probably in an ALLOT array.
<MrMobius> out of memory crash in 3 2 1...
<TangentDelta> Oh it uses the heap
<TangentDelta> "allocate" is like C's malloc
<MrMobius> oh nice
<proteusguy> cmtptr, fillr is what I would call it, filling in the bits to the right of the msb.
<proteusguy> Or even just fill if there's never a use case for filling to the left or any other word that would be a complement to fillr.
<siraben> proteusguy: how's ActorForth going?
<cmtptr> proteusguy, thanks that's a good one. i think i had called it fill at one point but i felt like it was too ambiguous. i think i like fillr
<cmtptr> fillr up
<proteusguy> :-) there you have it
<proteusguy> siraben, paused since my Mom passed away. Been thinking on it again. I'm starting to consider implementing it with C++ rather than Python. Actually spent part of last night setting up a modern C++20 dev environment. Also getting a build system set up. Seems like CMake with Ninja is what's popular now.
<siraben> Right, I've seen those used in conjunction a lot. Personally I use Nix as soon as a project requires other libraries and dependencies.
<siraben> Why switch from Python to C++? Is it a matter of performance?
<proteusguy> siraben, performance will definitely improve but the real driver is that Python doesn't give me low enough level access to do some of the simple stuff. It makes it difficult to escape out of its object model. What I had to do to get branching and looping "working" was insane.
<siraben> That makes sense.
<proteusguy> The latest C++ standard also has continuations in it, which seem promising.
<siraben> Full continuations? What's the equivalent of call/cc?
<siraben> Heh, C already kinda has continuations via setjmp/longjmp
<proteusguy> Yeah but it's not "safe". Naturally the C++ ones are.
<siraben> proteusguy: have you heard of Zig?
<proteusguy> don't think so
<siraben> Zig looks like a viable C alternative, while languages like C++ and Rust are expressive and performant they are very complex
<nmz> and take too long to compile
<proteusguy> Zig looks interesting. It says it's also a C compiler - is it also a C++ compiler as well?
<proteusguy> siraben, it "sorta" does. Funny it's written in C++ but doesn't integrate with it very well.
<tabemann_> hey guys
<proteusguy> tabemann_, hey!
* tabemann_ just wrote something that should solve most of his dynamic allocation needs - a generic memory pool
<tabemann_> simple, fast, and constant-time
<MrMobius> and the main downside I think is potentially losing up to half of your usable memory
<tabemann_> no, it's not a heap
<tabemann_> it's really just a free list for constant-sized blocks
<tabemann_> unlike a heap, there is no way memory can get "lost"
<MrMobius> and 16 is your smallest block?
<tabemann_> no, 4 bytes
<MrMobius> and 8 bytes is the next smallest?
<tabemann_> 16 bytes is my heap implementation's minimum size
<MrMobius> *largest
<MrMobius> so how many 17 byte blocks can I allocate?
<tabemann_> my memory pool has a minimum size of 4 bytes because it needs to put the free pointer somewhere
<proteusguy> seems reasonable.
<tabemann_> the maximum size is how much free RAM there is on the machine
<tabemann_> one plus of the memory pool implementation is that one does not need to devote all the RAM to it ahead of time
<tabemann_> it allows adding more RAM to it after the fact
<tabemann_> currently it doesn't support removing RAM from it, though, after the RAM has been added
<MrMobius> I think that's actually the main drawback of memory pools over garbage collection - you lose up to half of your ram
<MrMobius> if you have a pool of 16 byte blocks and a pool of 32 byte blocks and you try to allocate 17 byte blocks then they will all be taken from 32 byte pool
<MrMobius> and 15 of the 32 bytes will be unused in every block
<MrMobius> which would not happen if it were garbage collected
<tabemann_> this is supposed to be a soft real-time system
<tabemann_> GC would be insane for that
<tabemann_> there's a reason why I left my heap allocator out of my standard images
<tabemann_> big, expensive resource-wise, slow, not real-time at all
<MrMobius> I know im just saying thats the main trade off as far as I can tell
<MrMobius> constant time but less usable memory
<tabemann_> I think your hypothetical problem only arises when the programmer makes poor choices as to what block sizes to use for their memory pools
<tabemann_> if they are going to have many 17-byte blocks allocated, then make a pool of 17-byte blocks instead of a pool of 16-byte blocks
<MrMobius> tabemann_, ya if you know that in advance
<MrMobius> but if you have a random chance of needing to allocate 1-32 bytes since the size might depend on runtime input then you're losing a good chunk of your memory there too
<tabemann_> for that what I'd do is determine the distribution of sizes needed, and create an array of memory pools based upon that - then dynamically select which of the memory pools one needs at runtime
<MrMobius> so if there is an equal chance of the buffer being length 1-32 you would create 32 pools?
<tabemann_> know what
<tabemann_> this particular use case probably is better served by a hybrid approach
<tabemann_> use a heap combined with multiple memory pools
<tabemann_> e.g. have 4, 8, 12, 16, 20, 24, 28, and 32 byte memory pools
<tabemann_> but don't allocate all the space needed for them ahead of time
<tabemann_> rather allocate minimal amounts in a heap
<tabemann_> and when those run out of space, allocate more space from the heap
<MrMobius> seems like you'd lose your constant time then if you need to allocate more heap space
<tabemann_> and add it to the memory pools in question
<tabemann_> yes, you would; it's not an ideal case, but if you're more concerned with space wasted than with realtime characteristics, it's probably the way to go
<tabemann_> so you have a tradeoff
<tabemann_> you can have really nice realtime characteristics, at the cost of the possibility of wasting RAM
<tabemann_> or you can have less realtime-friendly characteristics, with the plus that one can use RAM more efficiently
<siraben> concatenative, polymorphic programming in Haskell; https://github.com/leonidas/codeblog/blob/master/2012/2012-02-17-concatenative-haskell.md
<siraben> treating a stack as a heterogeneous tuple, it's possible to encode push, dup, swap, nip, etc. and compose them in the usual way
<inode> TangentDelta: ever seen a R6501Q or similar 6502 derivatives used in dial-up modems?
<veltas> siraben: nice
<veltas> I struggle to motivate myself to apply complicated language features to a language that's designed to be inherently simple in construction
<veltas> But I get that in e.g. Haskell it's nice to provide concatenative features and harness the type system there to do it (not least because you *must* use the type system?)
<siraben> You could also embed a dynamically typed language in Haskell, by making a new datatype V which has all the possible values, data V = I Int | B Bool | F (V → V) | L [V] | TypeError, then define something like
<siraben> double :: V → V
<siraben> double (I n) = I (n + n)
<siraben> double _ = TypeError
<siraben> definitely because of the type system these things can be done, so it's up to the user on how strongly typed they want their DSL to be
<siraben> I might toy around with a statically typed embedding of Forth in Haskell, so that something like TRUE 1 + would be unrepresentable, then generate Forth code
<TangentDelta> inode: Possibly. I have an ISA modem card with a large amount of Rockwell ICs on it. I haven't bothered to look any of them up in the data book though.
<TangentDelta> I have some Protel COCOT payphones that use a processor with a 6502 core, designed specifically for payphone use.
<TangentDelta> Not a Rockwell part though.
<MrMobius> rockwell did some really interesting things later too, like slightly incompatible 35MHz SMD 6502s with an added thread pointer
<MrMobius> i think for modems
<MrMobius> might not be that model but some had a mode where X specifies which zero page byte is accumulator
<MrMobius> both of those are big speed-ups for forth
<veltas> siraben: Yes a library for doing forth-style code in Haskell that compiled to something like standard forth would be cool
<veltas> With type safety
<veltas> Well I suppose it depends how flexible/approachable it is, I'm sure it can be done, just whether it would really be of use to Forthers is the question
<veltas> This discussion of concatenative programming from the functional perspective is quite interesting, would be nice if that got traction one day
<siraben> I like to think that FP combines the concatenative and algebraic styles together
<siraben> So you can write foo x y z = x + y * z just as easily as main = interact $ prettyPrint . solve . parse
<veltas> Under "Point-free Expressions" they compare point-free forms and concatenative, as if they are different
<veltas> Of course I know FP is capable of representing things concatenatively anyway
<veltas> Well any language can really
<siraben> Anyone here read Algebra of Programming? The book uses a ton of pointfree FP, which especially helps with algebraic reasoning
<veltas> I remember getting the same 'chills' learning Forth that I got when seeing point-free programming in Haskell the first time
<veltas> I like factoring, what can I say
<siraben> Have you heard of hlint? It's a linter that often suggests to rewrite things in pointfree form, e.g. (\x → f (g x)) becomes (f . g)
<veltas> Nope, I don't write Haskell though
<veltas> I wrote Haskell years ago for some course at uni, I thought it was really nice but just lost interest.
<veltas> I don't have a negative opinion, and I think it would be a good thing to learn, but my interest has gone in other directions
<siraben> Of course, only so much time to do things
<veltas> siraben would you say that retro was a functional language, a procedural language, or neither?
<siraben> I haven't used retro, but from what someone posted here recently, it looks at least procedural
<siraben> Does it have higher-order functions?
<veltas> It uses quotations a huge amount
<veltas> And functions that manipulate the xt's from those quotations and other things on the stack
<siraben> Hm, so I'm guessing that allows things like functions to be passed and applied
<veltas> Yes
<veltas> I want to say it's not a FP, but I think I can't give a good generic answer for why that wouldn't also reject Lisp
<siraben> Does it have closures?
<veltas> No but neither do most lisps (if I remember rightly?)
<veltas> Or does it crc?
<siraben> IIRC McCarthy's original LISP-2 used dynamic scope and thus didn't have closures
<siraben> Usually Lisps these days have closures and lexical scope to varying degrees
<siraben> funnily enough, I wonder how much closures are even needed in pointfree style of programming
<veltas> Hmmm
<veltas> I used to think there was a clear line between FP and procedural, but I realise that might not be true, or maybe I don't understand the definition at all
<crc> As a forth, retro is procedural. But it does draw some influence from functional programming
<veltas> I think the main point is some languages are closer to the lambda calculus heritage, and some closer to the turing machine heritage.
<crc> No closures
<veltas> Yes crc, although by that definition lisp is procedural. But it's a reasonable definition for current languages with how FP has evolved over time
<siraben> retro sounds reflective, which overlaps with FP a bit
<siraben> Well, that'd make C functional since it has function pointers, heh
<veltas> Exactly
<siraben> another good measure is how amenable is the language to equational reasoning. can you replace a word by its body and expect the same behavior?
<siraben> and likewise, replace expressions with equivalent ones and expect the same behavior
<veltas> It is a bit subjective, especially with earlier languages which kind of fail even modern definitions of a "high level language" most days
<siraben> There was a great paper I came across on pinning down what is really meant by "expressiveness" in programming languages
<siraben> after thinking about it more I'd say Retro is procedural but not functional, for the same reason C isn't functional; closures are pretty key to FP IMO
<veltas> I personally think C++ and Lua are more procedural than retro, but they both have closures.
<veltas> I was reading something about Turing suggesting to somebody they should write a program on their early computer that
<veltas> let them run a program emulated, so they could debug it in detail and 'trace' what it does for study
<veltas> And I thought it was quite funny because he was kind of like the real computer analogue of a universal turing machine
<siraben> Hah, nice. Turing was very forward-thinking.
Zarutian_HTC has joined #forth
<tabemann_> hey guys
<Zarutian_HTC> h'lo
* tabemann_ is weighing whether a hybrid heap-memory pool solution is too expensive in flash needed
<tabemann_> i.e. memory exists in a heap, out of which space is allocated by a memory pool as needed
<Zarutian_HTC> been thinking about using something like Infocoms ZCSII to encode word names
<tabemann_> in some old Forths they only used the length of a word and the first three characters
<Zarutian_HTC> that doesnt work with other languages than english
<Zarutian_HTC> plus I want SEE to work properly
<tabemann_> well yes
<tabemann_> I wasn't recommending the approach
<veltas> tabemann_: what are your constraints?
<veltas> And is concern here about using too much space by the words needed, what are the constraints there?
<tabemann_> flash usage isn't too much of a concern, as the chips I'm using have about 512K to 1M of flash on average
<tabemann_> even still, a fully loaded system will have a lot of extra non-standard functionality, which will add up
<tabemann_> ("non-standard" as it's not included in the official binaries, and has to be loaded by the user)
<tabemann_> other constraints are realtime performance - in the common case it will be fast, not much slower than plain memory pools
<tabemann_> but in the worst case it will have the behavior of a heap, because it will need to allocate more memory to add to the memory pools
<tabemann_> another constraint is memory usage efficiency
<tabemann_> in this regard it aims to have better performance than plain fixed-size memory pools
<tabemann_> because it is internally an array of separately-expanded memory pools for a range of sizes
<tabemann_> each of which are expanded as needed with memory taken from the heap
<veltas> "better performance than plain fixed-size memory pools" is this because the data could be larger so would be some kind of list?
<tabemann_> but which can have minimum allocation sizes smaller than those permitted by the heap (which has a minimum allocation size of 16 bytes)
<tabemann_> veltas: it's because it is an array of memory pools, and it automatically selects the smallest pool necessary, and expands it as necessary so it does not all need to be pre-allocated (aside from the total memory allocated for the heap)
<tabemann_> *smallest block pool necessary
<veltas> When you say an array of memory pools, is it like dlmalloc? http://gee.cs.oswego.edu/dl/html/malloc.html
<veltas> Well it uses an array of free block lists, starting from smaller to larger sizes
<tabemann_> aw shite - just realized a problem with what I'm implementing - my memory pool implementation has no size marking, because all the blocks are guaranteed to have the same size
<tabemann_> but for this, because there are multiple block sizes, each block needs to have a size field
<tabemann_> and it has to be four bytes in size to maintain alignment
<veltas> Yes sounds like a full heap implementation
<veltas> I personally think a heap is unnecessary for an embedded system, but at the same time I'd expect it in any embedded system at least as an option
<tabemann_> I already have a heap implementation, that's the thing
<tabemann_> I wanted something faster, with better performance in the common case though
<tabemann_> screw it, I'm not going to bother with this
<veltas> How does the current one work?
<veltas> Does it just create a big list of all freed blocks, and allocate from the first it sees the right size and increment the heap end address if not, storing size of each block at front?
<tabemann_> not quite
<tabemann_> it has an array of freed block lists
<veltas> Of different size ranges?
<tabemann_> it would choose a list out of the array based on the logarithm of its size + 1
<veltas> Yeah
<veltas> So that's just dlmalloc without the small block optimisation
<tabemann_> and it has all kinds of logic for splitting and merging blocks
<veltas> Yup
<tabemann_> e.g. if when allocating there is space left over in the rest of the block allocated from, that would be unlinked and then relinked into a new location in the list structure
<veltas> I wrote something like that for the ZX Spectrum for a text editor, it was 'fine'. If it works on a 3.5MHz Z80 it will work in embedded
<veltas> For my spectrum code originally I wanted to write it with compaction (to remove all 'gaps' if there was no free memory left because of fragmentation)
<veltas> But the usage code got way too complicated, you have to keep re-loading your locator in case it's moved! Or be too restrictive about where allocation is allowed
<tabemann_> my heap achieves compaction by merging adjacent blocks
<tabemann_> when blocks are free
<veltas> By compaction I mean moving *allocated* memory
<veltas> You will get fragmentation however you implement it, unless you use fixed size blocks or make other restrictions on usage
<veltas> The funny thing is that the code needed to handle compacting allocations lost me any real benefit I probably would have had, and most of my allocations were small anyway so it didn't really matter
<veltas> I mean size benefit
<veltas> Because on spectrum the code was in RAM
<veltas> On embedded systems would be different in that regard
<veltas> How do you know your allocation needs optimising, is this from profiling?
<tabemann_> it's because it's not very space-efficient for very small allocations
<tabemann_> the minimum block size is 16 bytes, and each block also has a 16-byte overhead