<rdrop-exit> good morning Forthwrights c[]
<tabemann> hey guys
<rdrop-exit> hi tabemann
<tp> hey Forthings, greetings from Embeddedville
<rdrop-exit> greetings tp
<tabemann> hey rdrop-exit, tp
<tp> hey tabemann, I got stuck on backporting changes and have just got back to the bit testing doc, then I'll be testing your Zeptoforth disassembler-to-GAS output with the aim of adding RTS handshaking to the F407 Disco
<tabemann> cool
<tabemann> I haven't actually tested my gas mode yet against a real live assembler
<tp> tabemann, I'll definitely be doing that and documenting how I go
<tp> tabemann, if you have a minute, check out my "Forth Project Tarball" style of project release, your chance to tell me what could be improved ?
<tp> tabemann, or just flame me with "that's sucky d00d3"
<tabemann> lol
<tp> tabemann, 6 years of Forth use and this is the general-purpose format I've arrived at. Theoretically any Mecrisp-Stellaris user can develop with it no matter what system they use
<tp> tabemann, it's all created automatically from a working target by running one shell script
<tabemann> one typo: under License "Licensealong" should be "License along"
<tp> oops, fixed :)
<rdrop-exit> "ANS Forth is, to the extent possible, a <it>syntactic specification</it>" -- Jack Woehr
<rdrop-exit> This, in a nutshell, is why so many Forthers ignore the standard.
<veltas> Is there actually a standard Forth word that lets you implement the division required for # ?
<veltas> I need to be able to do a division + remainder like: ( ud n+ -- ud n+ ), giving the quotient and the remainder
<veltas> Or better
<veltas> The standard division words for double integers are either signed (not equivalent under division) or won't fit the full result in general
<tp> u/mod ( u1 u2 -- u3 u4 ) 32/32 = 32 rem 32 Division u1 / u2 = u4 remainder u3
<tp> mod ( n1 n2 -- n3 ) n1 / n2 = remainder n3
<tp> any of those useful ?
<veltas> Thanks tp but no those are single cell integers
<tp> um/mod ( ud u1 -- u2 u3 ) ud / u1 = u3 remainder u2
<veltas> Unless I am being maths dumb and there is a mathematical way of using that to do larger division
<tp> ud/mod ( ud1 ud2 -- ud3 ud4 ) 64/64 = 64 rem 64 Division ud1 / ud2 = ud4 remainder ud3
<tp> m/mod ( d n1 -- n2 n3 ) d / n1 = n3 remainder n2
<tp> d/mod ( d1 d2 -- d3 d4 ) d1 / d2 = d4 remainder d3
<tp> d/ ( d1 d2 -- d3 ) d1 / d2 = d3
<veltas> ud/mod is perfect, but it's not standard. I am going to implement UD/MOD or a variant of UM/MOD that doesn't lose precision
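(Aside: a minimal sketch of such a word, often called MU/MOD in older systems; it assumes only the standard UM/MOD and divides the high cell first, so the double-width quotient loses nothing.)
    : mu/mod  ( ud u -- u-rem ud-quot )
       >r  0 r@ um/mod      \ divide the high cell by u first
       r> swap >r um/mod    \ then the low cell together with that remainder
       r> ;                 \ ( rem quot-lo quot-hi )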
<tp> what about fixed point ?
<tp> d+ ( df1 df2 -- df3 ) Subtraction of two fixpoint numbers
<tp> oops
<tp> f/ ( df1 df2 -- df3 ) Division of two fixpoint numbers
<veltas> Floating point right?
<veltas> My base system will not contain floating point words, that's an elective or a build option
<veltas> But # is a core word (and rightly so); it's just annoying that they don't define a standard word suited to doing the calculation!
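(Aside: with the mu/mod sketched above, one common shape for # itself, assuming the standard words BASE and HOLD; the 7 bridges the ASCII gap between '9' and 'A' for bases above ten.)
    : #  ( ud1 -- ud2 )
       base @ mu/mod  rot       \ ( quot-lo quot-hi rem )
       9 over < if 7 + then     \ digits above 9 map to 'A'..'Z'
       [char] 0 +  hold ;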
<tp> no, fixed point
<veltas> I won't support fixed point in the base system (if at all)
<tp> I only use fixed point, I'll never use floating point
<veltas> Yeah I've noticed hardware uses fixed point a lot
<tp> almost exclusively, for one thing it's more accurate
<veltas> Is it now? :)
<tp> it's crazy, I have more accuracy on a stm32 than on my pc
<tp> sure
<veltas> And I've also found that most people don't understand the standard C functions for correctly loading/storing fixed point into floating point, so that has caused issues in reviews
<veltas> Where people are like "er what is this function and what is happening here?"
<tp> well thats C for ya :P
<veltas> It's not C it's just people who don't understand floating point
<veltas> The double precision floating point numbers you get on modern PCs are capable of accurately representing all the fixed point numbers I've come across sampled from hardware
<tp> it's a confusing topic for those that haven't researched the precision of floating point vs fixed point, I agree
<tp> I think it's C
<tp> Forth people tend to understand the difference
<veltas> But yes, obviously if the mantissa size + 1 is smaller (in bits) than a fixed point numerator or divisor then you will lose accuracy going to floating point
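(Aside, to put a number on that: an IEEE binary64 double carries a 53-bit significand, so fixed point values with 53 or fewer significant bits, which covers typical 16-32 bit sensor readings, convert exactly; wider values, e.g. a full 64-bit s31.32 quantity, can lose their lowest bits.)
    binary64 significand = 53 bits  =>  integers up to 2^53 = 9007199254740992 are exact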
<tp> C people have the same unquestioning trust in floating point as they do in their compiler
<veltas> My experience is that people are very mistrusting of floating point and treat it like a magic box that sometimes gives weird inaccurate results
<tp> then there is also the fact that you can't represent 0.1 accurately in binary anyway
<tp> oh ok, well you'd have a lot more exposure there than me
<veltas> If you understand how it works and what's guaranteed then it's better, you start knowing that floating point works well but it's not a "cure all" to accuracy because nothing is
<veltas> It's like everything in programming, you need to actually understand it
<tp> my understanding is that there are two issues with floating point: 1) lack of precision, 2) the usual decimal-to-binary issues
<tp> agreed
<tp> and Forth people understand the differences in my experience
<tp> perhaps because (in many cases) they actually write their systems
<veltas> Usually if you expect accurate decimals the solution is to change units
<veltas> i.e. in currency store cents
<tp> we do have words that utilise the F4's floating point unit, but they were user submitted and came years after the default Mecrisp-Stellaris, which uses fixed point
<cmtptr> i've recently developed the opinion that there is no valid use case for floating point
<veltas> cmtptr: I wouldn't use anything else for computer graphics
<veltas> See: Playstation 1 wobbling because it uses fixed point graphics
<tp> sure Veltas, but we tend to call that system 'scaling' in Forth
<veltas> Actually it doesn't use fixed point, really it uses integers, so that is not fair
<veltas> Yeah it's more like 'scaling' as you say
<cmtptr> fixed point is integers. they're the same thing, you just have an implied divisor built into your arithmetic
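(Aside: a minimal sketch of that implied-divisor style in standard Forth, using a hypothetical 1.75% rate on a balance held in integer cents; */ keeps the intermediate product double-wide, and the result is truncated.)
    : 1.75%  ( cents -- cents' )  175 10000 */ ;
    12345 1.75% .   \ prints 216, i.e. $2.16 on a $123.45 balance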
<tp> cmtptr, how about to sell MCUs with an FPU against your competition without one ?
<cmtptr> what "wobbling" are you talking about? i never owned a playstation 1
<cmtptr> tp, well yeah, once we start talking about marketing there's a use-case for everything
<cmtptr> because some moron out there wants it
<cmtptr> and it's a sin to let a fool keep his money
<tp> cmtptr, thats the only use case I can think of also
<veltas> cmtptr: Look at footage of the Playstation 1, when things move they kind of 'snap' to positions in 3D space; although it is a bit subtle at first, eventually it is very noticeable
<tp> veltas, you just had 100Hz noise in your video ;-)
<veltas> My video?
<tp> veltas, I assume you had a PS1 ?
<veltas> Anyway I am not here to argue about fixed point, scaling, etc, that's all totally fine and has its purpose. I will probably have limited support for fixed point arithmetic since I'm on an 8-bit machine anyway
<veltas> tp: No I didn't
<cmtptr> veltas, that's a precision problem. we solved that decades ago by adding more bits
<veltas> Adding more bits doesn't solve it because 3D graphics usually comes with the requirement you can zoom in and out without getting huge errors in your calculations
<cmtptr> does it? there's usually a near and far clipping plane, so you know the bounds of your scene
<veltas> Not 'zoom' necessarily but get close to a wall and I don't expect the wall to start jumping around weirdly on my screen
<cmtptr> yeah, so your near clipping plane is the worst case scenario for that. design your units with that in mind
<cmtptr> the alternative is that you use floats for spatial positioning, which then is weird because things start getting different precision the further they are from an arbitrary origin point in space
<cmtptr> the only reason you tend to not notice it is because GPUs and games today have floats with more than enough bits to hide it
<tp> cmtptr, do PSx have much higher precision FP units than X86 ?
<cmtptr> I have no idea
<cmtptr> but I will say that doom used fixed point and I never noticed this wobbling that he's talking about there
<veltas> x86 is actually quite high precision, but you are not going to be calculating on your CPU anymore
<veltas> Doom doesn't have the most general-purpose graphics engine
<cmtptr> well, that's true
<cmtptr> look, it was good enough for granddad, and it's good enough for me!
<veltas> Talking of noise....
<tp> which of these is right ?
<tp> 9.12 / 7.13
<tp> 1.27910238429172510518
<tp> 1,27910238411277532577514648437500
<tp> the lower one is from Mecrisp-Stellaris, the first one is from this PC using bc
<tp> cmtptr, I played tons of doom and never once noticed wobbling either
<veltas> 1.27910238429172510518934081346423562412342215988779803646563814866760168302945301542776998597475455820476858345021...
<veltas> According to Wolfram Alpha so I am assuming that is infinite precision, don't let me down WA
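(Aside: Mecrisp-Stellaris's fixed point format is s31.32, so the gap between the two results above is simply the format's resolution rather than a bug:
    |1.27910238429... - 1.27910238411...| ≈ 1.8e-10  <  2^-32 ≈ 2.33e-10 )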
<tp> veltas, did you just do that in your head ?
<veltas> I could do it on paper
<veltas> But I'm randomly inaccurate :P
<tp> I should try that on an M4 with FPU
<tp> I'll do the test when I'm next working on tabemann's Zeptoforth
<tp> as that's made for an M4
<veltas> tp: If you want fixed point to shine then choose a scenario where you are using almost all the bits in the fixed point, and add them together or multiply if they're both near 1
<veltas> Then it will be more accurate
<veltas> If that sounds very specific and not useful in general then you have just realised why people use floating point
<tp> veltas, that's a bit general
<veltas> Fixed point is a better fit for sensors where you know the accuracy in advance, and it's simpler to expose via a binary register
<tp> veltas, there are billions of embedded devices in daily use that don't use FP
<veltas> They're not doing general-purpose scientific calculations though, are they? If you know the terrain you can get fixed point to work just fine for your small device.
<tp> veltas, which is because small embedded devices don't have FPUs and it's too costly to implement in software
<veltas> Giving a sensor more arbitrary control of the decimal point is useless because the accuracy is roughly fixed
<veltas> Yeah
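(Aside: a minimal sketch of that sensor case in standard Forth, with a hypothetical unsigned temperature reading in 1/256 degC units; only integer scaling and the pictured-numeric words are needed, no floating point.)
    : .temp256  ( u -- )             \ u = raw reading, 1/256 degC per bit
       dup 256 /  0 <# #s #> type    \ whole degrees
       [char] . emit
       256 mod  100 256 */           \ fractional part as hundredths
       s>d <# # # #> type ;
    6500 .temp256   \ prints 25.39  (6500/256 = 25.390625)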
<tp> veltas, did the HP scientific calculators have FPUs ?
<tp> I'm pretty sure they didn't
<veltas> No, but they were meant for general-purpose scientific calculations
<veltas> And floating point is better at that than fixed point, as you just saw with that simple example
<veltas> And it doesn't matter that it's slow because 'slow' for a manually entered calculation is usually under 1 second, acceptable for a calculator
<veltas> So now we've talked about floating and fixed point, my question is: do *you* have a point? :P
<tp> veltas, no, I only do embedded work
<tp> veltas, and my hardware does not have an FPU
<veltas> Some mechanical and physical calculations in embedded computers probably do require floating point, though that's very rare, especially if what you do is linear and of limited accuracy anyway.
<veltas> I imagine GPS probably needs floating point in there somewhere
<veltas> GPS doesn't work properly without special relativity calculations if I remember rightly
<tp> veltas, can you clarify for me, are you claiming that floating point has some special feature that makes it more accurate than fixed point ?
<veltas> Yes the special feature is being able to choose what scale the mantissa represents dynamically
<tp> veltas, it seems to me that when one designs something they study the required precision and then implement a system that provides that ?
<veltas> So in general it tends to be more accurate, even though fixed point can be more accurate in specific instances.
<veltas> I'm not saying it's always more accurate because obviously it isn't, there are situations where it is less accurate
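(Aside, to make the trade-off concrete: a binary64 float keeps roughly 15-16 significant decimal digits at any magnitude, because the exponent tracks the value; an s31.32 fixed format has a constant absolute step of 2^-32, which works out to:
    ~16 significant digits near 1e6, but only ~7 near 0.001 )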
<MrMobius> HP scientifics definitely didn't have an FPU
<MrMobius> the wikipedia article on the Saturn processor used in the later scientific and most of the graphing models is really interesting
<veltas> You can tell when you press = and it freezes in front of you calculating the answer for about 0.3s
<MrMobius> native BCD with up to 15 digits iirc. assembly instructions set the number of decimals
<veltas> x86 has native BCD
<veltas> Well, 'native'
<MrMobius> heh, not sure what you're calculating but any built-in function on one of those HPs will be a lot faster than 0.3 seconds
<veltas> I don't know, trigonometry or something
<veltas> Z80 has a BCD adjustment instruction, and it only works with 8-bit calculations, so it has 'support' in a sense but it's rubbish
<veltas> I personally think BCD is kind of cool but pretty useless in practice
<MrMobius> its not rubbish at all
<veltas> I mean in comparison to plain binary
<MrMobius> it would take you at least 5x more cycles to do the correction otherwise
<MrMobius> 0 90 FOR I I SIN NEXT takes about 2 seconds on my simulated hp 48gx running at authentic speed
<MrMobius> so more like 0.02 seconds for trig including loop overhead
<MrMobius> BCD makes sense for a general purpose machine. some problems need BCD and just about anything that can be done in binary will also work in BCD so it makes sense to use that for everything
<tp> BCD is the only way to obtain high precision decimal calculations iirc ?
<MrMobius> its not more accurate
<MrMobius> it just rounds the same as a human does
<MrMobius> like if you do 1 million $0.01 transactions into a bank account you need it to be $10,000 exactly
<MrMobius> you would eventually get rounding errors in binary adding a small number like that repeatedly
<tp> yes
<MrMobius> of course you could also have your mantissa in binary but limit it to some power of 10, like having it 32 bit and limiting it to 999,999,999, but then you have to constantly check and adjust
<tp> ahh
<MrMobius> hence the appeal of the "rubbish" decimal correction
<veltas> > t = 0; for i = 1, 1000000000 do t = t + 0.01 end; print(string.format("%.2f", t))
<veltas> 9999999.83
<veltas> > t = 0; for i = 1, 1000000000 do t = t + 1 end; print(string.format("%.2f", t/10))
<veltas> 100000000.00
<MrMobius> whereas they would be identical if you did the calculation in BCD
<veltas> ........
<MrMobius> on the other hand, if you replaced the 0.01 with something you had calculated that doesn't divide evenly by 10, you would have the correct answer in binary but the wrong answer in BCD
<tp> let's all just use scaling with integers ?
<tp> bbl, freezing here, warm bed beckons!