<tankfeeder>
maybe garbage collector on anon cells
<Regenaxer>
hmm, not so big
<tankfeeder>
(gc 200) saves something
<Regenaxer>
Why should cygwin be different?
<tankfeeder>
hpux too
<Regenaxer>
no idea
<tankfeeder>
with (gc 20) it crashes
<Regenaxer>
yes, memory
<Regenaxer>
no system calls involved
<Regenaxer>
But perhaps the heap is destroyed *before* that?
<tankfeeder>
a flat list of 1M anons works
<tankfeeder>
let me try 1000 lists of 1000 items
<Regenaxer>
Does anything happen before?
<tankfeeder>
no
<tankfeeder>
just pil start
<tankfeeder>
(grid 1000 1000) breaks it, not 'for'
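For reference, a minimal sketch of the reproduction as far as it can be pieced together from the chat (the exact session is not shown; 'grid' is assumed to be loaded from @lib/simul.l, and the order of the calls is a guess):

   (load "@lib/simul.l")   # 'grid' is defined in @lib/simul.l
   (gc 20)                 # reserve only ~20 MB of cells; with (gc 200) the crash goes away
   (grid 1000 1000)        # a million interlinked anonymous symbols; crashes pil32 on cygwin/hpux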
rob_w has quit [Quit: Leaving]
rob_w has joined #picolisp
<Regenaxer>
It is a heisenbug
<Regenaxer>
as always when gc gives problems
<Regenaxer>
The heap is destroyed somewhere
<Regenaxer>
only pil32?
<tankfeeder>
cygwin and hpux have only pil32
<tankfeeder>
itself
<aw->
hi all
<Regenaxer>
Hi aw-
aw- has quit [Quit: Leaving.]
aw- has joined #picolisp
mtsd has joined #picolisp
miskatonic has joined #picolisp
<mtsd>
Good morning all
<beneroth>
hi all
<Regenaxer>
Hi mtsd, beneroth
<mtsd>
Hi Regenaxer
<miskatonic>
hi
<Regenaxer>
Hi miskatonic
<beneroth>
(tell (all) 'hi))
<beneroth>
(incorrect but working)
<mtsd>
:)
<mtsd>
How can I check index sizes? I run things like (iter (tree 'nr '+Obj) '((N) (println (size N)))) to check external objects
<mtsd>
Was thinking about how to do the same for indexes
<mtsd>
To place them in database files and so on
<Regenaxer>
You mean the sizes of the index nodes themselves? They are all of the same size, the specified block size
<beneroth>
well how to find out how many nodes (blocks) an index tree uses?
<Regenaxer>
When a node exceeds this size, it is split into two
<Regenaxer>
hmm, this is not directly visible
<beneroth>
aye
<beneroth>
needs manual traversing through the tree I guess...
<mtsd>
It is not a problem, was just thinking about it the other day
<Regenaxer>
difficult, as the number of entries in a node may differ, eg. according to key lengths and objects
<Regenaxer>
yes
<beneroth>
I thought about it before, too. but haven't yet attempted to come up with a solution.
<Regenaxer>
A recursive iteration is needed, yes
<mtsd>
I use the setup from the demo app today, more or less. small indexes and normal indexes, (2 <indexes>) and (4 <indexes>)
<tankfeeder>
netbsd 7, i386, pil32 doesn't crash
<tankfeeder>
good sign.
<Regenaxer>
I recommend to inspect some trees, eg. with (edit (cdr (root (tree 'nm '+User))))
<Regenaxer>
Then click on the subnodes
<Regenaxer>
with 'K'
<beneroth>
yeah I have a tool for doing that.
<mtsd>
Thanks Regenaxer! I will have a look. It is a good way to learn as well
<Regenaxer>
beneroth, cool!
<beneroth>
but too specific to publish. I basically use (step)
<Regenaxer>
tankfeeder, hmm, but the bug may be there even if it does not crash
<beneroth>
then I print an HTML list of nodes and their block usage in percent (100% = perfect usage of the blocksize)
<beneroth>
handy to spot under-/overusage
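The per-object version of that idea follows directly from the 'iter'/'size' pattern mtsd posted above. A rough sketch, assuming objects of '+Obj' live in a file with block size exponent 4 (the class and the 'nr' relation are the earlier example names, the exponent is made up, and block link overhead is ignored):

   (let Block (>> -4 64)                 # 64 << 4 = 1024 bytes per block
      (iter (tree 'nr '+Obj)
         '((Obj)
            (println Obj (*/ (size Obj) 100 Block)) ) ) )   # usage in percent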
<mtsd>
Good idea, beneroth
<beneroth>
I use 4 and 6 as blocksizes for indices. Thought about using 2 too, but then ditched it and stayed with 4 and 6.
<beneroth>
I would only fit a small, often-changing index into 2
<mtsd>
I am optimising a bit at the moment. An app that has been running for a while and accumulated some data. It is to be extended now, and I am taking the opportunity to look into this
<beneroth>
default page size (aka minimum IO blocksize during read actions? If I get this correctly?) is 4kb, so picolisp blocksize 6. Going much below that probably doesn't give much performance benefit (but it does give storage and RAM usage benefits), I believe
<beneroth>
someone with more OS knowledge please correct me
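For what it's worth, the mapping assumed here is that a block size exponent B means 64 << B bytes per block, which does give 4 KB for exponent 6; it can be checked at the REPL:

   : (mapcar '((B) (list B (>> (- B) 64))) (2 4 6))
   -> ((2 256) (4 1024) (6 4096))        # exponent -> bytes per block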
<beneroth>
mtsd, very good. good opportunity :)
<Regenaxer>
I also tried 6 and 4
<Regenaxer>
I think unless the index is very big, I always stay with 4
<Regenaxer>
only for very small ones I take 2
<Regenaxer>
user names etc
<beneroth>
I think it is not that bad when the index blocksize is bigger than the usual usage... much worse when the index is split over many blocks, no?
<Regenaxer>
6 did not give a measurable advantage
<Regenaxer>
It loads fewer blocks, but loading them is more expensive
<Regenaxer>
I think for bigger block sizes the performance goes down
<Regenaxer>
The bottleneck is disk cache
<mtsd>
I saw this as a great opportunity to dig deeper into the database part of pil
<tankfeeder>
openbsd crash
<Regenaxer>
So big blocks use up more cache, even if only one item is needed
<beneroth>
Regenaxer, thanks
<Regenaxer>
Perhaps very large memory changes it
<beneroth>
I just noticed I actually don't have any blocksize 6 indices in use yet xD
<mtsd>
Perhaps 2 and 4 are more or less optimal already, for most uses
<beneroth>
mtsd, probably so
<Regenaxer>
yes
<beneroth>
mtsd, I think for storage usage, it is more important to properly assign the objects to fitting block sizes.
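In dbs terms that assignment might look roughly like the following, a hypothetical sketch in the shape of the demo app's *Dbs definition (all class and relation names are made up; the leading number is the block size exponent of that file):

   (dbs
      (1 +Small)                          # small, stable objects: 128 byte blocks
      (4 +Big)                            # larger objects: 1024 byte blocks
      (2 (+Small nm))                     # tiny, often-changing index: 256 byte blocks
      (4 (+Big nr nm) (+Small dat)) )     # normal indexes: 1024 byte blocks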
<Regenaxer>
tankfeeder, it would be important to know where exactly it crashes
<Regenaxer>
well, it is in gc
<Regenaxer>
but which cell, and who created it?
<tankfeeder>
i can't do it, i don't understand the sources
<beneroth>
tankfeeder, maybe use strace to debug
<Regenaxer>
yeah, it is difficult
<Regenaxer>
strace does not help, it is in memory
<Regenaxer>
Indexes: It is more helpful if a critical index is in its own file
<Regenaxer>
then caching is better (nearby locations)
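A nice side effect of giving an index its own file is that its block count can then be estimated from the file size alone, without traversing the nodes. A rough sketch (the path "db/app/5" and the block size exponent 4 are hypothetical, and any bookkeeping blocks at the start of the file are ignored):

   (/ (car (info "db/app/5"))    # file size in bytes
      (>> -4 64) )               # divided by the 1024-byte block size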
<beneroth>
I had such a bug once, when I stupidly re-defined a function which was currently executing further calls. When the sub-calls finished, execution jumped back into the function which called them, which had been redefined -> its cells might have been gc'ed -> if so = segfault
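In its simplest form that pattern looks something like the sketch below. It is contrived, and whether it actually segfaults depends on the heap state and the PicoLisp version, so it only illustrates the anti-pattern rather than being a reliable crash reproducer:

   (de caller ()
      (helper)                          # 'helper' redefines 'caller' while it is on the stack
      (println 'back-in-old-body) )     # execution continues in the old, now unreferenced body

   (de helper ()
      (def 'caller '(() (println 'new-body)))
      (gc) )                            # a collection here may free the old body's cells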
<Regenaxer>
yes, I remember that case
<beneroth>
hard to forget. good lesson. *g*
<Regenaxer>
But tankfeeder did not do much before that
<beneroth>
I guess so. he probably wouldn't produce such a stupid bug.
<Regenaxer>
So it sounds nasty :(
<beneroth>
well either a nasty nasty heisenbug. or a very tricky bug in the VM :(
<tankfeeder>
i will try to bump gcc on cygwin too and try again
<tankfeeder>
last time i failed
<beneroth>
interesting issue
<beneroth>
thanks for investigating, tankfeeder!
orivej has quit [Ping timeout: 255 seconds]
orivej has joined #picolisp
<tankfeeder>
mingw works! its gcc is lower than cygwin's - 6.3.0 vs. 6.4.0
<tankfeeder>
this is a good sign.
<Regenaxer>
So the evil point is perhaps 6.4.0?
<tankfeeder>
unknown
<Regenaxer>
m_mans, yesterday you asked something about the Telegram interface
rgrau has quit [Ping timeout: 256 seconds]
rgrau has joined #picolisp
<rick42>
hi all
<Regenaxer>
Hi rick42
<rick42>
wow lol | <Regenaxer> One more reason to write everything in Asm!!
<rick42>
hi Reg!
<Regenaxer>
:)
<beneroth>
Regenaxer is right :)
<beneroth>
hi rick42 o/
<rick42>
either you write the asm or some program does, and in the latter case, you wrote that program or somebody else did (and in that case you are most likely very ignorant of how it works (true for /me))
<rick42>
beneroth: o/
<rick42>
or the hw could have a bug ... but let's not go there (enough has been said lately :)
<Regenaxer>
T
<beneroth>
rick42, Turtles upon Turtles! better build the hardware too!
<beneroth>
(pilOS calling...)
<rick42>
:)
rob_w has quit [Quit: Leaving]
orivej has quit [Ping timeout: 248 seconds]
<cess11>
interesting heisendebugging
Kev1n has joined #picolisp
Kev1n has quit [Remote host closed the connection]
jibanes has quit [Ping timeout: 240 seconds]
jibanes has joined #picolisp
<tankfeeder>
solaris x64 11.3 gcc 4.5.2, crash
<tankfeeder>
just checked.
<Regenaxer>
So the gcc theory breaks?
<Regenaxer>
"x64" - So it is pil64 here? No gcc at all?
<tankfeeder>
x64 as platform
<tankfeeder>
pil32
<Regenaxer>
ah, ok
<tankfeeder>
still gcc
<tankfeeder>
i believe in picolisp
<Regenaxer>
I doubt it is the compiler's fault
<Regenaxer>
hmm, mysterious
<tankfeeder>
yea
Regenaxer has left #picolisp [#picolisp]
Regenaxer has joined #picolisp
<Regenaxer>
Wanted to say:
<Regenaxer>
Strange thing is that only *this* specific code gives the problem
<Regenaxer>
pil32 has been around for a long time before that
orivej has joined #picolisp
<beneroth>
tankfeeder, is "pil32 on an x64 OS" maybe a common pattern?
<rick42>
alleluia! :) | <tankfeeder> i believe in picolisp