rob_w changed the topic of #picolisp to: PicoLisp language | Channel Log: https://irclog.whitequark.org/picolisp/ | Check also http://www.picolisp.com for more information || Attention due to latest spam floods this channel will only allow registered users to send messages - check https://freenode.net/kb/answer/registration
pierpal has quit [Ping timeout: 252 seconds]
Natechip has joined #picolisp
Natechip has quit [Remote host closed the connection]
ubLIX has quit [Quit: ubLIX]
jibanes has quit [Ping timeout: 244 seconds]
jibanes has joined #picolisp
jibanes has quit [Ping timeout: 268 seconds]
jibanes has joined #picolisp
pierpa has quit [Quit: Page closed]
_whitelogger has joined #picolisp
aw- has quit [Quit: Leaving.]
macker15 has joined #picolisp
macker15 has quit [Remote host closed the connection]
orivej has quit [Ping timeout: 252 seconds]
<razzy> Regenaxer: is there a good way to make a skip list? https://en.wikipedia.org/wiki/Skip_list
<Regenaxer> Sure. List of lists
ubLIX has joined #picolisp
<razzy> Regenaxer: will picolisp swap them properly? it should
<razzy> hmm
orivej has joined #picolisp
ubLX has joined #picolisp
ubLIX has quit [Ping timeout: 252 seconds]
<Regenaxer> swap? Have you looked at 'swap' or 'xchg'?
<razzy> Regenaxer: i mean, would the long nested lists live in RAM? or would they get swapped out to HDD if they are not needed?
<Regenaxer> Lists always live in RAM
<Regenaxer> They get swapped if the OS does so
<Regenaxer> Again, preliminary worries!
<razzy> i am guilty
<Regenaxer> A list of one million elements is unusable for other reasons, but takes only 16 MiB
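(That figure follows from the pil64 cell size: a cell is two 64-bit words, i.e. 16 bytes, so 1,000,000 cells come to roughly 16 MB; see doc64/structures.)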
orivej has quit [Ping timeout: 268 seconds]
<razzy> thx
<razzy> Regenaxer: a skip list is something different than just a list of lists
<razzy> a skip list has the same data in different nested lists
<Regenaxer> Still just lists of lists, no?
<Regenaxer> (in pil everything is a list anyway ;)
<razzy> yes, but i do not know how to handle addressing
<razzy> it is ctl magic
<Regenaxer> ctl = control?
<razzy> i think it is some kind of pointer in picolisp lists
<Regenaxer> lists consist *only* of pointers
<Regenaxer> and thus s-exprs in general, with the exception of short numbers
<Regenaxer> doc64/structures
<Regenaxer> short numbers and the DIG part of big numbers
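A rough, untested sketch of the "list of lists" idea, as a fixed two-level skip list: the upper "express" level simply reuses every second cell of the lower list, so the addressing is nothing but shared cons cells (skipLookup is a made-up name for illustration).

    # Lower level: a plain sorted list
    (setq *Low (1 3 5 7 9 11 13 15))

    # Upper level: every second cell of *Low, sharing the same cells
    (setq *High
       (make
          (for (L *Low L (cddr L))
             (link L) ) ) )

    (de skipLookup (N)            # made-up helper
       (let L *Low
          (for H *High            # skip ahead on the upper level
             (T (> (car H) N))
             (setq L H) )
          (for X L                # then scan the lower level
             (T (> X N))          # passed it: not found
             (T (= X N) X) ) ) )  # found it

A real skip list would add more levels and an insert operation, but the cells stay ordinary list cells.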
ubLX has quit [Quit: ubLX]
pierpal has joined #picolisp
pierpal has quit [Read error: Connection reset by peer]
orivej has joined #picolisp
<razzy> does "(NIL (< N 3) (printsp 'enough))" have special meaning in whole picolisp? or just in escape in (for *) function
<Regenaxer> It is a special syntax 'for' but also 'do' and 'loop'
<Regenaxer> NIL *looks* like a function call here, but it is not
<razzy> i know
<Regenaxer> ok
<razzy> but the picolisp interpreter could look for patterns similar to functions and make appropriate choices
<razzy> it could be useful to jump out of the current list
<Regenaxer> you cannot jump out of a list, only out of a loop
<Regenaxer> The above pattern does not work in 'while' or 'until' if you mean that
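For the record, razzy's snippet in context, as a rough example (the same NIL clause works in 'do' and 'loop', but not in 'while' or 'until'):

    : (for N 10
       (NIL (< N 3) (printsp 'enough))   # exit as soon as (< N 3) is NIL
       (printsp N) )
    1 2 enough -> enough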
<razzy> maybe jump out of the current function? that is a clearer way to express it
<Regenaxer> Also not. There is no way to jump out of a function in pil
<Regenaxer> You can jump out of more deeply nested structures with 'throw'
<Regenaxer> So yes, with 'throw' you jump out of one or many functions
<razzy> is it not basically the same?
<Regenaxer> The same as what?
<razzy> as jumping out of for
<Regenaxer> I mean, 'throw' is the only way to jump in this sense
<razzy> i will read
<Regenaxer> 'for' is left with NIL or T in a more controlled way, I would say
pierpal has joined #picolisp
<Regenaxer> ah, and 'yield' is also like 'throw', ie "jumping"
<razzy> thx
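A small, hypothetical sketch of that kind of jump: 'throw' unwinds out of one or more function calls to the matching 'catch' tag (the names 'inner' and 'outer' are made up).

    (de inner (N)
       (when (> N 5)
          (throw 'tooBig N) )   # unwinds out of 'inner' AND 'outer'
       (* N N) )

    (de outer (Lst)
       (catch 'tooBig
          (mapcar inner Lst) ) )

    : (outer (1 2 3))
    -> (1 4 9)
    : (outer (1 2 9 3))
    -> 9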
<razzy> is (I . L) special notation, with I being the index number of the cell?
<razzy> i guess it works only in the 'for' function
<Regenaxer> correct. Special syntax only in 'for'
<razzy> but this one i find awesome :]
<Regenaxer> useful :)
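A quick illustration of that 'for' syntax: the CAR of the pair gets the running index, the CDR gets the element.

    : (for (I . X) '(a b c)
       (println I X) )
    1 a
    2 b
    3 c
    -> c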
<razzy> do coroutines for 64bit pil work on several cores?
<Regenaxer> no, they run in the same process, so only on a single core at a given moment
pierpal has quit [Quit: Poof]
<razzy> but one core can process several lisp threads at a time, yes?
pierpal has joined #picolisp
<Regenaxer> One core can run one process at a time
<Regenaxer> or one thread, but pil has no threads in that sense
<Regenaxer> coroutines are logically parallel, not physically
<razzy> hmm, will it show in the debugger?
<razzy> i thought that you cram several lisp instructions into one real 64-bit CPU instruction
<razzy> idk how it works
<Regenaxer> I *planned* for PilOS to execute several primitives of a single lisp function in parallel on several cores
<Regenaxer> eg doing things on the stack and in the heap in parallel
<Regenaxer> not sure if this would be faster at all, as the bottleneck is the buses
<Regenaxer> ie memory access
<Regenaxer> There are no "lisp instructions"
<razzy> i do not understand coroutines, are not logically parallel instructions extremely rare?
<Regenaxer> and one lisp function consists of many, many 64-bit instructions
<Regenaxer> They are useful sometimes
orivej has quit [Ping timeout: 252 seconds]
<razzy> http://rosettacode.org/wiki/Generator/Exponential#PicoLisp like doing 2 powers at the same time?
<razzy> how does it recognise that?
<Regenaxer> A more typical example is traversing several trees at the same time, asynchronously
<Regenaxer> recursive traversal
<razzy> and you could cram more of them into one CPU core?
<razzy> awesome :]
<Regenaxer> What do you mean?
<Regenaxer> Pil does not deal with cores, the OS does
<razzy> ok, i have a vague, probably wrong idea that i am satisfied with
<Regenaxer> And there is nothing anywhere which crams something else into cores (?)
<Regenaxer> A single core always executes maximally one thread or process
<razzy> yes
<Regenaxer> An example for parallel traversing trees:
<Regenaxer> http://ix.io/1lcp
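That paste may expire, so here is a rough, untested sketch in the same spirit (pil64 only; 'leaf' and 'sameLeaves' are made-up names): two coroutines yield the leaves of two nested lists one at a time, so they can be compared pairwise without flattening either list.

    (de leaf (Rt Lst)                # next leaf from coroutine Rt, or NIL
       (co Rt
          (recur (Lst)
             (for X Lst
                (if (pair X)
                   (recurse X)       # descend into a sublist
                   (yield X) ) ) )   # hand one leaf back to the caller
          NIL ) )                    # exhausted

    (de sameLeaves (A B)
       (use (X Y)
          (loop
             (setq X (leaf 'a A)  Y (leaf 'b B))
             (T (nor X Y) T)                   # both exhausted: equal
             (NIL (= X Y)                      # first difference: stop,
                (nil (co 'a) (co 'b)) ) ) ) )  # discarding leftover coroutines

    : (sameLeaves '(1 (2 3)) '((1 2) 3))
    -> T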
x49F has joined #picolisp
x49F has quit [Remote host closed the connection]
<razzy> how do i get useful data from a forked picolisp?
<Regenaxer> Use 'pipe'. Or IPC via 'tell', but this only *sends* to other processes
<Regenaxer> A convenient function based on 'pipe' is 'later'
<Regenaxer> afp
<razzy> imagine parallel bubble sort.
<razzy> he he
<razzy> 'later' floats my boat a little
<Regenaxer> ret
razzy has quit [Read error: Connection reset by peer]
razzy has joined #picolisp
<razzy> hm
<Regenaxer> parallel bubble sort is neither as easy nor as useful as you may think ;)
<Regenaxer> To do something in parallel, the data should either be independent, or you need to lock (synchronize) things which makes parallelism meaningless
<razzy> Regenaxer: does picolisp have locks on variables?
<razzy> it has locks on file streams
<Regenaxer> Makes no sense
<Regenaxer> yes
<Regenaxer> As there are no threads, variable locks are meaningless
<razzy> yes
<Regenaxer> The DB locks
<razzy> but if everything is a symbol, it should not be a problem to use a file lock on variables
<Regenaxer> no problem, but useless
<razzy> ah,.. i know now,..
<Regenaxer> No variable can be modified asynchronously by another process
<razzy> i cut the list into pieces and run independent bubble sorts on them
<Regenaxer> (tell <pid> 'setq '*Variable 12) will set a var in another process
<Regenaxer> but syncronously
<Regenaxer> ok
<Regenaxer> you can do that with 'later'
<Regenaxer> The example in the ref executes something on all members of a list
<Regenaxer> a kind of parallel mapcar
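The reference example meant here (doc/refL.html#later) is, if memory serves, along these lines: each (cons) is a fresh cell for one child's result, and 'wait' runs the background tasks until 'full' sees no NIL left.

    : (prog1  # Parallel background calculation of square numbers
       (mapcan '((N) (later (cons) (* N N))) (1 2 3 4))
       (wait NIL (full @)) )
    -> (1 4 9 16)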
<razzy> ah
<Regenaxer> The example is a bit meaningless though
<razzy> yop
<Regenaxer> :)
<Regenaxer> No sense to parallelize (* N N)
<razzy> bubble sort is a good example to parallelize
<Regenaxer> hmm, I'm not sure. Can you distribute the data?
<razzy> Regenaxer: your example (* N N) is better, i just had to read it again
<Regenaxer> ok :)
<Regenaxer> afp
<razzy> Regenaxer: you have a better example, but it is not that readable. i had to (read) it to comprehend it. 'later' really floats my boat. i prefer this version of the example: https://ptpb.pw/3jSB
<razzy> how much overhead does 'later' have?
<tankf33der> i've implemented map-reduce on coroutines
<tankf33der> razzy: check this out too
<razzy> i really think that coroutines and tasks should be in a separate library
<Regenaxer> ret
<Regenaxer> coroutines and tasks cannot be in a separate library, they go deep into the interpreter core
<razzy> tankf33der: now i understand coroutines better
<razzy> (later) uses fork?
jibanes has quit [Ping timeout: 272 seconds]
<razzy> (later) uses (fork)? (co)?
<razzy> i guess fork
jibanes has joined #picolisp
<Regenaxer> It uses 'pipe', which does fork() internally
<Regenaxer> : (pp 'later)
<Regenaxer> (de later ("@Var" . "@Prg")
<Regenaxer>    (macro
<Regenaxer>       (task
<Regenaxer>          (pipe (pr (prog . "@Prg")))
<Regenaxer>          (setq "@Var" (in @ (rd)))
<Regenaxer>          (task (close @)) ) )
<Regenaxer>    "@Var" )
<Regenaxer> -> later
<razzy> : (pp 'pipe)
<razzy> (de pipe 724692000 . 724692000 )
<razzy> -> pipe
<razzy> :
<Regenaxer> : (vi 'pipe)
<Regenaxer> or (em 'pipe) I think
<Regenaxer> 724692000 means it is a function pointer to a built-in
<razzy> aha!
<razzy> learning here
<Regenaxer> See doc/ref.html#ev
<Regenaxer> Under "What is an executable function?"
helloworld has joined #picolisp
<razzy> rule of thumb: how big should the code be to gain an advantage from running it through the 'later' function?
<Regenaxer> Not the size of the code, but how long it takes to run it
<Regenaxer> I used it a lot for remote queries, with network delay etc
<razzy> useful, yes... but not now :]
helloworld has left #picolisp [#picolisp]
helloworld has joined #picolisp
<Regenaxer> Take a look at misc/fibo.l
<Regenaxer> there is a parallelized version, using 'later'
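Without misc/fibo.l at hand, the idea is roughly this (an untested sketch, not the actual file; fibo2 is a made-up name): split the top-level call into two child processes via 'later' and wait for both results. For small N the fork overhead would of course dominate.

    (de fibo (N)
       (if (>= 2 N)
          1
          (+ (fibo (dec N)) (fibo (- N 2))) ) )

    (de fibo2 (N)
       (let (A (cons)  B (cons))
          (later A (fibo (dec N)))    # first child process
          (later B (fibo (- N 2)))    # second child process
          (wait NIL (and (car A) (car B)))
          (+ (car A) (car B)) ) )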
helloworld has left #picolisp [#picolisp]
<razzy> Regenaxer: imho it takes too much time to use later
<Regenaxer> I think it is very efficient
<Regenaxer> or you mean programmer's time?
<razzy> it starts a whole new pil process with full initialisation, yes?
<Regenaxer> (this is what counts most imho)
<razzy> it is
<Regenaxer> New process yes, but no initialization
<Regenaxer> copy on write
<Regenaxer> Very fast in Unix
<Regenaxer> Not slower than threads
<razzy> copy on write?
<Regenaxer> (internally the same)
<Regenaxer> yes, nothing is copied unless modified
<Regenaxer> the example (* N N) copies nothing on the OS level
<razzy> aaaa, *clever girl*
<Regenaxer> Again, you worry too early. Just try it practically instead of theorizing
<razzy> i theorized most of my life, bad habit
<Regenaxer> :)
<Regenaxer> Best is a good mixture of both
<razzy> how long does it take to build a thread?
<Regenaxer> No idea. The kernel copies the internal process structure (same as in fork())
<Regenaxer> Only a few bytes I think (few hundred perhaps)
<Regenaxer> and a new entry in the process or thread table
<Regenaxer> For some reason people are unbelievably afraid of forks but not of threads
<razzy> it is magic :]
<Regenaxer> Forked processes are better in some regards, they have their private memory
<Regenaxer> "immutable" to stay with the buzz
<Regenaxer> well, not immutable, but protected
<razzy> you could move threads to another machine
<razzy> no magic involved
<Regenaxer> nope
<Regenaxer> you can start processes on another machine
<Regenaxer> another advantage
<Regenaxer> threads must be on the same
<Regenaxer> I used distributed DBs with up to 70 processes plus children on several remote machines
<razzy> well general knowledge says you are right
orivej has joined #picolisp
ubLIX has joined #picolisp
pierpal has quit [Quit: Poof]
<razzy> it is not so bad
pierpal has joined #picolisp
pierpal has quit [Quit: Poof]
pierpal has joined #picolisp
<razzy> i am surprised, my memory does not bloat when computing fibonacci
<Regenaxer> fibo uses almost no memory
<Regenaxer> or none
<Regenaxer> only stack for recursion
razzy has quit [Ping timeout: 268 seconds]
razzy has joined #picolisp
<razzy> i am impressed :] very clever interpreter. soooo, to make it properly bloat, let's allocate some lists :]
<Regenaxer> Good :)
<razzy> the interpreter looks like a monster on its own :]
<razzy> more like a slick, elusive sprite
<razzy> i had a feeling that the interpreter throws away code that is of no use to the result
<razzy> it would be scary
fireworks15 has joined #picolisp
fireworks15 has quit [Remote host closed the connection]
razzy has quit [Ping timeout: 264 seconds]
razzy has joined #picolisp
ubLIX has quit [Quit: ubLIX]
<Regenaxer> razzy: Where did you see that code was thrown away?
<razzy> Regenaxer: for example, when you load libraries, do you load the whole code into RAM? or just what is needed?
* razzy is trying to bloat his code, with little luck
<razzy> well, i am happy, i was able to make my work with memory 2 times faster
<Regenaxer> Sorry, we had guests
<Regenaxer> In 'load', the whole file is passed through a REPL
<Regenaxer> ie each expression is read, evaluated, and thrown away
<Regenaxer> so it is in fact not a REPL, but a REL
<Regenaxer> (no print, print happens only in interactive mode)
<Regenaxer> interactive mode = stdin is a tty
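Conceptually (not the real built-in, and ignoring details like command line arguments or a literal NIL in the file; myLoad is a made-up name), 'load' behaves like:

    (de myLoad (File)
       (in File
          (while (read)
             (eval @) ) ) )   # read, evaluate, throw the result away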
<razzy> no problems with guests
<razzy> i consider IRC asynchronous, not reliable communication
<Regenaxer> T
<Regenaxer> relaxed
pierpal has quit [Quit: Poof]
pierpal has joined #picolisp
freemint has joined #picolisp
<freemint> will somebody be at froscon tomorrow?
freemint has quit [Quit: Leaving]
ubLIX has joined #picolisp
viaken has quit [Quit: WeeChat 2.1]
viaken has joined #picolisp
siniStar has joined #picolisp
siniStar has quit [Remote host closed the connection]
alexshendi has joined #picolisp