ChanServ changed the topic of #picolisp to: PicoLisp language | Channel Log: https://irclog.whitequark.org/picolisp/ | Check also http://www.picolisp.com for more information
<cess11> i should probably learn how 'co works, usually i just wrap 'fork with a function and log to a file
<Regenaxer> Both 'fork' and 'co' are useful independently
<Regenaxer> I use 'co' more with 'task's
<tankf33der> co is cool and trivial, in Russian a 5min task to describe the core idea.
<cess11> yeah, from the docs it looks simple but i'd need to figure out some stuff around it, like how to feed the coroutines new workloads and tune them
<cess11> in most of my use cases it would be a move from interactive to unattended
<Regenaxer> right, eg in tasks
<Regenaxer> : (co 'inc (loop (tty (println (inc (0)))) (yield)))
<Regenaxer> 1
<Regenaxer> -> NIL
<Regenaxer> : (task -6000 0 (co 'inc T))
<Regenaxer> -> (-6000 0 (co 'inc T))
<Regenaxer> 2
<Regenaxer> 3
<Regenaxer> :
<Regenaxer> But most useful for me are coroutines in printing nested structures to multi-page SVG documents
<Regenaxer> Cause this implies asynchronously iterating two separate trees
<Regenaxer> Impossible without some kind of generator for the printed items while generating a nested SVG tree for each page
<Regenaxer> And a generator is most easy in a coroutine
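[Editor's note: a minimal sketch of such a generator, assuming the standard 'co'/'yield' pattern from the reference; the 'leaves' function and the tags are hypothetical:]

```picolisp
(de leaves (Tag Tree)            # One coroutine per tag, so two trees can be walked in lockstep
   (co Tag
      (recur (Tree)
         (if (atom Tree)
            (yield Tree)         # Hand one leaf back to the caller
            (mapc recurse Tree) ) )
      NIL ) )                    # Body done: 'co' returns NIL, coroutine is disposed

# (leaves 'a TreeA) and (leaves 'b TreeB) advance independently;
# repeated calls resume the coroutine (the Tree argument is then ignored)
```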
<cess11> in recent years i've had two use cases where parallelization was crucial, in one i had to do very fast dumping and parsing of web sites, basically someone wanted me to build a register of every person involved in a certain political movement
<cess11> there was no rate limiting except for a few web sites, so those got special treatment; the others i dumped as fast as i could get the hrefs and write to disk, then wrote parsers and fed the parser and the output from shell find to forks, and kept 'cut-ing at a pace that kept the CPU close to saturated on my workstation
<cess11> in another case i had to run tens of thousands of queries against an RDBMS cluster with a minimum of locking per client session, and since db client spin-up and shutdown took the most time it was fastest to feed queries into forking around 'call
<Regenaxer> OK, but coroutines are not useful for parallelization
<Regenaxer> then 'fork' and 'pipe' and derivatives like 'later' are better
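[Editor's note: 'later' combines 'fork' with a result pipe; a sketch along the lines of the parallel-squares example in the reference, where 'mapcan' collects one fresh cell per child and 'wait'/'full' block until every cell has been filled:]

```picolisp
: (prog1                                     # Return the result list
   (mapcan '((N) (later (cons) (* N N)))     # Fork one child per number
      (range 1 12) )
   (wait NIL (full @)) )                     # Wait until every cell is filled
-> (1 4 9 16 25 36 49 64 81 100 121 144)
```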
<cess11> i know, but i also have some stuff i want to do async that is sort of long-running, and i still fork and run until the content in a file named with *Pid changes and catch output in some log files, i think coroutines would be a better option there
<Regenaxer> yeah, good
<cess11> i think i've gotten the hang of it, will need to group coroutines in pil processes and adjust config files but it seems straightforward
<Regenaxer> yes
<Regenaxer> Please ask here