FreeBirdLjj has quit [Remote host closed the connection]
FreeBirdLjj has joined #lisp
scymtym has quit [Ping timeout: 248 seconds]
FreeBirdLjj has quit [Ping timeout: 246 seconds]
cosimone has quit [Quit: WeeChat 2.4]
linack has joined #lisp
mindCrime has joined #lisp
mindCrime has quit [Client Quit]
mindCrime has joined #lisp
mindCrime has quit [Read error: Connection reset by peer]
mindCrime has joined #lisp
FreeBirdLjj has joined #lisp
cosimone has joined #lisp
scymtym has joined #lisp
themsay has joined #lisp
mindCrime has quit [Remote host closed the connection]
mindCrime has joined #lisp
zotan has quit [Quit: ZNC 1.6.5+deb1+deb9u1 - http://znc.in]
zotan has joined #lisp
rpg has joined #lisp
PuercoPop has quit [Quit: ZNC 1.6.3 - http://znc.in]
ggole has quit [Quit: Leaving]
actuallybatman has joined #lisp
cosimone has quit [Ping timeout: 257 seconds]
FreeBirdLjj has quit [Remote host closed the connection]
FreeBirdLjj has joined #lisp
cosimone has joined #lisp
aindilis has joined #lisp
FreeBirdLjj has quit [Ping timeout: 245 seconds]
pankajgodbole has joined #lisp
Boubert has joined #lisp
flazh has quit [Ping timeout: 245 seconds]
Boubert has quit [Client Quit]
linack has quit [Ping timeout: 268 seconds]
moldybits has quit [Read error: Connection reset by peer]
shifty has quit [Ping timeout: 245 seconds]
moldybits has joined #lisp
saravia has quit [Remote host closed the connection]
<didi>
adlai: Thank you.
clothespin has joined #lisp
pfdietz has quit [Ping timeout: 256 seconds]
ebrasca has joined #lisp
mindCrime_ has joined #lisp
mindCrime has quit [Ping timeout: 246 seconds]
Jesin has quit [Quit: Leaving]
Jesin has joined #lisp
clothespin has quit [Remote host closed the connection]
cosimone has quit [Quit: WeeChat 2.4]
refpga has joined #lisp
v88m has joined #lisp
m00natic has quit [Remote host closed the connection]
pankajgodbole has quit [Ping timeout: 245 seconds]
<flip214>
sjl_: that problem got solved in the new channel? or should we start keeping a note somewhere?
linack has joined #lisp
<flip214>
about the current channel name ... #clschool-2018-06-12-evening-in-EU
kajo has quit [Ping timeout: 248 seconds]
kajo has joined #lisp
<sjl_>
someone active has ops in #clschool, so spam can be combated successfully now
clothespin has joined #lisp
clothespin has quit [Remote host closed the connection]
rozenglass has joined #lisp
aautcsh has joined #lisp
Lycurgus has joined #lisp
bexx has joined #lisp
donotturnoff has quit [Remote host closed the connection]
aautcsh has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
donotturnoff has joined #lisp
josh5tone has quit [Ping timeout: 272 seconds]
themsay has quit [Read error: Connection reset by peer]
sauvin has quit [Ping timeout: 245 seconds]
themsay has joined #lisp
fortitude has joined #lisp
Grauwolf has quit [Remote host closed the connection]
Grauwolf has joined #lisp
varjag has joined #lisp
nanoz has joined #lisp
themsay has quit [Ping timeout: 268 seconds]
aautcsh has joined #lisp
pfdietz has joined #lisp
hatchback176 has quit [Remote host closed the connection]
simendsjo has joined #lisp
seann has joined #lisp
orivej has joined #lisp
_whitelogger has joined #lisp
themsay has joined #lisp
gravicappa has quit [Ping timeout: 245 seconds]
nanoz has quit [Ping timeout: 248 seconds]
themsay has quit [Read error: Connection reset by peer]
themsay has joined #lisp
ym has quit [Quit: Leaving]
v88m has quit [Read error: Connection reset by peer]
byronkatz is now known as awesomedood2015
simendsjo has quit [Remote host closed the connection]
simendsjo has joined #lisp
awesomedood2015 is now known as b6nvv0Rx
cosimone has joined #lisp
rippa has quit [Quit: {#`%${%&`+'${`%&NO CARRIER]
semz has quit [Quit: Leaving]
v88m has joined #lisp
cosimone has quit [Quit: WeeChat 2.4]
xkapastel has quit [Quit: Connection closed for inactivity]
itruslove has quit [Remote host closed the connection]
<Godel[m]>
Hi, in https://fare.livejournal.com/188429.html, the author claims that `… Racket keeps evolving, and not just "above" the base language, but importantly below. This alone makes it vastly superior to CL …`. What could it mean to evolve below the language? Is he referring to the subsets of racket that exist (functional, lazy)? (I haven't programmed in racket.)
manualcrank has quit [Quit: WeeChat 1.9.1]
manualcrank has joined #lisp
kajo has quit [Ping timeout: 248 seconds]
manualcrank has quit [Client Quit]
amerlyq has quit [Quit: amerlyq]
<Bike>
given the example of ffi, i would guess runtime type stuff
manualcrank has joined #lisp
<mfiano>
Also, he eventually moved to Gerbil Scheme, though I believe he is doing mostly ML these days.
simendsjo has quit [Ping timeout: 272 seconds]
<oni-on-ion>
Godel[m], perhaps something about racket swapping their guts to chez scheme recently
<Godel[m]>
Sorry, I don't understand what runtime type stuff is. I haven't dealt with racket at all.
<oni-on-ion>
it's just a branding
<Bike>
stuff about the runtime. "runtime" is not a racket-specific term.
<jmercouris>
making a javascript engine must be quite the task...
<Xach>
My impression is that it's not too bad but making it "fast" is a pretty big challenge
<jmercouris>
Yeah, I bet, especially since speed is so crucial for JS
<Xach>
it helps if you have many multibillion-dollar companies working on it
<fortitude>
I'd think that trying to preserve all the strange type-conversion stuff would be quite the trick (c.f. all the "what the heck javascript" talks)
<Bike>
this spec looks like it's defining a virtual machine
<jmercouris>
I have been thinking a lot about web engines, and I have concluded that the web standards have grown so massively and JavaScript has become so large partly as a way to increase the barrier to entry for new engines
<Bike>
hanlon's razor
vlatkoB has quit [Remote host closed the connection]
<jmercouris>
I think it is deliberate on the part of organizations such as Google...
<Bike>
and irrelevant, anyway
<jmercouris>
Well, not exactly, I have been considering implementing an engine in CL
<jmercouris>
I'm just thinking about all of the components, and what it would take to reasonably fund such a task
kajo has joined #lisp
<jmercouris>
maybe it is a fool's errand...
<Bike>
the good thing about it being a virtual machine is that, in theory, you don't need to think like a person to implement it, just follow all the steps stupidly
<Bike>
the bad thing is god it just keeps going
<fortitude>
if you're doing a greenfield project, it might be easier to implement webassembly support and take somebody else's javascript-on-WA, assuming that exists
<oni-on-ion>
if parenscript was written in prolog, one could just swap the args, and get the inverse.
<oni-on-ion>
mAgIc
<jmercouris>
fortitude: a very salient observation
<jmercouris>
considering; "Embeddable, portable, compact: can run on platforms with 160kB flash and 64kB system RAM"
<Bike>
96801 lines
<jmercouris>
That is truly something...
<Bike>
so there's some scale for you
<Bike>
i mean, that's not how they develop it, obviously
<jmercouris>
well, not necessarily obviously
<Bike>
it's not how they develop it. if you look at the repo they use separate source files like regular human beings.
<Bike>
and the duktape.c begins "autogenerated".
<jmercouris>
oh I see
<Bike>
anyway there you go. should give you an idea of how hard a js engine is.
<jmercouris>
at any rate, I bet it could be far fewer lines of CL
<Bike>
the site also links several other "small" projects.
<jmercouris>
so probably 1/10 the size or so
<Bike>
and microsoft's "chakra core" and v8 are both open source, i just don't think they have ffiable apis
<jmercouris>
so it's a 10k line project, so maybe about 6 months to a year of man-hours
<jmercouris>
just for the JS runtime
<jmercouris>
so I guess the minimum budget to develop a new web rendering engine must be at least 500,00.00 USD
<jmercouris>
s/500,00.00/500,000.00
<Bike>
also, that's one designed for low memory usage rather than speed. it probably doesn't jit and stuff.
alexanderbarbosa has joined #lisp
<jmercouris>
Bike: can you explain JIT?
<jmercouris>
or rather as an open question, can anyone in the channel explain to me JIT? I don't know much about compilers
<White_Flame>
of course, that example is in C, and thus has to implement its own GC, while in Lisp you can simply use the native one
<White_Flame>
I don't think javascript is all _that_ difficult to implement in a naive approach
<White_Flame>
parsing the source text is probably going to be one of the more finicky things to do
libertyprime has quit [Ping timeout: 245 seconds]
<Bike>
i'm sure it's not difficult. it's just that there's a lot of it.
<Bike>
jmercouris: it means compiling things at runtime, generally.
<Bike>
sometimes patching existing functions is also implied
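[A minimal sketch of what "compiling at runtime" looks like from a CL REPL — purely illustrative, unrelated to how any JS engine is built; MAKE-ADDER is an invented name:]
    ;; Build a fresh lambda form at runtime and hand it to the compiler.
    ;; COMPILE with NIL as the name returns the compiled function itself.
    (defun make-adder (n)
      (compile nil `(lambda (x) (+ x ,n))))

    (funcall (make-adder 3) 4)  ; => 7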
<fortitude>
jmercouris: at a very high level, JITing is essentially using an interpreted chunk of code until you've collected some stats about how it is actually being used, then replacing it with a compiled version that takes advantage of those stats
<White_Flame>
(and then recompile if those expected stats are violated later)
<jmercouris>
I wonder why you would do that?
<White_Flame>
performance
<jmercouris>
obviously, but in which ways?
<Bike>
because interpreting code is generally slower than letting your machine do it
<White_Flame>
you can do things like inline virtual functions, if you can resolve the type at runtime and assume it will stay consistent
<jmercouris>
okay so by compilation you mean "turning into byte code"
<jmercouris>
rather than being processed by the VM
<Bike>
or machine code.
didi has left #lisp ["O bella ciao bella ciao bella ciao, ciao, ciao."]
<White_Flame>
modern JS engines have multiple stages of bytecode and asm code
<White_Flame>
they trade off quick-to-start execution in more naive bytecode environments against slow-to-start but optimized machine-language output
<fortitude>
jmercouris: the stats you collect allow you to optimize the compiled code for how it's actually being used, instead of how you assume it might be used in some hypothetical future
<fortitude>
the tradeoff is having to wait a bit before you can actually do that (warmup times turn out to be pretty variable in some systems)
<jmercouris>
fortitude: Ok, I see
<White_Flame>
in fact, the entire raison d'être for webasm was to decrease the startup latency of loading & parsing
<jmercouris>
could this be classified as a very primitive form of "machine" learning?
<jmercouris>
as the VM runs it makes better and more informed decisions about how to compile snippets?
<Bike>
do we have to classify it as machine learning
<jmercouris>
it's trendy
<White_Flame>
although it doesn't make its own new auto-classifications; it simply routes the optimization paths the authors have provided
<fortitude>
thing is the optimizations are local and ephemeral (you don't save them in a "smarter" vm that you use the next time you start your browser up)
<White_Flame>
and many of the JIT heuristics are incredibly simple, merely figuring out what the actual concrete type of a variable usually is
<White_Flame>
from which a lot of general constant propagation style optimization can happen
<White_Flame>
and that's just basically a histogram
<jmercouris>
I don't see how the above is related to histograms
<White_Flame>
a histogram is a count of occurrences. X axis is which type a variable happened to be during a pass, Y axis is the count
<White_Flame>
so it was a null twice, and a Number 3 million times, it optimizes for Number
seann has left #lisp [#lisp]
<White_Flame>
*if it was
<White_Flame>
there's usually a count of how many times a section of code has been run, and when that hits a threshold, it attempts to optimize it further (which is where the name "hotspot" comes from)
<White_Flame>
using the histogram metrics of what the variable types were
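[A toy CL sketch of the type-histogram idea White_Flame is describing — the names *TYPE-COUNTS*, NOTE-TYPE, and DOMINANT-TYPE are invented for illustration and say nothing about how any real engine is structured:]
    ;; Count how often each concrete type is observed at a given site.
    (defvar *type-counts* (make-hash-table :test #'equal))

    (defun note-type (site value)
      (incf (gethash (list site (type-of value)) *type-counts* 0)))

    (defun dominant-type (site)
      "Most frequently observed type at SITE, or NIL if none recorded."
      (let ((best nil) (best-count 0))
        (maphash (lambda (key count)
                   (when (and (equal (first key) site) (> count best-count))
                     (setf best (second key) best-count count)))
                 *type-counts*)
        best))

    ;; A JIT would then emit code specialized to (dominant-type site)
    ;; and fall back to the generic path when the guess turns out wrong.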
ebrasca has quit [Remote host closed the connection]
<White_Flame>
the JVM and various JS engines are really fascinating to look into. There's a ton of money & talent thrown at making those faster (regardless of the language quality that runs on them)
<jmercouris>
what a shame, that money could be spent to heat dumpsters in the NY winters and it would be put to better use
chipolux has joined #lisp
mindCrime_ has quit [Ping timeout: 248 seconds]
Arcaelyx has joined #lisp
xkapastel has joined #lisp
dacoda has joined #lisp
<aeth>
jmercouris: The real issue is why are those programmers being paid to work on JS engines and not CL.
linack has quit [Quit: Leaving]
<jmercouris>
aeth: because the world is a cruel and unfair place
<jmercouris>
maybe we should be asking ourselves: "ask not what your programming language of choice can do for you, but what can you do for your programming language of choice"
Bike has quit []
kini has quit [Remote host closed the connection]
dacoda has quit [Remote host closed the connection]
dacoda has joined #lisp
<bexx>
i'm doing an assignment in which I need to download the atoms in the max depth of a list to the successive upper level
<bexx>
but i can't
<no-defun-allowed>
download?
<bexx>
sorry I don't know the english word
<bexx>
downgrade?
LiamH has quit [Quit: Leaving.]
<bexx>
let me show you what I built
<no-defun-allowed>
could you provide an example? i'm guessing something like ((a b c)) -> (a b c)?
<bexx>
yeah that is
<bexx>
if (a (b c)) -> (a b c)
<bexx>
(a (b c (d e))) -> (a (b c d e))
<bexx>
the deepest level gets joined into the level above it
<jmercouris>
ok
<jmercouris>
so only the deepest elements need to go up one level?
<no-defun-allowed>
hm, do just the innermost lists get splatted?
<bexx>
jmercouris: that's correct
<bexx>
I want to understand the idea
<jmercouris>
bexx: what you can do is represent the structure as a tree
<bexx>
ok
<jmercouris>
then you can find the leaf with the greatest distance to the root node
<bexx>
and then?
<jmercouris>
this is the leaf that needs to be merged into the parent node
<bexx>
I need to go one by one?
<bexx>
I was thinking of something like ,@
<jmercouris>
well I would make a tree structure from the list, and then use a depth-first search
<bexx>
but I can't find a function to do that
<no-defun-allowed>
append?
<jmercouris>
oh, you are allowed to do it in Lisp?
<no-defun-allowed>
(append '(a b c) '(d e)) -> '(a b c d e)
<grewal>
jmercouris: My guess is that bexx has already written a flatten-one-level function and a get-deepest-leaves function. Most *good* teachers tend to walk you through difficult problems
<pjb>
bexx: deepth of an atom is 0, deepth of a list of atoms is 1, deepth of a list of lists of atoms is 2. These are the before-deepest lists you want to process.
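[One way to write the function pjb is describing — spelled DEEPTH here to match his usage; a sketch, not his actual code:]
    ;; deepth of an atom is 0; a list is one more than its deepest element.
    (defun deepth (tree)
      (if (atom tree)
          0
          (1+ (reduce #'max tree :key #'deepth :initial-value 0))))

    (deepth 'a)          ; => 0
    (deepth '(a b c))    ; => 1
    (deepth '(a (b) c))  ; => 2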
<White_Flame>
bexx: btw, #clschool is good for asking beginner questions
<White_Flame>
but you can certainly finish here
<pjb>
(I take flatten from a library such as com.informatimago.common-lisp.cesarum.list:flatten but you would have to write it for the exercise probably).
<jmercouris>
grewal: I didn't think of that
<pjb>
bexx: note that once you have a working solution, you may notice that it is very inefficient, and you might want to optimize it. This will make it more obscure…
<jmercouris>
that makes sense though, it would seem a complex problem for a new student
b6nvv0Rx has quit [Remote host closed the connection]
varjag has quit [Ping timeout: 248 seconds]
sjl_ has quit [Quit: WeeChat 2.3-dev]
<bexx>
pjb: your solution is really simple!
<bexx>
pjb: I think that i'm overcomplicating things
<jmercouris>
the solution is very syntactically simple, but conceptually difficult, in my opinion
<pjb>
bexx: the thing is that you must write first a working solution. and only once you have a working solution, you should think about optimizing it.
<bexx>
yeah i'm just trying to get a solution
<jmercouris>
I still vote for making a tree...
<bexx>
later i can think of the optimization
dacoda has quit [Ping timeout: 248 seconds]
<jmercouris>
a tree is the most natural representation of this data structure
<bexx>
jmercouris: why are you going to make a tree?
<jmercouris>
because you will have to traverse the structure a few times, and conceptually it is simpler to traverse a tree
<jmercouris>
well not a few times, but in the best case 1 time, in the worst case 2 times
<jmercouris>
you have to know where within the structure the deepest nodes lie
<bexx>
jmercouris: and how do i get the list again?
<pjb>
bexx: so it's a good idea to introduce functional abstraction over your representation to clearly manipulate the tree. The function deepth does that. My function flatten-before-deepest doesn't abstract the notion of tree; it's a little magical in that respect.
<bexx>
pjb: i'm trying to understand the magic of the case
<bexx>
pjb: why, if (deepth list) is 2, do I need to flatten the list?
<grewal>
pjb: Why do you keep using the word 'deepth'? Is it a technical term I'm unaware of?
<bexx>
grewal: no, that's just what pjb called his function
<grewal>
bexx: You should think about that question a little longer before asking it.
<pjb>
(deepth 'a) #| --> 0 |# no flatten.
<pjb>
(deepth '(a b c)) #| --> 1 |# no need to flatten it anymore.
<pjb>
(deepth '(a (b) c)) #| --> 2 |# hit.
<bexx>
pjb: but what if the input is deeper?
<pjb>
grewal: if a is a leaf, (a b c) is a tree of deepth 1, with 3 children, a, b and c.
<pjb>
grewal: (a (b) c) is a tree of deepth 2, with 3 children: two leaves, a and c, and the subtree of deepth 1, (b).
<pjb>
bexx: then we process the children recursively, in the following mapcar.
<pjb>
bexx: so the magic is that we don't explicitly say that the list is a representation of a label-less tree node, giving the list of children of the node.
<pjb>
bexx: we could rewrite it with functional abstraction that would make this clear.
<pjb>
See the usenet article linked above.
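[pjb's actual paste is not shown in the log; a sketch of what a flatten-before-deepest built on the deepth sketch above might look like — it processes each subtree on its own:]
    (defun flatten (tree)
      "Collect all the atoms of TREE into one flat list."
      (if (atom tree)
          (list tree)
          (mapcan #'flatten tree)))

    (defun flatten-before-deepest (tree)
      "Flatten every subtree of deepth 2 or less; recurse into deeper ones."
      (cond ((atom tree) tree)
            ((<= (deepth tree) 2) (flatten tree))
            (t (mapcar #'flatten-before-deepest tree))))

    (flatten-before-deepest '(a (b (c) d) e ((f (e a d) g) h i) j))
    ;; => (A (B C D) E ((F E A D G) H I) J)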
<jmercouris>
and probably should, because it is unlikely a new user will understand fully/deeply a solution of that nature
<grewal>
pjb: My question really was 'why use "deepth" instead of "depth"'? And your answer basically is deepth is (1+ depth)?
<pjb>
grewal: bexx: without the explicit functional abstraction, the magic trick is that I don't show that I've recognized that it was a problem about trees, and have given a solution that implicitly manipulates a tree.
<pjb>
We see lists, but they're actually tree nodes.
<pjb>
grewal: oh, I'm not native so I may make spelling mistakes.
<pjb>
Don't make anything of it.
<bexx>
pjb: it's really great
<bexx>
pjb: i'm trying hard to understand it
<jmercouris>
bexx: perhaps you'd better spend your time implementing a tree abstraction instead
<jmercouris>
I don't think that it is a solution that the professor imagined or intended
<jmercouris>
and part of the exercise is probably for you to understand that nested lists are trees
<bexx>
jmercouris: yeah i'm reading the gigamonkeys chapter that you posted
<jmercouris>
I imagine the next lessons will probably be on homoiconicity or something, and manipulation of ASTs
v0|d has joined #lisp
<grewal>
pjb: You usually don't make such mistakes. It's sometimes hard to tell what's an accident and what's intentional
rpg has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Bike has joined #lisp
kini has joined #lisp
<bexx>
pjb: I think that your solution is not correct
<bexx>
pjb: (flatten-before-deepest '(a (b (c) d) e ((f (e a d) g) h i) j))
<bexx>
pjb: gets -> (A (B C D) E ((F E A D G) H I) J)
<bexx>
so in this example I can't flatten (b (c) d)
<bexx>
because the deepest level is (e a d)
<pjb>
so in terms of the tree-methods indicated in that usenet post, your trees would be abstracted as: https://pastebin.com/uESF4B2s
<pjb>
bexx: good point. Then when the depth is 1, you should call flatten directly.
<pjb>
Do you mean that you don't want to flatten (b (c) d)?
<bexx>
i'm thinking that the maximum level of depth is for (e a d)
<sukaeto>
Godel[m]: if I were in a snarky mood, my response would be "Has Racket ever been used for anything outside of teaching undergrads how to program?"
<bexx>
(c) in this case is at level 3
jmercouris has quit [Remote host closed the connection]
<bexx>
(e a d) is at level 4
<pjb>
bexx: yes. My function considers the subtrees independently.
<bexx>
so i only need to downgrade (e a d)
<pjb>
bexx: if you wanted to consider the whole tree, you would have to write it differently.
<bexx>
yeah i'm trying
polezaivsani has quit [Quit: ERC (IRC client for Emacs 26.2)]
zaquest has quit [Remote host closed the connection]
<pjb>
bexx: notice that you can easily change the representation of the trees, without changing any of your code. You just need to substitute the set of tree methods in *default-tree-interpretation*
<pjb>
For example, you could add labels to your tree nodes, or use a representation with CLOS objects for the nodes, etc.
<pjb>
If you restrict yourself to binary trees, you could use cons cells instead of lists.
<pjb>
or hash-tables for the nodes, when the label is multi-valued.
<pjb>
All your tree processing code would still be unchanged.
<pjb>
bexx: notice also that you could just use CLOS and generic methods, but since we are dealing here with interpretations of the same class, namely the CONS class, we would have to wrap our lists in CLOS abstractions. I.e. define different CLOS classes for the different interpretations of our lists. tree-make-tree would then return CLOS instances instead of our lists.
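[*DEFAULT-TREE-INTERPRETATION* and TREE-MAKE-TREE come from pjb's paste, which is not reproduced here; a hypothetical sketch of the CLOS-wrapping idea, with invented names:]
    ;; Wrap a plain list in a class so generic functions can dispatch on
    ;; the intended interpretation rather than on CONS itself.
    (defclass list-as-tree ()
      ((sexp :initarg :sexp :reader tree-sexp)))

    (defgeneric tree-children (node)
      (:documentation "Return the children of NODE, wrapping sublists."))

    (defmethod tree-children ((node list-as-tree))
      (mapcar (lambda (child)
                (if (consp child)
                    (make-instance 'list-as-tree :sexp child)
                    child))
              (tree-sexp node)))

    ;; A different representation (labelled nodes, hash tables, binary
    ;; trees as conses, ...) would just be another class with its own
    ;; TREE-CHILDREN method; the traversal code would not change.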