<Bike>
sb-ext defines a string-to-octets function and doesn't want to let you redefine it.
<asarch>
What should I do?
<no-defun-allowed>
something went very very wrong there
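One way to sidestep this, as a minimal sketch (the code that actually tripped SBCL's package lock isn't shown in the log): define the function in your own package instead of redefining the locked SB-EXT symbol. SB-EXT:UNLOCK-PACKAGE exists, but it is rarely the right answer.

    (defpackage #:my-octets (:use #:cl))
    (in-package #:my-octets)

    ;; Call the SBCL extension under a name in our own package rather than
    ;; trying to redefine the locked SB-EXT:STRING-TO-OCTETS.
    (defun string-to-octets (string)
      #+sbcl (sb-ext:string-to-octets string :external-format :utf-8)
      #-sbcl (map '(vector (unsigned-byte 8)) #'char-code string)) ; naive ASCII-only fallback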
Bike has quit [Quit: Lost terminal]
Kundry_Wag has joined #lisp
<asarch>
If I already have installed some packages with QuickLisp and SBCL, is it ok to load other packages with CLISP? I mean, both actually save the package information in $HOME/.cache/common-lisp
dale has quit [Quit: dale]
<no-defun-allowed>
you'll be good, only cached fasls live there
Kundry_Wag has quit [Ping timeout: 240 seconds]
<no-defun-allowed>
iirc they sit in something like sbcl-1.X.Y inside there
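For reference, that cache is ASDF's output-translations tree, and each implementation (and version) gets its own subdirectory, so SBCL and CLISP fasls never collide. Assuming a recent ASDF/UIOP, you can ask where a given file's fasl would land (the path below is just an example):

    ;; The cache root, typically #P"/home/you/.cache/common-lisp/":
    (uiop:xdg-cache-home "common-lisp/")

    ;; Where the fasl for a source file goes under the current implementation,
    ;; e.g. .../common-lisp/sbcl-1.4.x-linux-x64/... on SBCL:
    (uiop:compile-file-pathname* "/home/you/project/foo.lisp")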
mathrick has joined #lisp
Kundry_Wag has joined #lisp
Kundry_Wag has quit [Read error: Connection reset by peer]
pierpal has quit [Read error: Connection reset by peer]
Kundry_Wag has joined #lisp
kajo has quit [Ping timeout: 240 seconds]
Kundry_Wag has quit [Ping timeout: 244 seconds]
kajo has joined #lisp
slyrus has quit [Quit: slyrus]
slyrus1 is now known as slyrus
iomonad has quit [Ping timeout: 272 seconds]
<rtypo>
1/disconnec
rtypo has quit [Quit: WeeChat 2.2]
Kundry_Wag has joined #lisp
impulse has joined #lisp
Kundry_Wag has quit [Ping timeout: 252 seconds]
<asarch>
Thank you
<asarch>
Thank you very much
<asarch>
Let's try with CLISP!
Kundry_Wag has joined #lisp
<MichaelRaskin>
Xach: jcowan: I think there was a talk on ELS about automating use of rename-package — it was described as a mostly sufficient escape hatch in practice.
Arcaelyx has quit [Read error: Connection reset by peer]
Kundry_Wag has quit [Ping timeout: 252 seconds]
Kundry_Wag has joined #lisp
<MichaelRaskin>
I am not sure I understand the question; maybe ',(wrap-1 macro-arg)
Kundry_Wag has quit [Ping timeout: 240 seconds]
<beach>
I am afraid that doesn't work. Unknown variable macro-arg.
<mange>
I'm also not sure I understand the question, but ',(wrap-1 'macro-arg) would be closer.
<mange>
I'm not sure if it needs to be ',,(wrap-1 'macro-arg) though.
<beach>
What you first suggested does not give the same result.
<nirved>
shouldn't it be ,@form ?
<beach>
mange: Your second one gives a "comma not inside backquote". If I replace the ' by `, then the result is not the same.
<beach>
nirved: Why?
<nirved>
beach: nvm
Kundry_Wag has joined #lisp
zxcvz has joined #lisp
orivej has joined #lisp
mingus has quit [Read error: Connection reset by peer]
mingus has joined #lisp
Kundry_Wag has quit [Ping timeout: 246 seconds]
cobax has joined #lisp
Arcaelyx has joined #lisp
<mange>
I'm not sure it will be possible. You need to pass a value into #'wrap-1 that will unquote itself in the expansion. Can you change wrap-1?
<mange>
Or, alternatively, can you rely on wrap-1 to always do a simple wrapping like it currently does?
<beach>
I don't want to change wrap-1.
shrdlu68 has joined #lisp
<beach>
These are condensed examples of something much more complex, so in reality wrap-1 does much more and I want to reuse it if possible.
<MichaelRaskin>
If wrap-1 is given as a function, it is indistinguishable from (defun wrap-1 (form) (list 'bar form))
<mange>
Can you make a wrap-1 macro? Or make a macro that just calls wrap-1 immediately?
astalla has joined #lisp
<no-defun-allowed>
how should i go about writing an async event loop?
<no-defun-allowed>
i want to write an async client for cl-decentralise which will register listener functions on cl-d channels whenever a certain message is received
<no-defun-allowed>
*whenever a message on that channel is received
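Nobody picks this up in the log, but one common shape for such a loop, as a hedged sketch (not cl-decentralise's actual API; *LISTENERS*, *QUEUE*, and REGISTER-LISTENER are made-up names), is a dispatch table of channel -> listener functions drained by a worker thread:

    (defvar *listeners* (make-hash-table :test #'equal))
    (defvar *queue* (lparallel.queue:make-queue))   ; thread-safe blocking queue

    (defun register-listener (channel function)
      (push function (gethash channel *listeners*)))

    (defun dispatch-loop ()
      ;; Pop (channel . message) pairs and fan them out to the listeners.
      (loop for (channel . message) = (lparallel.queue:pop-queue *queue*)
            do (dolist (listener (gethash channel *listeners*))
                 (funcall listener message))))

    ;; The network-reading thread just pushes incoming messages:
    ;;   (lparallel.queue:push-queue (cons channel message) *queue*)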
<jackdaniel>
beach: I don't understand this question
<jackdaniel>
what do you mean by "remains the same"?
Balooga_ has quit [Quit: Balooga_]
Kundry_Wag has joined #lisp
<beach>
I want to write a function wrap-3 so that if I type (wrap-3 'some-form) I get the same result as if I type (wrap-2 'some-form), but instead of mentioning BAR explicitly in the body of the function, I want wrap-3 to call wrap-1 to obtain the same result.
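For readers of the log, a guess at the condensed example being discussed (the names are inferred from the conversation; the real wrap-1 is said to do much more):

    (defun wrap-1 (form)            ; wraps a form in BAR
      (list 'bar form))

    (defun wrap-2 (form)            ; mentions BAR explicitly in the template
      `(macrolet ((foo (macro-arg) `(bar ,macro-arg)))
         ,form))

    ;; Wanted: a WRAP-3 that returns exactly what WRAP-2 returns, but that
    ;; obtains the (bar ...) template by calling WRAP-1 instead of spelling
    ;; out BAR itself.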
<MichaelRaskin>
How portable do you want that to be?
<jackdaniel>
thank you
<beach>
MichaelRaskin: What? What are you hinting?
<MichaelRaskin>
Because ,(list (first '`a) (wrap-1 (second '`,macro-arg))) happens to work on SBCL
Kundry_Wag has quit [Ping timeout: 252 seconds]
<beach>
What is a?
<MichaelRaskin>
Random symbol
<MichaelRaskin>
Only needed for introspection of how ` works
<beach>
Oh, I see.
<MichaelRaskin>
clisp seems to accept that abomination, too
<beach>
Well, nice try, but I don't think I'll go with it. :)
scymtym has joined #lisp
<beach>
Anyway, thanks everyone. It appears that any solution will be more complicated than just repeating the body of the wrap-1 function inside wrap-3.
<MichaelRaskin>
You can also just put a wrap-1 call there, which will give a different expansion but the same functionality
<mange>
If you're willing to have wrap-1 be (defun wrap-1 (form) ``(bar ,,form)) then I think you can do it, but it will mean that other calls need to have an extra quote.
<MichaelRaskin>
But yeah, the implementation is free to treat `form as _any_ form that evaluates to the correct result
<jackdaniel>
mange: as I understand it wrap-1 may have arbitrary expansion, this is just an example
<mange>
jackdaniel: Yeah, but the approach of "add an extra layer of quasiquotes" may be able to be applied more generally.
<beach>
mange: I'll think about that.
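A sketch of what mange's suggestion would look like (whether the result is structurally identical to wrap-2's depends on the implementation's backquote representation, as MichaelRaskin notes above):

    (defun wrap-1 (form)
      ``(bar ,,form))               ; now returns a backquote template

    (defun wrap-3 (form)
      `(macrolet ((foo (macro-arg) ,(wrap-1 'macro-arg)))
         ,form))

    ;; (wrap-3 'some-form) now yields the same MACROLET as wrap-2, at the
    ;; cost of every other caller of WRAP-1 needing an extra quote.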
Kundry_Wag has joined #lisp
<MichaelRaskin>
Actually, just putting unadorned (wrap-1 macro-arg) should be a viable strategy
orivej has quit [Ping timeout: 252 seconds]
Arcaelyx has quit [Read error: Connection reset by peer]
<beach>
I don't see it.
<beach>
mange: So how would the call look in that case?
Kundry_Wag has quit [Ping timeout: 245 seconds]
<MichaelRaskin>
Well, local macro will try to expand and call wrap-1 directly
<beach>
MichaelRaskin: I'm lost.
<MichaelRaskin>
Well, this macrolet is generated to use foo macro inside form, right?
<beach>
Yes.
<MichaelRaskin>
What you are asking is how to inline wrap-1 inside wrap-3
<MichaelRaskin>
Instead, you could just do a call to wrap-1 instead of inlining
<beach>
Not just that. I think I can do it if the expansion of wrap-3 is allowed to contain a call to wrap-1. But I do want the immediate output of wrap-3 to be the same as that of wrap-2.
<jackdaniel>
(list 'foo boo) is the same as `(foo ,boo) ; no?
<MichaelRaskin>
I think it is provably impossible without modifying wrap-1. There are multiple possible read results of `, and if you want your code to look the same as if ` was written, the input-output relation of wrap-1 is not enough
<MichaelRaskin>
For CCL it is not even a theoretical concern
<beach>
jackdaniel: Doesn't that assume that there is no nesting inside the result of wrap-1?
<jackdaniel>
hm, maptree in that case
<mange>
You could rewrite the logic of quasiquote to make that work, but it's not fun.
orivej has joined #lisp
<mange>
Although I guess this is a restricted case that is easier.
<mange>
Splicing is the real pain, so you do nicely avoid that.
<jackdaniel>
yes, such assumption is in this snippet
Kundry_Wag has joined #lisp
<beach>
Anyway, I have several ideas now. Thanks to everyone. Time to go do something else for a while.
frgo has joined #lisp
<no-defun-allowed>
bye
<no-defun-allowed>
have fun beach
Kundry_Wag has quit [Ping timeout: 264 seconds]
astalla has quit [Ping timeout: 252 seconds]
Arcaelyx has joined #lisp
nowhere_man has joined #lisp
<shka_>
good day
Kundry_Wag has joined #lisp
Arcaelyx has quit [Read error: Connection reset by peer]
<no-defun-allowed>
hi shka_
<shrdlu68>
shka_: Hello
Kundry_Wag has quit [Ping timeout: 272 seconds]
mange has quit [Remote host closed the connection]
Arcaelyx has joined #lisp
Kundry_Wag has joined #lisp
Kundry_Wag has quit [Ping timeout: 240 seconds]
graphene has quit [Remote host closed the connection]
graphene has joined #lisp
trittweiler has quit [Remote host closed the connection]
sixbitslacker has joined #lisp
Kundry_Wag has joined #lisp
Kundry_Wag has quit [Ping timeout: 240 seconds]
zfree has joined #lisp
oni-on-ion has quit [Ping timeout: 245 seconds]
trittweiler has joined #lisp
Kundry_Wag has joined #lisp
shrdlu68 has quit [Ping timeout: 252 seconds]
Arcaelyx has quit [Read error: Connection reset by peer]
graphene has quit [Read error: Connection reset by peer]
Kundry_Wag has quit [Ping timeout: 240 seconds]
Kundry_Wag has joined #lisp
graphene has joined #lisp
Kundry_Wag has quit [Ping timeout: 240 seconds]
Kundry_Wag has joined #lisp
fikka has joined #lisp
Arcaelyx has joined #lisp
Kundry_Wag has quit [Ping timeout: 272 seconds]
<beach>
Hmm. CALL-METHOD and MAKE-METHOD are some of the most complicated macros (or rather forms that wrap some other forms in those macros) I have ever attempted to write. In case anybody wondered, that's what the exercise was about.
<shka_>
hm
<shka_>
interesting
Arcaelyx has quit [Read error: Connection reset by peer]
cl-arthur has quit [Ping timeout: 272 seconds]
Arcaelyx has joined #lisp
marvin2 has quit [Ping timeout: 250 seconds]
adam4567 has joined #lisp
Kundry_Wag has joined #lisp
Kundry_Wag has quit [Read error: Connection reset by peer]
smokeink has quit [Remote host closed the connection]
smokeink has joined #lisp
smokeink has quit [Remote host closed the connection]
cl-arthur has joined #lisp
smokeink has joined #lisp
smokeink has quit [Remote host closed the connection]
<no-defun-allowed>
i finished my channel implementation for cl-decentralise which is nice
<j`ey>
beach: does cleavir provide all of the CLHS?
Arcaelyx has quit [Read error: Connection reset by peer]
Kundry_Wag has joined #lisp
graphene has quit [Remote host closed the connection]
graphene has joined #lisp
Kundry_Wag has quit [Read error: Connection reset by peer]
zfree has quit [Read error: Connection reset by peer]
zfree has joined #lisp
Arcaelyx has joined #lisp
acolarh has quit [Ping timeout: 246 seconds]
acolarh has joined #lisp
Kundry_Wag has joined #lisp
Kundry_Wag has quit [Ping timeout: 245 seconds]
<beach>
j`ey: Er, no. It doesn't provide anything except code to compile any Common Lisp form.
<j`ey>
Oh
<beach>
SICL, on the other hand has as a goal to be a fully conforming Common Lisp implementation. But it is not finished yet, so there is still some code missing.
<j`ey>
beach: does Cleavir have textual output? Do you have a really basic example I could look at
Kundry_Wag has joined #lisp
<beach>
What kind of textual output are you referring to? Currently it generates a graph of intermediate code that client systems can then translate to LLVM or assembler or whatever.
<no-defun-allowed>
a(n optional) legend might help make sense of the graph
<beach>
Read or write a datum.
<no-defun-allowed>
i see
<beach>
no-defun-allowed: Come on. This is a picture of a tool to view IR with. The person using that tool knows perfectly well what it means.
<no-defun-allowed>
fair enough
<beach>
no-defun-allowed: I wasn't about to make a special picture for j`ey with a legend in it.
<jackdaniel>
I expect that the guy with a cigarette in the photo knows perfectly well what these sheets and lines mean :)
<no-defun-allowed>
never mind then
<no-defun-allowed>
it's just a little more interesting and complicated than LLVM IR graphs
<j`ey>
beach: I wouldn't expect it!
<beach>
Good.
<no-defun-allowed>
oh i got it now
<shrdlu68>
I'm trying to optimize some code which subseqs a simple-bit-vector a lot, and it appears that subseq'ing conses much less than using displaced bit vectors.
<beach>
If the bit-vectors are short, then that is plausible.
<beach>
A displaced vector must set up a lot of information that needs to be stored somewhere.
Kundry_Wag has joined #lisp
<beach>
j`ey: Let me know if you have any other questions.
<Shinmera>
small bit vectors can be like two words
<j`ey>
beach: I think I just need to read a bit more about CL compilation in general
<shrdlu68>
The bit-vectors are of length 1-240.
<shka_>
shrdlu68: in your case, bit-vectors are a bad fit
<shka_>
just go for integers
<Shinmera>
right, so the contents fit into a word, meaning the overhead of a copy is going to be very small
<beach>
j`ey: I think you may have a hard time finding that kind of information. There is Lisp in Small Pieces, but you won't find IR graphs and stuff in it.
<Shinmera>
but as shka_ mentions, just using ldb and integers is probably even better
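A sketch of the integers-plus-LDB idea (BIT-FIELD is a made-up name): a bit "substring" of an integer is just an LDB call, and as long as the field fits in a fixnum it allocates nothing, whereas SUBSEQ makes a fresh vector every time.

    (defun bit-field (n position width)
      ;; Extract WIDTH bits of N starting at POSITION (bit 0 is the least
      ;; significant); non-consing while the result is a fixnum.
      (ldb (byte width position) n))

    ;; (bit-field #b101100 2 3) => 3, i.e. #b011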
m00natic has joined #lisp
<shrdlu68>
shka_: I'm keeping that in mind; eventually I will try it. Right now I'm trying a scheme where I sxhash the substrings, converting them to fixnums.
<j`ey>
beach: what I was really thinking about was macros from CLHS, which I assume cleavir does have to implement
<beach>
Nope, they are supplied by the client.
<shka_>
that won't be super fast
<shka_>
but you may try it
* no-defun-allowed
looks for a copy of LiSP
<beach>
j`ey: Which is fortunate, because there is no standardized expansion of standard macros.
<no-defun-allowed>
ah yes, a random server with 40mb pdf! very reliable.
Kundry_Wag has quit [Ping timeout: 246 seconds]
Kundry_Wag has joined #lisp
marvin2 has joined #lisp
esrse has quit [Ping timeout: 246 seconds]
<shrdlu68>
shka_: Indeed it isn't. Using (mod (sxhash substring) (expt 2 24)) as the indices of an array, it's about two seconds faster than the hash-table implementation. It consumes much less memory, though.
<beach>
j`ey: Cleavir sees a macro in the current environment. It calls the macro function, giving it the form and the environment. It then compiles the resulting form instead.
<beach>
j`ey: That's all Cleavir needs to do about macros.
<shka_>
shrdlu68: well, 32 is not a random number of children in the node, though
<shka_>
you can apply bitmask compression to it
<shka_>
it should get better this way
<beach>
j`ey: I don't remember the defmacro from yesterday. Compilation takes place in an environment that the client defines. That environment must contain definitions of every macro that is used in the code to be compiled.
<shka_>
shrdlu68: you have ldb, logcount and the world is yours
<shka_>
:-)
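A sketch of the bitmap-compression trick shka_ is alluding to, as used in HAMTs (the node layout here is hypothetical): store only the children that exist plus a 32-bit occupancy bitmap, and use LOGCOUNT of the bits below a slot to index the packed child vector.

    (defun child-present-p (bitmap slot)
      (logbitp slot bitmap))

    (defun child-index (bitmap slot)
      ;; Number of occupied slots below SLOT = index into the packed child vector.
      (logcount (ldb (byte slot 0) bitmap)))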
fikka has quit [Ping timeout: 240 seconds]
<shrdlu68>
shka_: (disregarding collisions for now)
<beach>
j`ey: You can do the following experiment. In a SLIME REPL, type (macro-function 'with-output-to-string)
<beach>
j`ey: The client (SBCL or whatever) already has a definition of that macro. Cleavir just works with that.
Kundry_Wag has quit [Ping timeout: 252 seconds]
<beach>
j`ey: Or try this: (funcall (macro-function 'when) '(when x y z) nil)
<beach>
j`ey: That is basically what Cleavir does.
<beach>
j`ey: Then it compiles the IF instead. Now IF it has to know how to compile, because that's a special operator.
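For reference, the second experiment at the REPL; the exact expansion is implementation-defined, but SBCL prints something like this:

    (funcall (macro-function 'when) '(when x y z) nil)
    ;; => (IF X (PROGN Y Z))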
fikka has joined #lisp
Arcaelyx has quit [Read error: Connection reset by peer]
<j`ey>
beach: ok, so the client will setup an environment that contains all the macros that CLHS has declared?
<beach>
Correct. And all the macros it needs for the code to be compiled.
orivej has quit [Ping timeout: 250 seconds]
<j`ey>
still a little unclear how defmacro works. cleavir seems a (defmacro blah..) and calls the defmacro macro from the environment. does that update the environment to include the 'blah' macro?
makomo has joined #lisp
<beach>
I can't understand the "cleavir seems a ..." part. But yes, DEFMACRO updates the environment so that it includes the definition of that macro.
<beach>
Then it is not a conforming Common Lisp implementation.
<j`ey>
'JSCL is and will be a subset of Common Lisp.' :(
Arcaelyx has joined #lisp
FreeBirdLjj has joined #lisp
FreeBirdLjj has quit [Ping timeout: 252 seconds]
Kundry_Wag has joined #lisp
Kundry_Wag has quit [Ping timeout: 240 seconds]
kooga has joined #lisp
Arcaelyx has quit [Read error: Connection reset by peer]
quipa has joined #lisp
dddddd has joined #lisp
Arcaelyx has joined #lisp
pierpal has joined #lisp
beach has quit [Ping timeout: 250 seconds]
Kundry_Wag has joined #lisp
Kundry_Wag has quit [Ping timeout: 245 seconds]
Arcaelyx has quit [Read error: Connection reset by peer]
quipa has quit [Read error: Connection reset by peer]
quipa has joined #lisp
orivej has joined #lisp
Arcaelyx has joined #lisp
gector has quit [Read error: Connection reset by peer]
beach has joined #lisp
gector has joined #lisp
orivej has quit [Ping timeout: 245 seconds]
<beach>
So I think I am incapable of writing and debugging these MAKE-METHOD and CALL-METHOD wrappers without also having a version of macroexpand-all available.
SenasOzys has joined #lisp
makomo has quit [Read error: Connection reset by peer]
Kundry_Wag has joined #lisp
Arcaelyx has quit [Read error: Connection reset by peer]
<MichaelRaskin>
(shameless plug) beach: well, there is agnostic-lizard:macroexpand-all
<beach>
Yes, I know. It was talked about yesterday.
Kundry_Wag has quit [Ping timeout: 240 seconds]
<beach>
Let me take this opportunity to ask how you handle that special case...
Kundry_Wag has joined #lisp
<beach>
I assume you use your own representation of lexical environments, right?
<beach>
Or do you adapt to those of the client?
Arcaelyx has joined #lisp
<MichaelRaskin>
That's true; as long as you start from the top-level/null lexical environment, I only use my own implementation of lexical environments
<MichaelRaskin>
Which is, to be honest, quite limited, because I only need a rough idea of what-means-what.
<beach>
So how do you deal with the possibility of the client version of MACROEXPAND being called with one of your environments?
<MichaelRaskin>
The client version of macroexpand cannot be called with one of my environments.
<beach>
Oh?
<beach>
Why not?
<MichaelRaskin>
Because my implementation's type is a class I define. And macroexpand wants an implementation-specific environment object.
<beach>
It suffices to have a macro whose expander calls MACROEXPAND on its &ENVIRONMENT be expanded in one of your lexical environments.
iomonad has left #lisp ["[1] 28823 segmentation fault (core dumped) weechat"]
iomonad has joined #lisp
Kundry_Wag has quit [Ping timeout: 240 seconds]
<MichaelRaskin>
If you want to macroexpand-all in a complicated lexical environment, you need one of a few tricks.
<beach>
So you are not handling that case?
makomo has joined #lisp
<MichaelRaskin>
I am handling it best-effort as the standard doesn't give me enough tools
<MichaelRaskin>
Option 1: macro-based macroexpand-all is sometimes enough.
<MichaelRaskin>
Option 2: I can take the lexical environment object and a list of names I should check for macro definitions in that object.
<MichaelRaskin>
Option 3: if the local macros are used in a simple enough way, my heuristics might be enough.
<beach>
OK, I'm lost again. I don't know what a "macro-based macroexpand-all" is. And I don't know whether by "I can ..." you mean that this is something you could implement in the future, something I need to do in order to use your tool, or something else.
<MichaelRaskin>
Let me look up the API names…
<beach>
I am sorry, I seem to be having this problem of understanding what is said in IRC. You need to be more explicit for me to understand.
<MichaelRaskin>
Well, I did omit a ton of details, that's true.
<beach>
So is the answer to my question "It sometimes works and sometimes doesn't"?
<MichaelRaskin>
macro-macroexpand-all on a form will replace the form with some weird code that returns the full macroexpansion and should always be enough to take into account the local lexical environment
<beach>
OK, so it always works.
<MichaelRaskin>
(macroexpand-all form environment :names names) should also always work as long as all local names are listed in «names» parameter (listing extra is not a problem)
<MichaelRaskin>
Of course, it is possible you find a bug, in this case I will thank you and upload an updated version.
<beach>
OK.
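The principle behind a "macro-based" expander, as a sketch (not agnostic-lizard's actual code): a macro receives the local lexical environment through &ENVIRONMENT, so it can expand forms inside a MACROLET scope that no ordinary function call could see. A real macroexpand-all also walks subforms; only the environment capture is shown here.

    (defmacro expand-here (form &environment env)
      ;; Expand FORM in the lexical environment at the point of use and
      ;; return the expansion as a quoted datum.
      `',(macroexpand form env))

    ;; (macrolet ((twice (x) `(* 2 ,x)))
    ;;   (expand-here (twice 3)))     ; => (* 2 3)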
mkolenda has quit [Remote host closed the connection]
<beach>
I think I'll go with Cleavir anyway, because I need to expand macros that I defined in one of my first-class global environments. And I can't do it in the host, because they are macros that clash with those of the host, like DEFMETHOD.
mkolenda has joined #lisp
<beach>
In fact, I think I wrote a macroexpand-all at some point. It might have bitrotted.
nsrahmad has joined #lisp
<MichaelRaskin>
Well, yes, if you control the environment implementation it makes a lot more sense to use it.
cl-arthur has quit [Ping timeout: 244 seconds]
<beach>
Yes, and in this case, like I said, I pretty much have to. Unless I rename all my macros just for the purpose of testing them.
<MichaelRaskin>
I think compilation should effectively contain macroexpand-all
<beach>
Yes, it does.
<beach>
It returns an AST.
<beach>
But the AST might be tough to understand.
<beach>
... unless I write an AST visualizer like I did for HIR.
<beach>
That might be the way to go, actually.
<beach>
Such a tool is needed anyway.
<MichaelRaskin>
AST-to-src could be enough in this specific case
<MichaelRaskin>
(which is also a useful tool)
cl-arthur has joined #lisp
cl-arthur has quit [Client Quit]
cl-arthur has joined #lisp
<beach>
True. That one might have bitrotted as well.
Bike has joined #lisp
schweers has joined #lisp
Kundry_Wag has joined #lisp
pierpal has quit [Quit: Poof]
pierpal has joined #lisp
Kundry_Wag has quit [Ping timeout: 264 seconds]
<beach>
It seems to work still.
<Bike>
cleavir ast graphviz should work too
<Bike>
was usually harder for me to understand than ir, though, because it's based on an expression mostly-but-not-actually-a-tree instead of a cfg
fikka has quit [Ping timeout: 252 seconds]
Arcaelyx has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<beach>
I see.
nowhere_man has quit [Ping timeout: 272 seconds]
<beach>
The output of AST-to-source is difficult to parse, even for a very small expression.
<beach>
I started with (loop for i from 0 to 10 collect i) and that was a big mistake.
<beach>
150 lines of code from AST-to-source. :)
fikka has joined #lisp
<heisig>
The example is nice though, it has blocks, bindings, tagbody, conditionals, type annotations and mutation of local variables. But it is definitely not a small expression :)
<beach>
Right you are.
<beach>
The question I am asking myself now is whether an AST viewer would make it easier to understand.
<beach>
There are several obstacles: There is generous nesting of PROGNs.
<beach>
LET is expanded to calls to a local function.
<beach>
There are plenty of LOAD-TIME-VALUEs (at least in SICL code) for fetching global function cells.
<beach>
It might be easier to work on macroexpand-all after all.
<heisig>
Another possibility would be to introduce a pattern-based simplifier into AST-to-source, e.g., to eliminate superfluous PROGNs, convert direct lambda calls to LETs and so on.
<heisig>
Not sure whether that is worth trying.
<beach>
So many possibilities.
<beach>
All these tools (existing and suggested) could be useful, so they should all be written. :)
<beach>
Let me start with macroexpand-all. I think it may be trivial, given that we have GENERATE-AST.
<Bike>
using generate-ast for things other than generating asts is going to be some work
nsrahmad has quit [Remote host closed the connection]
<pfdietz>
and put it in your quicklisp/local-projects
DGASAU has joined #lisp
<dim>
pfdietz: pgloader build process should check out directly from QL, so I'm surprised; what might have happened is that the OP had a cache of quicklisp from a previous pgloader build
<dim>
also I don't have this problem, my pgloader users do
<dim>
I know nothing of the env where they try to use it, they know nothing of CL and QL and other bits, and it's all fine, usually
jkordani_ has joined #lisp
Arcaelyx has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Kundry_Wag has joined #lisp
mindCrime_ has joined #lisp
cl-arthur has quit [Quit: Lost terminal]
Kundry_Wag has quit [Ping timeout: 246 seconds]
Kundry_Wag has joined #lisp
Arcaelyx has joined #lisp
vlatkoB has quit [Read error: No route to host]
dyelar has joined #lisp
vlatkoB has joined #lisp
Kundry_Wag has quit [Ping timeout: 240 seconds]
xrash has joined #lisp
dmiles has quit [Read error: Connection reset by peer]
Kundry_Wag has joined #lisp
dale_ has joined #lisp
Kundry_Wag has quit [Read error: Connection reset by peer]
dale_ is now known as dale
Kundry_Wag has joined #lisp
<pfdietz>
The problem here is that a widely used library is depending on unexported internal details of sbcl. Perhaps this means sbcl should have some sort of exported interface that the library could use instead, but sbcl developers are always going to feel free to change things others don't have license to depend on.
<pfdietz>
Any time one builds something that depends on unexported things, one is living in sin, at least to some extent.
dmiles has joined #lisp
graphene has quit [Remote host closed the connection]
astalla has joined #lisp
Kundry_Wag has quit [Ping timeout: 252 seconds]
graphene has joined #lisp
eschulte has joined #lisp
<Shinmera>
I am a sinner
equwal has quit [Remote host closed the connection]
equwal has joined #lisp
FreeBirdLjj has joined #lisp
Arcaelyx has quit [Read error: Connection reset by peer]
Arcaelyx_ has joined #lisp
<jackdaniel>
every time you use :: god kills a kitten.
* j`ey
writes lots of C++ with :: :(
<beach>
Bike: Oh, that's not what I meant. I meant that I could copy generate-ast and modify it to expand instead.
Kaisyu has quit [Quit: Connection closed for inactivity]
<Bike>
oh.
equwal has quit [Remote host closed the connection]
equwal has joined #lisp
equwal has quit [Remote host closed the connection]
suskeyhose has joined #lisp
jkordani has quit [Quit: Leaving]
heisig has quit [Quit: Leaving]
equwal has joined #lisp
<shrdlu68>
How do I inline a (defun (setf foo) ...)
<Bike>
like that.
<Bike>
the first parameter is the new value.
<Bike>
the remaining parameters are what appears after foo in the setf.
Arcaelyx_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
lavaflow_ has quit [Read error: Connection reset by peer]
warweasle has joined #lisp
<shrdlu68>
thanks.
<pfdietz>
Also, you declare it inline right before the defun.
<pfdietz>
(declaim (inline (setf foo)))
astalla has quit [Ping timeout: 252 seconds]
<pfdietz>
If you want to selectively inline it later, you declaim notinline right after the defun, then at local uses you can locally declare it inline.
equwal has quit [Remote host closed the connection]
<pfdietz>
The initial inline declaim causes the compiler to store the source for the later selective inlining.
<shrdlu68>
Got it.
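Putting pfdietz's recipe together, as a sketch (FOO is a made-up accessor on a one-element vector):

    (declaim (inline (setf foo)))          ; declaim INLINE before the DEFUN
    (defun (setf foo) (new-value box)      ; the first parameter is the new value
      (setf (svref box 0) new-value))
    (declaim (notinline (setf foo)))       ; then NOTINLINE, for selective inlining

    (defun hot-loop (box n)
      (declare (inline (setf foo)))        ; locally request inlining here
      (dotimes (i n box)
        (setf (foo box) i)))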
scymtym has quit [Ping timeout: 240 seconds]
lavaflow_ has joined #lisp
equwal has joined #lisp
equwal has quit [Remote host closed the connection]
equwal has joined #lisp
fikka has quit [Ping timeout: 246 seconds]
Arcaelyx has joined #lisp
equwal has quit [Remote host closed the connection]
equwal has joined #lisp
equwal has quit [Remote host closed the connection]
equwal has joined #lisp
Kundry_Wag has joined #lisp
equwal has quit [Remote host closed the connection]
fikka has joined #lisp
equwal has joined #lisp
equwal has quit [Remote host closed the connection]
equwal has joined #lisp
equwal has quit [Remote host closed the connection]
<shrdlu68>
First macro I've written in a while: (defmacro post-incf (var) `(prog1 ,var (incf ,var)))
equwal has joined #lisp
equwal has quit [Remote host closed the connection]
Kundry_Wag has quit [Ping timeout: 244 seconds]
equwal has joined #lisp
zfree has quit [Quit: zfree]
fikka has quit [Ping timeout: 240 seconds]
<trittweiler>
shrdlu68, (shiftf x (1+ x))
<pfdietz>
That macro will fail if the place has side effects. (post-incf (aref x (incf i)))
<j`ey>
the classic C macro problem
<pfdietz>
There's infrastructure in the CL standard for solving this problem, since it has to be solved for various builtin macros on places.
fikka has joined #lisp
<Bike>
`(1- (incf ,var)), obvs
mingus has quit [Ping timeout: 252 seconds]
<dlowe>
you might want (defmacro post-incf (var &optional amt) ...)
oni-on-ion has joined #lisp
<dlowe>
I don't think side-effects matter here. incf won't be happy with a non-place.
Kundry_Wag has joined #lisp
groovy2shoes has quit [Quit: moritura te salutat]
<shrdlu68>
dlowe: Yeah, this is just a simple 1+
<pfdietz>
See the example I gave: the place expression can have side effecting subexpressions.
fikka has quit [Ping timeout: 240 seconds]
lavaflow_ has quit [Ping timeout: 246 seconds]
<dlowe>
pfdietz: yeah. ugly.
Chream has joined #lisp
lavaflow_ has joined #lisp
<pfdietz>
One also has to make sure the subexpressions of the place continue to be executed in left-to-right order. ansi-tests checked for all this.
Kundry_Wag has quit [Ping timeout: 246 seconds]
<shrdlu68>
pfdietz: Hmm, shiftf suffers from the same shortcoming. Is there a way of avoiding that?
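Yes: the infrastructure pfdietz mentions is GET-SETF-EXPANSION, which hands you temporaries so that every subform of the place is evaluated exactly once, left to right. A sketch of POST-INCF built on it (it assumes a single store variable):

    (defmacro post-incf (place &optional (delta 1) &environment env)
      (multiple-value-bind (temps vals stores writer reader)
          (get-setf-expansion place env)
        (let ((old (gensym "OLD")))
          `(let* (,@(mapcar #'list temps vals)    ; evaluate subforms once
                  (,old ,reader)                  ; remember the old value
                  (,(first stores) (+ ,old ,delta)))
             ,writer
             ,old))))

    ;; (post-incf (aref x (incf i))) now evaluates (incf i) exactly once.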
graphene has quit [Remote host closed the connection]
Kundry_Wag has joined #lisp
rippa has joined #lisp
graphene has joined #lisp
lavaflow_ has quit [Read error: No route to host]
Kundry_Wag has quit [Ping timeout: 244 seconds]
lavaflow_ has joined #lisp
rixard has joined #lisp
fikka has joined #lisp
Kundry_Wag has joined #lisp
shrdlu68 has quit [Ping timeout: 245 seconds]
Kundry_Wag has quit [Read error: Connection reset by peer]
rixard has quit [Quit: rixard]
<beach>
I believe I have a first version of MACROEXPAND-ALL working. It takes a cleavir environment that it operates in.
Chream has quit [Remote host closed the connection]
<beach>
Tomorrow, I will use it to test my implementations of MAKE-METHOD and CALL-METHOD.
cage_ has joined #lisp
Jesin has quit [Quit: Leaving]
<makomo>
pfdietz: do the subforms of a place *have* to be evaluated left-to-right? i thought that was just a nice property/convention, but that it wasn't required for user-defined places
<makomo>
pfdietz: for example, i wrote an IFF place, which conditionally writes/reads one of the two places you provide to it as arguments, along with a condition form
nika has joined #lisp
<Bike>
' The evaluation ordering of subforms within a place is determined by the order specified by the second value returned by get-setf-expansion. For all places defined by this specification (e.g., getf, ldb, ...), this order of evaluation is left-to-right. '
<Bike>
i guess that kind of implies having one that doesn't is okay.
<makomo>
yup
<Bike>
if a bit confusing, but iff is already confusing, so it's probably fine
<makomo>
:-)
<makomo>
i was going to write condf and other conditional places by basing them on iff, but i have to make iff take an optional second place first
<makomo>
whenf, unlessf, etc. etc. :-D
fikka has quit [Ping timeout: 244 seconds]
<makomo>
but i went on to do something else and forgot
<Bike>
loopf is gonna be off the chain
fikka has joined #lisp
<makomo>
holy, good idea :D
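A guess at how such an IFF place could look, via DEFINE-SETF-EXPANDER (an editor's sketch, not makomo's actual code; note that it evaluates the subforms of both places regardless of the condition, which is exactly the left-to-right question discussed above):

    (define-setf-expander iff (condition then-place else-place &environment env)
      (multiple-value-bind (tt tv ts tw tr) (get-setf-expansion then-place env)
        (multiple-value-bind (et ev es ew er) (get-setf-expansion else-place env)
          (let ((test (gensym "TEST")) (new (gensym "NEW")))
            (values (append (list test) tt et)          ; temporaries
                    (append (list condition) tv ev)     ; their value forms
                    (list new)                          ; single store variable
                    `(if ,test
                         (let ((,(first ts) ,new)) ,tw ,new)
                         (let ((,(first es) ,new)) ,ew ,new))
                    `(if ,test ,tr ,er))))))

    ;; (setf (iff flag (car cell) (cdr cell)) 42) stores into the CAR when
    ;; FLAG is true and into the CDR otherwise; (incf (iff ...)) works too.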
FreeBirdLjj has quit [Remote host closed the connection]
Kundry_Wag has joined #lisp
meepdeew has joined #lisp
Kundry_Wag has quit [Ping timeout: 245 seconds]
FreeBirdLjj has joined #lisp
Arcaelyx has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Kundry_Wag has joined #lisp
scymtym has joined #lisp
akovalenko has quit [Quit: ERC (IRC client for Emacs 27.0.50)]
Kundry_Wag has quit [Ping timeout: 244 seconds]
<beach>
Oh, so less than 3 hours to write MACROEXPAND-ALL. Not too bad.
meepdeew has quit [Remote host closed the connection]
rumbler31 has joined #lisp
<slyrus1>
beach: with a UI that lets you expand/collapse subtrees? :)
<beach>
No, I just wrote an S-expression-based MACROEXPAND-ALL.
<beach>
I'll do the GUI AST visualizer some other time.
<slyrus1>
right. I was going to say "presumably the AST stuff makes it possible to write such a thing nicely"?
<beach>
Sure. I did it the easy way. I basically translated the code from generate-ast (by hand) to cover all the cases.
rumbler31 has quit [Remote host closed the connection]
Jesin has joined #lisp
meepdeew has joined #lisp
Kundry_Wag has joined #lisp
LiamH has joined #lisp
Kundry_Wag has quit [Ping timeout: 240 seconds]
groovy2shoes has joined #lisp
Kundry_Wag has joined #lisp
mason has joined #lisp
mingus has joined #lisp
zigpaw has quit [Remote host closed the connection]
schweers has quit [Ping timeout: 264 seconds]
Kundry_Wag has quit [Ping timeout: 240 seconds]
<Demosthenex>
simple testing for a noob. lisp-unit or prove?
<beach>
I have quit using testing frameworks.
<Shinmera>
parachute!
<beach>
They don't help me with my favorite testing technique it seems.
<beach>
Enumerating test cases is mostly not feasible for the kind of stuff I write.
<Demosthenex>
i'm just trying to confirm some edge cases in what i'm working on, and honestly i could just repeat things in the repl, but thought i should document it a bit better
<Demosthenex>
i'm not writing a suite
<beach>
Oh!
<beach>
I do.
<Demosthenex>
... maybe something like test driven devel? i'm trying to cover my bases
chipolux has quit [Quit: chipolux]
<beach>
But I don't use a framework.
chipolux has joined #lisp
<Demosthenex>
i thought i should try, in case it grows.
<beach>
An automatic test suite is a very good thing.
<beach>
It makes you feel very confident when you modify some code.
Kundry_Wag has joined #lisp
<Demosthenex>
Shinmera: i'll read up on parachute
<Shinmera>
Colleen: look up parachute
<Shinmera>
sigh, colleen broke again
<Demosthenex>
Shinmera: i already have the docs up ;]
<Demosthenex>
there's just a ton of frameworks on the wiki
meepdeew has quit [Remote host closed the connection]
Kundry_Wag has quit [Read error: Connection reset by peer]
Kundry_Wag has joined #lisp
<makomo>
beach: what's your favorite testing technique?
<beach>
makomo: When possible, I write "random tests", i.e. I generate huge numbers of test cases for my code. When applicable, I write two versions of my code, one "production" version, and one "trivial" version.
<beach>
makomo: They are supposed to behave the same way, but the "trivial" one is too slow for the final version.
<beach>
makomo: If the two agree, then I am very confident that they are both correct.
terpri has quit [Quit: Leaving]
<makomo>
beach: oh i see. how do you generate the random tests without basically solving the problem though? how do you bootstrap, i guess? :-)
quipa has quit [Remote host closed the connection]
<beach>
Well, this works best for "abstract data type" code.
<beach>
makomo: I just generate sequences of operations according to the API.
<Shinmera>
the lookup module got stuck somehow so now it was catching up on past commands
Kundry_Wag has quit [Ping timeout: 245 seconds]
<beach>
makomo: Then I check the tests against the code coverage. If there are places that have not been executed, I try to modify my random-test generator so that those are included.
emaczen has quit [Remote host closed the connection]
<makomo>
beach: hmm, i see how one could easily generate sequences of those operations, but how do you generate the results to test against, without already implementing the API? maybe i'm thinking about it the wrong way or something
Kundry_Wag has joined #lisp
orivej has quit [Ping timeout: 252 seconds]
<makomo>
or perhaps you rely on an already existing implementation of the same API (or a similar one)
<beach>
I do implement the API in two different ways.
<beach>
One "production" and one "trivial".
<makomo>
oh, i thought that was a separate thing. aha.
<makomo>
so you test the production version with the trivial one?
Kundry_Wag has quit [Read error: Connection reset by peer]
<beach>
Yes.
Kundry_Wag has joined #lisp
<beach>
There could be bugs in the trivial one as well of course, but the probability that it would be "the same" bug in both versions is infinitesimal.
<makomo>
so you're relying on the fact that the trivial version is as trivial as can be, and that you will probably easily catch the errors since the code is simple
<beach>
... since they are implemented in totally different ways.
<beach>
That, and what I just wrote.
<makomo>
mhm, neat.
<j`ey>
beach: that's interesting
<makomo>
so, (1) simple foundation and (2) redundancy :-)
<beach>
Right.
<beach>
But yeah, the "trivial" version is usually so simple that you can just look at it and be convinced that it is correct.
m00natic has quit [Ping timeout: 272 seconds]
<j`ey>
but then you have to maintain 2 version of the code!
shka_ has joined #lisp
<beach>
The "trivial" version stays the same unless the API changes, which should not be the case (at least not very often).
<shka_>
good evening
Kundry_Wag has quit [Ping timeout: 240 seconds]
<beach>
Hello shka_.
emaczen has joined #lisp
Arcaelyx has joined #lisp
igemnace has quit [Quit: WeeChat 2.2]
FreeBirdLjj has quit [Remote host closed the connection]
<pfdietz>
makomo: there are two commonly used techniques for generating random tests.
pierpal has quit [Quit: Poof]
<pfdietz>
The first is generative: have a grammar (explicit or implicit) and attach probabilities to different production rules.
Jesin has quit [Quit: Leaving]
pierpal has joined #lisp
<pfdietz>
The second is mutational: given a corpus of interesting inputs, generate new inputs by changing or combining them in various ways.
<pfdietz>
One can also add constraints to the inputs to try to bias them toward more interesting executions.
<pfdietz>
And if one can instrument the code under test then the inputs can be tweaked based on where in the code the executions go ("gray box fuzzing").
<beach>
pfdietz: I often need my generated operations to follow a Markov process. Unless I have a certain probability of generating long sequences of "the same" operation, my coverage won't be adequate.
Jesin has joined #lisp
<pfdietz>
Yes. There are games you can play with that, like "swarm testing", where you randomly prune down the set of production rules before each run of the test generator. Empirically this tends to find more bugs. I used this technique in the CL random test generator.
<beach>
Great!
<beach>
I just generate the tests with a trivial Markov state machine.
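A toy rendition of the technique (RANDOM-OPERATIONS and AGREE-P are invented for illustration): generate operation sequences with a bias toward repeating the previous operation, then check that a "production" and a "trivial" implementation agree at every step.

    (defun random-operations (n &key (repeat-probability 0.7))
      ;; Markov-ish generator: with some probability, repeat the last operation.
      (loop with ops = #(:push :pop :peek)
            for previous = nil then op
            for op = (if (and previous (< (random 1.0) repeat-probability))
                         previous
                         (aref ops (random (length ops))))
            repeat n
            collect (if (eq op :push) (list op (random 100)) (list op))))

    (defun agree-p (production trivial operations)
      ;; PRODUCTION and TRIVIAL are functions of (operation . arguments)
      ;; returning the observable result; they must agree on every step.
      (loop for (op . args) in operations
            always (equal (apply production op args)
                          (apply trivial op args))))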
<pfdietz>
The experience with random testing is, I think, that different test generators cover different parts of bug space, so for a really large system you want diversity.
<pfdietz>
For something small, not so important.
<beach>
That's why I don't use a test generator.
<beach>
I just generate the tests.
<pfdietz>
That's a test generator :)
<beach>
But I write it for each system.
<beach>
I started by saying why I don't use a testing framework.
<makomo>
pfdietz: interesting, but you do arrive at the same "problem" of having to already have a working version of what you want to test, right? i.e. once you generate the inputs (sequence of operations, etc.) you need to somehow generate the outputs (the actual solution to those inputs)
<pfdietz>
You identify properties the code should have, even if they may not fully specify what the code should do.
<pfdietz>
For example: the code probably shouldn't crash.
<beach>
makomo: I can assure you that the "working version" is trivial to write in many cases.
<pfdietz>
For the Lisp testing I was doing, the property was that (for conforming CL) the code should do the same thing if compiled w. various different declarations, or if eval-ed.
<makomo>
beach: mhm, but it's interesting how there's a fundamental circularity/bootstrapping problem in there. either you have an existing implementation or you use your brain (which is just another implementation) and write out a finite number of tests yourself
<pfdietz>
If your code is loaded with assertions, each of them is an opportunity for testing. No assertions should fail.
<beach>
makomo: OK, let me give you a concrete example: Cluffer.
<makomo>
mhm, i'm familiar with that project
<beach>
makomo: It's basically a two-level editable sequence.
<beach>
makomo: I can write a trivial version as (say) a list of lists.
Kundry_Wag has joined #lisp
<makomo>
pfdietz: true. how did you test that the two versions of code did the same thing? i suppose the stuff that was randomly generated were the declarations, not the code itself? the code was predetermined and you knew what to test for?
<beach>
makomo: but the purpose of the library is to be highly optimized.
<beach>
makomo: So the trivial version is, well, trivial, but slow.
<beach>
I can write it in a few minutes.
<makomo>
beach: yeah, i understand that. i get that it's a non-problem in practice, but fundamentally the ""problem"" is there
<pfdietz>
I randomly generated code and randomly generated inputs, and then checked that no errors were thrown (this was conforming code that would not throw errors) and that it had the same result on the same inputs.
<beach>
It takes less time to write the implementation of the trivial version than it takes to write the tests.
<beach>
makomo: You mean if I want to generalize my ideas into a "testing framework"?
<makomo>
pfdietz: oh i see, interesting
<beach>
makomo: I think I already expressed my feelings about that.
<beach>
makomo: I.e., that I think it is extremely hard to find abstractions for a testing framework.
<makomo>
beach: not quite. i just wanted to comment on the need of having to have an existing implementation in order to begin testing, no matter how trivial it is
<beach>
That trivial implementation is tiny compared to the markov-chain code.
<makomo>
beach: does Cleavir for example use this technique for testing?
<beach>
Consider it part of the amount of code that you need to write to test things.
<makomo>
i suppose Cluffer does since you mentioned it?
Kundry_Wag has quit [Ping timeout: 244 seconds]
<beach>
Yes, Cluffer is a good example.
<beach>
makomo: My main point here is that I don't believe in "testing frameworks" because I don't see how such a thing could capture even the most useful testing techniques that I know of.
Essadon has joined #lisp
fikka has quit [Ping timeout: 252 seconds]
<makomo>
beach: mhm
Arcaelyx has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
pjb has joined #lisp
eschulte has quit [Ping timeout: 272 seconds]
stacksmith has joined #lisp
<shka_>
beach: that's exactly why i like prove
<shka_>
it does not get in the way…
<beach>
shka_: Whenever you succeed, I am all ears.
<shka_>
hardly succeed
fikka has joined #lisp
Arcaelyx has joined #lisp
<beach>
My purpose in life is not to make testing frameworks a success.
<shka_>
but i like how i can use asdf extension to specify :test-file and i can simply use is and ok assertion
<j`ey>
this is my testing framework: (defun test (val) (cond ((eq val nil) (princ "E")) (t (princ "."))))
<shka_>
it is not magical, but it just helps to launch my tests
<pfdietz>
The question you want to ask is not "which testing framework?" but rather "what specifically do you want from a testing framework?"
<shka_>
yeah
<beach>
Thanks pfdietz.
<shka_>
well, i like prove
<shka_>
it does what i want it to do, and does not get in the way
<beach>
And whenever I asked that question, no existing framework raised its hand.
<shka_>
well, you expect a lot
<shka_>
something that can be perhaps even borderline impossible in fact
fikka has quit [Ping timeout: 252 seconds]
sauvin has quit [Remote host closed the connection]
<shka_>
in fact, a precise description of a computer program is non-trivial in its own right
<beach>
shka_: Oh, I am convinced it is impossible.
<beach>
shka_: Which is why I don't believe in testing frameworks.
quipa has quit [Ping timeout: 272 seconds]
<shka_>
well, does it has to be everything or nothing?
<beach>
I am totally convinced that this is one abstraction that everyone is looking for but that isn't possible in general.
<beach>
No, of course not.
<shka_>
right
<beach>
But the fact that there are so many and that people are not satisfied with what exists, tells me that there is a problem.
rumbler31 has joined #lisp
* dlowe
is satisfied. :D
<beach>
dlowe: Good for you.
vydd has joined #lisp
fikka has joined #lisp
<beach>
But why do we have so many testing frameworks?
<shka_>
they are easy to write
<shka_>
that's why
<beach>
Very plausible. Thanks.
<pfdietz>
There are ten copies of RT in Quicklisp, last I checked. Not only are they easy to write, they're easy to copy. :)
rumbler31 has quit [Ping timeout: 246 seconds]
<beach>
What is RT?
<pfdietz>
Very old testing framework written by Waters.
vlatkoB has quit [Remote host closed the connection]
<pfdietz>
Yes! I did make some changes to it for myself.
<jackdaniel>
testing is not an easy task, so many people have different ideas about how a tool to simplify it should look
<jasom>
beach: there are so many testing frameworks for the same reason I accidentally wrote a routing library for clack
<jasom>
beach: at first I didn't really need a routing library, so I didn't use one. Then I added a couple of features as I needed them. Then I realized that if I just moved it to a separate package, I had a complete routing library...
<pjb>
jackdaniel: right. IME, testing depends on the kind of software you are testing. Basically, most testing libraries are just unsuitable for a lot of software I had to test.
meepdeew has joined #lisp
<pjb>
perhaps it would be more worthwhile to write down specifications, eg. of testing framework interfaces to testing report systems. eg. https://testanything.org
<pjb>
(while it works, I find TAP very primitive…)
<pjb>
If asdf:test-op was specified to produce on *standard-output* a TAP stream, quicklisp could parse the results and produce nice reports…
<pjb>
You could use any testing framework, even multiple frameworks in a big system, as long as they would all follow a unique protocol, collected and interpreted by the surrounding tools and UI, it would be nice, integrated, and useful.
<pjb>
and consistent.
meepdeew has quit [Ping timeout: 246 seconds]
nika has quit [Quit: Leaving...]
th1nkpad has joined #lisp
dan64 has joined #lisp
thinkpad has quit [Ping timeout: 272 seconds]
th1nkpad is now known as thinkpad
fikka has joined #lisp
zooey has quit [Remote host closed the connection]
zooey has joined #lisp
LiamH has quit [Quit: Leaving.]
Kundry_Wag has joined #lisp
fikka has quit [Ping timeout: 240 seconds]
asarch has joined #lisp
<vsync>
I've gotten really into &aux lately... like in the last couple of days
<vsync>
before I wanted to use it just because it was out of fashion and I'm a weirdo even among the weirdos
<vsync>
never had an actual use though... but lately I have found one where you want to have an inner named closure
<vsync>
lets you avoid spurious LET nesting and keep scopes tidy
<asarch>
In the AllegroServe Tutorial, there is this expression:
<vsync>
and makes the control flow of the main function a little clearer I think
<asarch>
SBCL complains: "The name "EXCL" does not designate any package."
<asarch>
Old Lisp code?
<pfdietz>
It's Allegro-specific? Or, it's defined in some system you needed to load first?
<vsync>
jasom: LOL what I'm working on right now is literally a routing library
<Bike>
i think it is the name of the extensions package for allegro.
<vsync>
though mine is for healthcare provider coordination data exchange between health systems and EMRs
<vsync>
for that purpose but generalizable! just like embedded in many other systems I'm sure, heh
<pjb>
vsync: the real use case for &aux is when you write a macro and you need to add variables around a &body docstring-declarations-and-body <- NOT merely BODY!!!
<pjb>
Then &aux lets you avoid parsing the body.
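A sketch of pjb's point (DEFUN-TIMED is a made-up macro): because &AUX variables sit at the end of the generated lambda list, the macro can introduce a binding without splitting BODY into docstring, declarations, and forms; wrapping ,@BODY in a LET instead would misplace any docstring or DECLARE.

    (defmacro defun-timed (name lambda-list &body body)
      ;; BODY may begin with a docstring and declarations; it is spliced in
      ;; untouched and can refer to START.
      `(defun ,name (,@lambda-list &aux (start (get-internal-real-time)))
         ,@body))

    (defun-timed slow-sum (list)
      "Sum LIST and report the elapsed ticks."
      (prog1 (reduce #'+ list)
        (format t "~&took ~D ticks~%" (- (get-internal-real-time) start))))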
<vsync>
ooh
* vsync
will try to remember that
<vsync>
try and remember to parse in more detail later rather... under the gun right now
<pjb>
I don't mind indentation. It's done automatically, and I've got a 5000+ pixel wide screen :-)
<vsync>
wc -L **/*.lisp | tail -1 => '79 total'
* vsync
takes a bow
<pjb>
I assume your wc -L prints the max width of lines?
<shka_>
vsync: good
dan64 has quit [Ping timeout: 244 seconds]
<pjb>
yes, gnu wc does that.
<pjb>
You've got a tidy code base here. It must be horrible to read in some places…
<dlowe>
1326 total
<vsync>
yeah I stumbled across it the other day when I didn't have CLOC installed and until my clean kernel compile happens the system crashes if I access a reiserfs and then a jfs
<dlowe>
hmmm
<dlowe>
(I think that may be a data file)
<stylewarning>
I mind indentation! My terminal is only 132 chars wide
<shka_>
it is nice to display two columns of code in emacs though
<shka_>
or 3
<vsync>
I aim for 80 but 100 is okay if necessary, depending on context; 132 is my drop-dead
<pfdietz>
Replace all your "rn" substrings with "m" to save space. This is called "keming".
<shka_>
lol
pierpal has quit [Ping timeout: 272 seconds]
<pjb>
Well, the max line length is not a good measure. You'd have to look at the typical line length. My files contain outliers in comments or in data. Sometimes in code, but rarely.
fikka has quit [Ping timeout: 246 seconds]
jinkies has joined #lisp
wiselord has joined #lisp
fikka has joined #lisp
fikka has quit [Ping timeout: 240 seconds]
fikka has joined #lisp
regreg has joined #lisp
Kundry_Wag has joined #lisp
fikka has quit [Ping timeout: 240 seconds]
pierpa has joined #lisp
cage_ has quit [Quit: Leaving]
Kundry_Wag has quit [Ping timeout: 240 seconds]
igemnace has joined #lisp
fikka has joined #lisp
shka_ has quit [Ping timeout: 240 seconds]
Kundry_Wag has joined #lisp
SenasOzys has quit [Remote host closed the connection]
<emaczen>
xb
fikka has quit [Ping timeout: 245 seconds]
shifty has joined #lisp
Kundry_Wag has quit [Ping timeout: 240 seconds]
SenasOzys has joined #lisp
fikka has joined #lisp
warweasle has quit [Quit: rcirc on GNU Emacs 24.4.1]
fikka has quit [Ping timeout: 252 seconds]
<makomo>
ugh, i don't like it when i have to use composite keys for hash tables, it feels kinda clunky
<makomo>
and then again, multiple layers of hash tables is also clunky
Kundry_Wag has joined #lisp
SenasOzys has quit [Remote host closed the connection]
fikka has joined #lisp
Kundry_Wag has quit [Ping timeout: 272 seconds]
Kundry_Wag has joined #lisp
fikka has quit [Ping timeout: 240 seconds]
Kundry_Wag has quit [Ping timeout: 245 seconds]
Kundry_Wag has joined #lisp
Kundry_Wag has quit [Ping timeout: 244 seconds]
graphene has quit [Remote host closed the connection]
graphene has joined #lisp
Kundry_Wag has joined #lisp
Kundry_Wag has quit [Ping timeout: 244 seconds]
fikka has joined #lisp
dented42 has joined #lisp
fikka has quit [Ping timeout: 252 seconds]
rippa has quit [Quit: {#`%${%&`+'${`%&NO CARRIER]
fikka has joined #lisp
Kundry_Wag has joined #lisp
stacksmith has quit [Remote host closed the connection]
eschulte has joined #lisp
Kundry_Wag has quit [Ping timeout: 244 seconds]
<no-defun-allowed>
pfdietz: it's keming
<pfdietz>
Uh huh
atgreen has joined #lisp
Roy_Fokker has joined #lisp
ebrasca has quit [Remote host closed the connection]
<aeth>
Does anyone run CL on a processor with many cores? AMD has some 16 core / 32 thread and even (for servers) 32 core / 64 thread.
oni-on-ion has quit [Read error: No route to host]
<Bike>
sure,i guess
<no-defun-allowed>
I run it on 6c/12t.
<aeth>
that might count
<no-defun-allowed>
That's above average last time I checked.
<aeth>
I'm just wondering how programs are written with > 10 threads
<aeth>
As in, how it affects the architecture.
<pjb>
why 10?
<no-defun-allowed>
I usually use lparallel and bordeaux-threads for stuff so I'm not paying much attention, sorry.
<pjb>
I've got 8 cores. wouldn't the question be about >8 threads?
jkordani has joined #lisp
<aeth>
pjb: 4 cores / 8 threads has been normal for a very long time.
<no-defun-allowed>
I tell lparallel I want 13 workers even with the new brainfuck scheduler.
<aeth>
pjb: 20+ is very new, at least in x86-64
<jasom>
aeth: threads predate parallelism by a lot on PCs. In the 90s a lot of code was written multithreaded to avoid io blocking.
<aeth>
It was just a handful of very expensive Xeons until recently.
<pjb>
Anyways, don't confuse multiprocessing and multithreading.
<aeth>
By cores / threads I'm referring to SMT (Hyperthreading in Intel's terminology)
<no-defun-allowed>
cl-decentralise spawns a thread per connection since they actually do things in SBCL instead of splitting up one interpreter's time (cough CPython cough).
ebrasca has joined #lisp
jkordani_ has quit [Ping timeout: 246 seconds]
aindilis has quit [Ping timeout: 272 seconds]
<no-defun-allowed>
cl-vep uses one generator and a lock for the generator. When a worker needs a new frame it holds the lock and funcalls the generator. This method (work "source" and lock) hasn't given me issues in the past.
<makomo>
concurrency terminology is tricky, and people love to abuse it
<no-defun-allowed>
I've done high thread counts to avoid http blocking too. That method scaled pretty well up to 64 threads.
<no-defun-allowed>
However, the ffmpeg interface for cl-vep is shit slow and only puts out 4fps and only makes a load of 1.5.
<jasom>
aeth: I just spawn more worker threads. I haven't had any issue at 16 worker threads on my 8/16 ryzen
<aeth>
jasom: anyway, what I mean is things architected so that they'd benefit if, e.g., AMD released a 64-core CPU tomorrow.
<no-defun-allowed>
lparallel would still probably work.
graphene has quit [Remote host closed the connection]
<jasom>
aeth: minimize the serial work, and don't contend for shared resources.
<makomo>
i.e. conc-name-option::= :conc-name | (:conc-name) | (:conc-name conc-name)
<Bike>
defstruct is weird.
Jesin has quit [Quit: Leaving]
<makomo>
why would you want just :conc-name? is that supposed to construct (within the context of the grammar) a non-list version of (:conc-name conc-name)?
<Bike>
:conc-name is equivalent to (:conc-name)
<aeth>
What technique do people use for non-consing interthread communication?
<Bike>
"A defstruct option can be either a keyword or a list of a keyword and arguments for that keyword; specifying the keyword by itself is equivalent to specifying a list consisting of the keyword and no arguments."
Kundry_Wag has joined #lisp
<makomo>
Bike: guess i should have read further, but it confused me right away so i didn't bother :c. thanks
<jasom>
aeth: lparallel has bounded queues implemented with vectors; I haven't checked the code, but it ought-not be consing if implemented right.
<cgay>
:conc-name lol
<aeth>
jasom: I'm guessing you'd queue or pass messages with things that are unboxed (like fixnums) or with keywords/symbols
<jasom>
aeth: setf on an array is going to be non-consing. Creating the value to setf in that location *might* be consing
Kundry_Wag has quit [Ping timeout: 244 seconds]
<aeth>
jasom: thus the restriction for it to be e.g. an (unsigned-byte 32) array
<jasom>
aeth: I
Arcaelyx has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<jasom>
aeth: I'm pretty sure an unspecialized array will not cons if you do something like (setf (aref foo) 3) since 3 is a fixnum
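One common shape for the non-consing case aeth is asking about, as a sketch (overflow handling omitted): a preallocated fixnum ring buffer guarded by a bordeaux-threads lock and condition variable. Storing a fixnum into a specialized vector allocates nothing.

    (defstruct ring
      (data (make-array 1024 :element-type 'fixnum))
      (head 0 :type fixnum)
      (tail 0 :type fixnum)
      (lock (bt:make-lock))
      (not-empty (bt:make-condition-variable)))

    (defun ring-push (ring value)
      (bt:with-lock-held ((ring-lock ring))
        (setf (aref (ring-data ring) (mod (ring-tail ring) 1024)) value)
        (incf (ring-tail ring))
        (bt:condition-notify (ring-not-empty ring))))

    (defun ring-pop (ring)
      (bt:with-lock-held ((ring-lock ring))
        (loop while (= (ring-head ring) (ring-tail ring))
              do (bt:condition-wait (ring-not-empty ring) (ring-lock ring)))
        (prog1 (aref (ring-data ring) (mod (ring-head ring) 1024))
          (incf (ring-head ring)))))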
<aeth>
Would you sync with timers or would you have a thread sleep when done until it receives a start-working-again call?
<jasom>
aeth: synchronizing using clocks is hard to get right, I'd avoid it all together
Kundry_Wag has joined #lisp
Kundry_Wag has quit [Ping timeout: 240 seconds]
<aeth>
Oh, there *is* a Threadripper 32 core (64 thread) CPU, the 2990WX. Wikipedia just put it in a separate chart much further down because it's Zen+.
Kundry_Wag has joined #lisp
<aeth>
So there is a chance that a (rich) Lisper is already running an application written with performance on a near-manycore architecture in mind (if you count the 64 threads via SMT)
<aeth>
Only $1799
<jasom>
aeth: Talos is available with 128T (or maybe even more) configurations since POWER8 is 4T per core
<aeth>
jasom: Yes, but most CL implementations work best on x86-64 at the moment.
<jasom>
2x 22-core POWER9 is the highest configuration which translates to 196 threads I think?
<aeth>
Is PPC compatible with POWER8? I'm not sure if any CL runs on POWER8 if not. SBCL seems to have PPC and PPC64.
Kundry_Wag has quit [Ping timeout: 245 seconds]
<jasom>
POWER9 is PPC64, but I think the talos is designed to run in little-endian mode, and I don't know if SBCL does that.
<aeth>
PPC64 is marked as in progress on the SBCL download page. I wouldn't be surprised if threading doesn't work
<jasom>
so multiprocess then :)
<aeth>
I wonder if there's a 2 socket AMD Threadripper mobo. That would be 2x 2x 32 at a max (right now... expect it to double next gen), so 128.
<jasom>
and the IBM E980 can support up to 16 12 core SMT8 processors for more cores than I care to do the math on
<jasom>
aeth: there is 2S EPYC, not sure about threadripper
<makomo>
is it possible to remove proclamations, concretely, an ftype proclamation?
<aeth>
jasom: Epyc has the same cores at a max, but 2x to 4x the cost :-p
Bike has quit [Ping timeout: 252 seconds]
<aeth>
jasom: But yes, IBM's POWER is definitely worth mentioning in this kind of discussion. Last I checked, IBM seems to be keeping up with Intel, which is 10ish cores behind AMD.
<aeth>
(That's probably one of the ways they get people to pay 2x to 4x as much as the consumer line.)
graphene has quit [Remote host closed the connection]
kooga has quit [Quit: :]
graphene has joined #lisp
anewuser has joined #lisp
Jessin has joined #lisp
Essadon has quit [Quit: Qutting]
Jessin is now known as Jesin
meepdeew has joined #lisp
<Demosthenex>
is there a way to start sbcl with swank using a port number from the command line? i'm trying to work on a few different things on a remote host, and i can connect with ssh/port forwarding. just starting sbcl with swank on separate ports is awkward
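One common approach, assuming Quicklisp is installed and 4006 is just a placeholder port, is to pass the swank setup as --eval forms on the sbcl command line:
    sbcl --eval '(ql:quickload :swank)' \
         --eval '(swank:create-server :port 4006 :dont-close t)'
Each remote image then listens on its own port, which can be forwarded over ssh (e.g. ssh -L 4006:localhost:4006 host) and connected to with M-x slime-connect.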
kajo has quit [Quit: From my rotting body, flowers shall grow and I am in them and that is eternity. -- E. M.]
<aeth>
hmm, ECL also supports PPC.
kajo has joined #lisp
kajo has quit [Client Quit]
suskeyhose has quit [Remote host closed the connection]
stacksmith has joined #lisp
stacksmith has quit [Client Quit]
stacksmith has joined #lisp
kajo has joined #lisp
suskeyhose has joined #lisp
<stacksmith>
Good morning
<suskeyhose>
hello
wiselord has quit [Ping timeout: 264 seconds]
nirved has quit [Ping timeout: 240 seconds]
makomo has quit [Ping timeout: 272 seconds]
<Demosthenex>
jasom: love those POWER systems. i'm an aix consultant ;]
<Demosthenex>
i'd be thrilled to see customers using CL on POWER in production
makomo has joined #lisp
Bike has joined #lisp
regreg has quit [Remote host closed the connection]
<Bike>
it's just a program with parallelism. they're nice with preemptive multitasking too.
Kundry_Wag has quit [Ping timeout: 252 seconds]
<aeth>
Bike: But consider e.g. a game or similar application that can be divided into several systems. In today's world, if it's parallel at all, each major system would probably get its own thread. That'll only scale up to the number of major systems, which is probably far less than 90.
meepdeew has quit [Remote host closed the connection]
<Bike>
i mean running code that does one task in multiple processes.
<Bike>
like rendering, if you like games
<aeth>
yes
<Bike>
in that case whether you have five or ninety processes isn't important for basic architecture.
<aeth>
Except for the most part, afaik what you'd typically see when the end user has a quad core is maybe giving each task its own thread to be "parallel", not also subdividing those tasks.
<aeth>
(parallel in quotes because there are like 5 definitions at work)
<aeth>
(in this one conversation)
DGASAU has quit [Read error: Connection reset by peer]
<aeth>
AI on its own thread vs. AI in many threads.
<Bike>
if you already have an idea for how to structure a program for multiprocessing, what are you even asking?
fikka has joined #lisp
<Bike>
just use lparallel or whatever.
<aeth>
Bike: I'm wondering if anyone has successfully deployed a CL application that takes advantage of one of these CPUs with a "very high" core count. ("very high" being relative to the normal 2-4 cores that you would have seen for this kind of hardware until recently)
<Bike>
the machine i use to build clasp has thirty something cores. during build we have each core do a compile-file so we can compile a bunch of files at once.
DGASAU has joined #lisp
<Bike>
it's not anything new architecturally
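A rough sketch of that build pattern, assuming independent files and using uiop to spawn one fresh sbcl image per file; the sbcl invocation and file list are placeholders, and a real build would also throttle to the core count rather than launching everything at once:
    ;; Spawn one sbcl process per file so compilations run in parallel,
    ;; then wait for all of them to finish.
    (defun compile-files-in-parallel (files)
      (let ((processes
              (loop for file in files
                    collect (uiop:launch-program
                             (list "sbcl" "--non-interactive"
                                   "--eval" (format nil "(compile-file ~S)" file))))))
        (mapc #'uiop:wait-process processes)))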
<whartung>
I imagine someone has. At those core counts, you’re getting into “super computing” kinds of designs and applications.
fikka has quit [Ping timeout: 252 seconds]
<whartung>
Funny Bike, back in the day, when I had the CPU meter on my desktop, I’d just kick off builds until it reached 90%. … by hand. Whenever it dipped down, I’d kick off another one.
Kundry_Wag has joined #lisp
<Bike>
yeah, pretty much goes like that.
<Bike>
i did the same thing in undergrad doing biology simulations
<whartung>
This was an accounting system :)
<whartung>
but, since it seemed to be going up, I guess I wasn’t I/O bound, so gogogo!
<whartung>
My friend laments how horrible Swift builds are on the Mac — it’s apparently very CPU intensive.
<no-defun-allowed>
they don't have much CPU to give
<no-defun-allowed>
should test it on a linux box
Kundry_Wag has quit [Ping timeout: 252 seconds]
dale has quit [Quit: dale]
<aeth>
Bike: I would personally probably just manually implement the required algorithms and data structures when needed in my case rather than using a library. Micromanaging this sort of thing is kind of a key point of the engine (when I get there), not a secondary feature.
<aeth>
i.e. I have as full control over the game loop as I can get.
Kundry_Wag has joined #lisp
<Bike>
okay.
<no-defun-allowed>
aeth: what kind of engine are you making?
<aeth>
It's mainly due to the requirements for 100 FPS logic and no consing at the library level in the game loop (I have consed in all of my substantial tests, of course... too hard not to... but that's in the user's code)
Arcaelyx has joined #lisp
<Bike>
"or something"
slyrus has quit [Quit: slyrus]
slyrus1 is now known as slyrus
<aeth>
no-defun-allowed: A 3D game engine. I switched my original design (nothing is stopping it from being general purpose, of course) from first person to third person strategy (i.e. far away camera) because of fewer physics requirements (otherwise the project would become more about writing a physics engine than a game engine)
Kundry_Wag has quit [Ping timeout: 272 seconds]
<no-defun-allowed>
i see
slyrus1 has joined #lisp
Kaisyu has joined #lisp
fikka has joined #lisp
<aeth>
I just need to do mouse selection of an entity and a destination and then I have arbitrary movement on a 2D plane within the 3D world (e.g. ships on water). After that (soundless) games would technically be possible, although still fairly difficult because the text rendering isn't complete yet.
<whartung>
what parts do you plan on forking out to separate threads/processes?
Kundry_Wag has joined #lisp
<aeth>
Ideally, everything eventually.
<aeth>
I'm guessing the input and the final update of the authoritative game state would have to stay on a core thread.
<Demosthenex>
omfg, lparallel++.
<Demosthenex>
one line of wrapper (pmapc vs mapc) and i have all 8 cores doing inserts.
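For reference, the setup behind that kind of one-line change looks roughly like this; insert-row and rows are hypothetical stand-ins for the real insert function and data:
    ;; One lparallel worker per core (8 to match the machine above).
    (ql:quickload :lparallel)
    (setf lparallel:*kernel* (lparallel:make-kernel 8))
    ;; pmapc is a drop-in parallel replacement for mapc.
    (lparallel:pmapc #'insert-row rows)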
fikka has quit [Ping timeout: 244 seconds]
<whartung>
as I understand it, it’s pretty difficult to multitask the (a?) rendering pipeline.
<whartung>
you can easily have a rendering thread, a state thread, several AI threads. But the busy thread is typically the rendering thread.
Kundry_Wag has quit [Ping timeout: 252 seconds]
<Bike>
you don't really have the opportunity anyway since the graphics library wants to do rendering in mysterious ways
<whartung>
I don’t know anything about OpenGL and its ilk.
<jasom>
Bike: there are some tradeoffs you can make; e.g. make the serial performance much worse in trade for more parallelism; the number of cores you have can decide if you will make that tradeoff
<Bike>
mostly because rendering is so hard we've farmed it off to repurposed bitcoin-mining chips
<whartung>
well, they happen to be pretty good at the job. If they weren’t, we wouldn’t be handing the work off to them.
<Bike>
jasom: fine tuning, yeah
<whartung>
as I understand it, the game’s job is to get the models oriented properly, send them to the GPU, and make sure the proper textures are loaded at the same time. The GPU does the actual rendering stuff.
Pixel_Outlaw has joined #lisp
<whartung>
the game handles all the camera things (I think)
<jasom>
aeth: my most recent webapp will probably scale well past 8 cores; all the state is in the database and each worker thread has its own DB connection. It's when you get past 32 cores that things can get sticky.
<whartung>
all working on the same transaction jasom ? or 8 different ones?
<jasom>
whartung: all different ones
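A minimal sketch of that one-connection-per-worker pattern, assuming bordeaux-threads; connect-db and handle-request are hypothetical stand-ins for the real database layer and request handler:
    (defvar *db-connection* nil
      "Rebound inside each worker so threads never share a connection.")
    (defun start-worker ()
      (bt:make-thread
       (lambda ()
         ;; Rebinding the special inside the thread makes it thread-local.
         (let ((*db-connection* (connect-db)))
           (loop (handle-request *db-connection*))))))
Because all shared state lives in the database, adding workers (or whole machines) scales until the database itself becomes the bottleneck.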
<whartung>
one of our apps seems to have no problems saturating a 16 core machine right now.
<aeth>
whartung: the rendering is by far the heaviest part ime
<aeth>
even with the GPU doing most of the work!
<whartung>
but the gpu guys have a way of getting it to work on multiple GPUs (2 GPUs, alternating scan lines or something like that)
<whartung>
it’s a naturally stateless process as I understand it
<aeth>
tell that to the OpenGL designers
<whartung>
but now they have the ray tracing GPUs. OH MY do those demos look amazing.
<Bike>
the original opengl was described as a state machine.
<Bike>
they've moved away from that in fits and starts, thus resulting in the current mess
<whartung>
I thought you could gang 2 cards together
<Bike>
multi gpu, yes.
<aeth>
whartung: It's generally not worth it to have multiple GPUs. The game (or application) has to specifically support it (and probably support it differently for nvidia and AMD) and it's usually better just to get the next tier up in graphics cards so it probably only applies to you if you're using Titans.
<aeth>
For nvidia it's called SLI and the general advice on forums has been to not do it, at least for the past few years.
<whartung>
ah…too bad
<White_Flame>
whartung: different generations had different limitations. I know there were 3x linked card setups, and probably 4x as well
<White_Flame>
I think some also just send along the PCIe bus instead of having their own inter-card connectors
<White_Flame>
and now NVLink is going to be nVidia's approach going forward
<aeth>
but apparently nvidia just abandoned SLI on the next generation (that isn't out yet) in favor of "NVLink Bridge"
fikka has joined #lisp
<aeth>
I think Vulkan does things differently here.
<whartung>
ok
<whartung>
I’ve spent enough treasure on computer equipment, I’m glad I was never really in the GPU chase :)
<White_Flame>
nvidia is going off the deep end with prices anyway, best to ignore it for now, too
<aeth>
But when I was saying that rendering is heavy, I mean heavy on the CPU. The GPU is doing most of the work, but #'draw still seems to be the most CPU intensive function at the moment for me. I think Vulkan would allow this to be split up, though.
<no-defun-allowed>
how reflective can you make a lisp?
asarch has quit [Quit: Leaving]
<White_Flame>
no-defun-allowed: most lisps are fully reflective, if you use their implementation internals
<aeth>
White_Flame: You see that all over the place. CPUs, GPUs, smartphones, etc. The high end is improving, but also getting more expensive.
suskeyhose has quit [Remote host closed the connection]
<White_Flame>
aeth: RAM prices are settling down, though, finally returning to where they were 3 years ago or wherever it was before that spike
<whartung>
depends on how powerful a polisher you’re using, no-defun-allowed, and what kind of compound you’re using
fikka has quit [Ping timeout: 252 seconds]
<whartung>
no, I understand aeth. Just getting the frames set up. A lot of that is the modeling side (modeling the car(s) for example), and things like the physics engine. Is it worthwhile to farm out modeling the suspension dynamics of 40 cars in a NASCAR sim to different CPUs per frame? I dunno. It may not even be possible if you’re modeling drafting and what not.
<whartung>
ignoring things like contact
<no-defun-allowed>
i'm using uuuh alan kay's premium shiny program sauce
<whartung>
yea, his stuff can get pretty shiny.
<no-defun-allowed>
whartung: also it's made of pure cons oil
<aeth>
whartung: but on the other hand I think a large part of strategy games is embarrassingly parallel, or close enough.
<aeth>
thousands of separate entities vs. one very detailed entity.
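In lparallel terms that embarrassingly parallel case is just a parallel map over the entity collection, with the authoritative commit kept serial; next-state, commit-state, and entities are hypothetical:
    ;; Compute each entity's next state in parallel, then commit the
    ;; results serially on the main thread that owns the game state.
    (let ((next-states (lparallel:pmapcar #'next-state entities)))
      (mapc #'commit-state entities next-states))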
dmiles has quit [Ping timeout: 240 seconds]
<whartung>
yea it can be, question is whether it’s CPU-intensive enough to bother breaking it out.
it3ration has quit [Ping timeout: 240 seconds]
<aeth>
I've definitely played some building strategy games (games with workers doing tasks and carrying resources around a village) where it uses all the CPU