scymtym has quit [Remote host closed the connection]
nchambers has joined #lisp
angavrilov_ has joined #lisp
angavrilov has quit [Remote host closed the connection]
angavrilov_ has quit [Ping timeout: 246 seconds]
angavrilov_ has joined #lisp
Lycurgus has joined #lisp
SaganMan has quit [Ping timeout: 250 seconds]
orivej has quit [Ping timeout: 250 seconds]
nirved has quit [Killed (niven.freenode.net (Nickname regained by services))]
nirved has joined #lisp
jack_rabbit_ has joined #lisp
dddddd has quit [Ping timeout: 240 seconds]
dddddd has joined #lisp
Oddity has joined #lisp
quazimodo has joined #lisp
stardiviner has joined #lisp
khrbt has joined #lisp
mrcom has quit [Read error: Connection reset by peer]
Lycurgus has quit [Ping timeout: 272 seconds]
warweasle has joined #lisp
angavrilov_ has quit [Ping timeout: 240 seconds]
mrcom has joined #lisp
angavrilov_ has joined #lisp
mooch has quit [Ping timeout: 260 seconds]
SaganMan has joined #lisp
angavrilov_ has quit [Remote host closed the connection]
angavrilov_ has joined #lisp
jack_rabbit_ has quit [Ping timeout: 250 seconds]
angavrilov_ has quit [Ping timeout: 250 seconds]
angavrilov_ has joined #lisp
hectorhonn has joined #lisp
<hectorhonn>
morning everyone
rumbler31 has joined #lisp
Essadon has quit [Quit: Qutting]
slyrus1 has quit [Ping timeout: 245 seconds]
dvdmuckle has quit [Quit: Bouncer Surgery]
SaganMan has quit [Ping timeout: 244 seconds]
FreeBirdLjj has joined #lisp
dvdmuckle has joined #lisp
angavrilov_ has quit [Ping timeout: 250 seconds]
razzy has quit [Ping timeout: 246 seconds]
warweasle has quit [Quit: rcirc on GNU Emacs 24.4.1]
hectorhonn has quit [Ping timeout: 256 seconds]
asedeno has joined #lisp
<nitrix>
Hello, I have a probably unpopular question. It's my understanding that Lisp more often requires runtime facilities to accommodate programs that need to self-modify and re-compile during their execution.
<Bike>
self modifying code is strongly discouraged.
<nitrix>
I've seen plenty of strategies, interpreters, JIT, etc. Could it also be entirely compiled or even transpiled without bringing additional runtime?
<Bike>
yes. the commercial lisp compilers exclude the compiler from their images.
<nitrix>
I just learned today that macros are powerful enough that self-modifying code is basically no longer a thing.
rumbler31 has quit [Remote host closed the connection]
<Bike>
it's actually pretty rare for implementations or user code to use the compiler at runtime. it's just available.
<nitrix>
Bike: I see. This is great news! :)
hectorhonn has joined #lisp
<nitrix>
Bike: That means that during compilation, a compiler could obviously treat the input programs as lists, run the macros just fine, but then eventually erase or optimise away the lists? Until now I was very worried that programs would have to also be represented as lists at runtime and incur the overheads associated with them.
<Bike>
what? no. it would be crazy inefficient to keep them as lists.
<nitrix>
(e.g. additional indirection, cache locality problems, a big switch case interpreter somewhere, etc)
<Bike>
i mean, and operate on them like that.
<Bike>
an implementation might save lists, but only as like, a source interrogation thing. the code is just native code.
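This is easy to see for oneself. In SBCL, for example, even an anonymous function is compiled straight to machine code, and `disassemble` will print the generated native instructions (the exact assembly varies by platform and version):

```lisp
;; SBCL compiles functions to native code by default;
;; DISASSEMBLE shows the machine instructions it produced.
(disassemble (lambda (x) (1+ x)))
```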
FreeBirdLjj has quit [Remote host closed the connection]
<nitrix>
Bike: So like, it'd keep the AST just in case you decide to invoke said compiler at runtime, but otherwise, would use the executable code that was generated (and optimised) ahead-of-time?
<nitrix>
That's neat.
<Bike>
in all likelihood it wouldn't even keep the ast
<Bike>
if you compile a function that's already been compiled, it doesn't have to do anything
<nitrix>
How come the implementations I looked at are still quite far from C in terms of performance?
FreeBirdLjj has joined #lisp
<Bike>
the language semantics involve freer typing and less undefined behavior and weird arithmetic
angavrilov_ has joined #lisp
<Bike>
probably nothing relating to the presence or lack of a runtime compiler
<nitrix>
So, presumably, if one added some type inference or gave up some of the behavior/arithmetic safety, it wouldn't be unrealistic to expect something similar to C?
khisanth__ has quit [Ping timeout: 272 seconds]
<Bike>
yes. sbcl has a lot of type inference and can build very fast code if you know what you are doing.
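A sketch of the kind of declarations involved (the function name is made up for illustration): with a concrete element type and aggressive optimization settings, SBCL can compile the loop body down to unboxed float arithmetic.

```lisp
;; Type and optimization declarations let SBCL generate
;; C-like code for this loop: no boxing, no generic dispatch.
(defun sum-floats (v)
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 0)))
  (let ((s 0d0))
    (declare (type double-float s))
    (dotimes (i (length v) s)
      (incf s (aref v i)))))
```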
<nitrix>
I know I sound a bit paranoid, but I've explored a lot of languages in my career (C, Go, PHP, Haskell), for many years each, and while I think List really has got something beautiful worth looking into, my main use for it would be to develop a game engine, which would be very disappointing if the performance tanks.
<nitrix>
s/List/Lisp :P
<Bike>
there's people who do lisp for game development. i think the channel is #lispgames?
<nitrix>
o:
<nitrix>
Alright, that seems to cover all my concerns for now :)
<Bike>
i know aeth hangs around here and does a bunch of quaternion arithmetic or whatever
<Bike>
i think someone did an fps engine at some point
<nitrix>
So you can compile and optimize it, just like any language, but you have the benefits of a really good macro system because it works with your code as an AST instead of strings.
<Bike>
it's well within the realm of possibility
FreeBirdLjj has quit [Remote host closed the connection]
<nitrix>
Plus, if you really care, a runtime compiler, but at that point, I think
<nitrix>
I think I'd prefer dynamically loading/unloading libraries and patching the symbol table. Anyway.
<nitrix>
Cool stuff! :D
<nitrix>
Bike: Which lisp flavor and impl do you use?
<Bike>
sbcl, and i develop clasp, which is new.
jack_rabbit_ has joined #lisp
<nitrix>
Bike: What features of lisp requires GC?
<nitrix>
Is it just because of the free variables from lambda?
<Bike>
strictly speaking a gc is not required. you can just keep allocating memory until you die.
dale has joined #lisp
<Bike>
in C when you allocate memory (as with malloc), you are expected to manually indicate when you are done with that memory, so that the system can later give it to you for another allocation.
<no-defun-allowed>
you'd probably get better perf in a game than doing it in python or js for example, maybe on par with java
<nitrix>
I was hoping for a more practical answer, even if I understand that theoretically we can get away with a lot :P
<Bike>
in lisp this manual indication is not required.
<Bike>
the gc takes care of figuring out when you are done with memory for you.
<nitrix>
I understand manual memory management and GCs, I wrote a tricolor collector that I intend to integrate in Haskell's GHC :)
<Bike>
i don't understand how you could know what a gc is without knowing why you need one.
<Bike>
lots of stuff allocates memory.
<Bike>
it's definitely not just closures.
<Bike>
(list 1 2 3) => check it, memory
<nitrix>
Because my understanding of Lisp right now is limited to just knowing that it's a tree. I don't know what facilities or libraries or semantics people have created on top of the symbols.
<nitrix>
`lambda` is one that I figure would cause problems, I'm not sure what else exists.
<nitrix>
Too new to this ecosystem yet :)
<no-defun-allowed>
there are types other than lists, symbols and ints in lisp
<Bike>
any data structure?
<no-defun-allowed>
eg: arrays, classes and objects, structs, and some more
<no-defun-allowed>
(let ((a '(nil))) (rplaca a a) a) would not be fun reference counting or doing manual memory management on
<Bike>
Well Actually in that case the list is constant, so it can be done statically (also modifying it is illegal)
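A conforming variant of that example, using a freshly consed cell (so the mutation is legal) rather than a constant quoted list; reference counting alone could never reclaim this structure, which is one concrete reason a tracing GC is needed:

```lisp
;; A heap-allocated cons whose car points back to itself.
;; Its reference count never drops to zero, so only a
;; tracing collector can ever reclaim the cycle.
(let ((a (list nil)))   ; fresh, mutable cons
  (setf (car a) a)      ; create the cycle
  a)
```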
<nitrix>
Bike: For the stuff that's allocated, it's relatively simple to deallocate when it goes out of scope, but what can have lifetimes bigger than the scope? Can you even alias memory?
<nitrix>
I saw global variables `*foo*`.
<Bike>
that's a really confusing question.
<Bike>
but yes, for example you could store the value in a global variable.
khisanth__ has joined #lisp
<Bike>
or in a structure referred to in an outer scope, or in a global variable
<Bike>
or you could just return it.
<Bike>
(lambda () (list 1 2 3)) is a function that allocates a list and returns it
<Bike>
it doesn't "go out of scope" like in C++
<nitrix>
Bike: No, but you still have a single owner of the returned value and you can perform escape analysis to know when it'll no longer be used.
<Bike>
not always.
<Bike>
(defun foo () (list 1 2 3))
<Bike>
now anything can call foo
<Bike>
and if anything isn't in the same compilation unit, it won't know the definition of foo
<nitrix>
And this is more or less what I'm asking. Where are the situations where you might start having various pointers to the same memory (memory aliasing)?
<Bike>
so it doesn't know to do any kind of analysis
<Bike>
(let ((x (list 1 2 3))) (let ((y x)) ...)) is aliasing, i guess
<Bike>
similar with calling a function
<nitrix>
Mhhh. Well, I still need more reading on Common Lisp to really be able to ask more intricate questions, but it's good to know there's some of the active community members in here :)
<nitrix>
Clasp looks pretty cool.
<Bike>
it's reasonably active here
m0w has quit [Ping timeout: 240 seconds]
terminal625 has joined #lisp
pjb has quit [Read error: Connection reset by peer]
pjb has joined #lisp
<nitrix>
I'm gonna try to resist the urge of looking at alternative syntaxes that'd eliminate the parens. I hear every couple months people re-invent a new one, haha.
<verisimilitude>
There's a general rule, nitrix, that if one is willing to use a less flexible solution, one can get further than otherwise.
<Bike>
nitrix: generally they make macros more annoying
<verisimilitude>
In the case of older Lisps, things such as functions were lists and all of that.
<verisimilitude>
However, in Common Lisp, functions are opaque.
<verisimilitude>
This is to enable compilation.
<verisimilitude>
Rather than self-modifying code, one can often get away with a solution that generates functions at runtime in known ways, instead.
<verisimilitude>
Instead of having FEXPRs, which allow one to define powerful new forms, macros get you almost all of the way there with the advantage that they can be compiled and done away with.
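A small illustration of that point, using a hypothetical `unless-zero` form: written as a macro, it is expanded once at compile time and disappears entirely from the generated code, where a FEXPR would have to be interpreted at every call.

```lisp
;; A hypothetical new form, defined as a macro rather than a FEXPR.
(defmacro unless-zero (n &body body)
  `(unless (zerop ,n) ,@body))

;; The expansion happens at compile time; no trace of the
;; macro remains in the compiled code.
(macroexpand-1 '(unless-zero x (print x)))
;; => (UNLESS (ZEROP X) (PRINT X))
```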
<verisimilitude>
You get the gist of what I'm telling you, by now, nitrix.
<nitrix>
verisimilitude: Good. The material I stumbled upon must have been pretty old. I'm always pleased to see that there's a divide between theoretical and practical, yet we're able to not lose the main essence of what makes Lisp, well, Lisp. Cool stuff :)
<verisimilitude>
By all means, I'd be interested in heavily self-modifying code, but I've yet to have a use for it that isn't better covered by something else.
terminal625 has quit [Quit: Page closed]
<verisimilitude>
As an example, the OO system is rather flexible for dynamically changing things and all of that.
Lycurgus has joined #lisp
* nitrix
nods and takes notes to look at the OO system eventually.
<verisimilitude>
Oh, well I never need the OO system, either.
Oladon has joined #lisp
<verisimilitude>
But, yes, that does offer much of what self-modifying code does.
<verisimilitude>
It's more structured, though, you could argue.
<nitrix>
Not a big fan either, considering my background, but I guess it can have its uses. Perhaps if I were to implement some sort of concurrent... messaging... actor model or something.
<Bike>
i would say it's pretty unusual to use the modification tolerance features for anything but development
<verisimilitude>
I should specify CLOS, the OO system, is real OO, not what Java and C++ parade around as.
<nitrix>
Not surprised there. Lisp and Smalltalk are almost siblings :P
<nitrix>
This will be fun! :)
<verisimilitude>
I've never read the entire source of SHRDLU, but I believe it was an example of a program that, at runtime, analyzed a sentence and compiled a new function purely from that, as self-modifying code.
<verisimilitude>
That's an older Lisp than Common, though.
<verisimilitude>
But, yes, from what I've read from and concerning it, I believe it would traverse the sentence and run arbitrary programs building up a program it then executed.
<verisimilitude>
I may not be quite correct, though, so take it with a grain of salt.
robotoad has joined #lisp
<nitrix>
All good. I verify all the information I run into :)
<verisimilitude>
Know the name for what gives Lisp and others this power, nitrix: homoiconicity
<verisimilitude>
That is the state of using a structure to represent programs and data, permitting programs to create and then execute programs.
<verisimilitude>
Machine code is homoiconic, as an example.
myrkraverk has quit [Ping timeout: 250 seconds]
<no-defun-allowed>
Bike: yep that was a bad and nonconforming example of "aaaaaaa! circular references!" but I hope it still described why GC is needed.
<verisimilitude>
Note that it is not homoiconicity when a program manipulates a string that has the text of a program in it, in general. A string can represent a program in such a language, but is not the actual structured representation of it, such as an AST.
<verisimilitude>
This doesn't apply to languages which do use strings as the actual representation of a program, of which there are a few.
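The idea is easy to demonstrate: a program is just a list, so a program can build another program as ordinary data and then execute it.

```lisp
;; Build a program as a plain list, then execute it.
;; The data structure (a list) and the program are the same thing.
(let ((form (list '* 6 7)))
  (eval form))   ; => 42
```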
<nitrix>
Man, this Lisp thing has got me really fired up. The past months were dedicated to designing a toy language. Many, many iterations and completely unable to settle on the features that I wanted, so I stripped everything down to the bare minimum. I even considered lambda calculus at some point, then graphs like flow-based programming or the data flow ones, then I figured a tree would be better, then comes the
<nitrix>
challenge of writing/serializing trees, and then earlier this week it jumped out at me -- I was freaking rediscovering Lisp.
<verisimilitude>
That's amusing.
<nitrix>
So, kudos to me for realising, but at the same time, kinda embarrassing to follow in someone else's footsteps and read about all the work that happened with the language :P
baskethammer has joined #lisp
<verisimilitude>
It could be worse: You could've created Python, nitrix.
FreeBirdLjj has joined #lisp
<nitrix>
verisimilitude: Actually I was very close to creating another LabVIEW...
* nitrix
chuckles.
<verisimilitude>
There's something you should be aware of, nitrix.
<verisimilitude>
I like to think of Lisp as a language good for when you don't know precisely what you're doing.
<verisimilitude>
Two large truths can be derived from this that I'll mention.
<verisimilitude>
The first is if you do have a very well-developed idea of what you're doing, you may want to choose a different language; perhaps you, as I have, developed this idea with Lisp first before moving on to another language.
<verisimilitude>
The second is that you can erect a monument that works, in spite of its horrible design, but is horribly ugly. That is, it's very easy to experiment and wind up with a monster.
<verisimilitude>
That is, you'll get much further with Lisp than you would, say, C when it comes to writing something poorly.
<nitrix>
That's true of every language though. I find that part less frustrating than the moments where I feel constrained by the limitations of the language I'm using.
<verisimilitude>
Anyway, that's my thoughts.
<verisimilitude>
Oh, yes, just be aware is all. I have a monster I'm going to reimplement simpler with Ada soon.
torbo has joined #lisp
<torbo>
#emacs
torbo has left #lisp [#lisp]
<verisimilitude>
On the bright side, I know the Ada will be more than efficient enough, because this monster was wildly inefficient and even interpreting its own language being interpreted by CLISP it was more than fast enough.
FreeBirdLjj has quit [Ping timeout: 240 seconds]
<nitrix>
verisimilitude: It's probably just a phase, sort of, maybe. If all goes to hell, I can still go back to Go or C. I'm not sure I want to do Haskell again. That's another one where you can build abstraction monsters :)
myrkraverk has joined #lisp
torbo has joined #lisp
<verisimilitude>
What is it, torbo?
Lord_of_Life has quit [Ping timeout: 250 seconds]
<torbo>
Sorry -- typo.
<nitrix>
** At some point, I need to build things with the toy languages and crap that I'm designing, and I think this is where Lisp will force my hand.
<verisimilitude>
Oh, well you have me asking in #emacs what the commotion was about.
<verisimilitude>
Oh, on the topic of C, did you read about that SQLite bug yet, nitrix?
SaganMan has joined #lisp
Lord_of_Life has joined #lisp
<nitrix>
The news seems to make it bigger than it was when I saw the fix commits.
<nitrix>
I still view SQLite as one of the best tested projects out there.
<nitrix>
It's like the Alcatraz of the software world :P
<nitrix>
Mhhh, maybe I should've used a car analogy. Those always work.
<verisimilitude>
I found this section most important:
<verisimilitude>
``prior to 3.25.3, appear to contain an integer overflow bug which can be triggered by manually modifying the fts index data. A careful application of this integer overflow appears to make it possible to truncate a writable buffer, leading to a nice heap overflow condition''...
<verisimilitude>
You're not going to need to worry about integer overflow or heap overflow in Common Lisp.
<nitrix>
Yeah, and the commit was saying something like "fixing an exploit for a carefully crafted corrupt database".
<verisimilitude>
It's a great example of how C is seemingly designed to be impossible to program correctly in.
<nitrix>
I didn't read all the details, but from the devs end, it was worded as something that would not happen naturally.
mrcom has quit [Read error: Connection reset by peer]
<verisimilitude>
I have a simple idea that I believe should apply to software, nitrix:
<verisimilitude>
If something is undesirable in every circumstance, make it impossible.
<verisimilitude>
Memory corruption is never desirable, unless you're employed by a security agency, so a language can eliminate many classes of errors by simply making it impossible.
<verisimilitude>
But there's not a single flaw in C that hadn't already been fixed elsewhere by the time it was created, really.
<nitrix>
verisimilitude: I've heard that point of view many times and debated it from both sides as well. I somewhat agree, which is why I prefer strictly typed functional languages, but while bugs like these are miserable, honestly any project in any language reveals absurd bugs at some point. If the tests are thorough, you mitigate the risks and it's fine.
<nitrix>
I don't like classifying things as either good or bad. There's already too much of that happening in our industry.
neirac has quit [Remote host closed the connection]
<nitrix>
It's not binary. People accomplish amazing things with C that have yet to be repeated in other languages, so it'll take a lot more for me to strike it off :)
<nitrix>
verisimilitude: Here's a funny one; we have a service at work written in node.js, and one of the dependency packages got infected by a bitcoin miner. Our servers have been mining bitcoin on production for 6 months without us noticing.
<no-defun-allowed>
yeah, i heard they've gone around infecting node packages
<nitrix>
The tests passed. As far as we were concerned, everything looked good.
<nitrix>
Clever monkeys.
<Bike>
you added a new test that you weren't mining bitcoin, i'm sure
<hectorhonn>
are there malicious packages like that in quicklisp?
elderK has joined #lisp
<nitrix>
Not really. But those packages lost their privilege of us blindly upgrading their patch versions. It's locked to major versions, with something absurd like 20 code owners, so that it doesn't slip again.
<nitrix>
I wasn't familiar with the idea of trust as a part of the development equation. Yay, learning!
<beach>
Good morning everyone!
<nitrix>
(good 'ning beach)
<verisimilitude>
I think it's a bit of a jump to compare a bitcoin miner hidden in code that wasn't properly audited to a language in which mistakes give rise not to exceptions or program death but to silent vulnerabilities.
<beach>
nitrix: Knowing statically when some allocated object is no longer used requires solving the halting problem.
<verisimilitude>
I inspect code I use, hectorhonn; you'd do well to do the same.
<verisimilitude>
Now, Quicklisp, with its QUICKLOAD, doesn't have the option to just download code without loading it, right?
<verisimilitude>
If it does, it's certainly not advertised.
<no-defun-allowed>
that's also a very good argument for gc, beach
<hectorhonn>
:O
<hectorhonn>
verisimilitude: then its just like piping code into the shell, no?
<beach>
nitrix: Furthermore, there is no reason to be afraid of tracing garbage collectors. A modern garbage collector is faster than malloc/free.
<verisimilitude>
Effectively, yes, hectorhonn.
<beach>
nitrix: It is easy to think that malloc/free are efficient, but they have to do a lot more work than a garbage collector does.
<hectorhonn>
i suppose npm install does the same
<no-defun-allowed>
good gcs, for almost every alloc, just have to bump a pointer, unlike malloc/free which usually retain free trees or lists or the like
FreeBirdLjj has joined #lisp
<nitrix>
beach: It doesn't need to be that crazy. Even without knowing exactly when something will no longer be used, you can still guarantee in some cases that once it is no longer used it can be collected. The discussion earlier was about which of those cases are deterministic and which ones cannot be determined without adding reference counting or a GC.
<nitrix>
beach: Obtaining memory pages, chopping them into slabs of various sizes, preventing fragmentation, sure.
<beach>
Oh dear.
<nitrix>
You don't alleviate that cost with a GC though; you simply amortize it if you use strategies like generational nurseries and stuff.
torbo has left #lisp ["ERC (IRC client for Emacs 26.1)"]
<verisimilitude>
I read something interesting that bashed generational GC on the basis of not being as good as many go around claiming.
<nitrix>
Anyway. I prefer bringing that discussion back once I understand lisp better. It'll be more productive :)
<verisimilitude>
For what it's worth, I'd mostly care about those objects that can be allocated on the stack and those that can't; machines are more than fast enough already.
<verisimilitude>
A more complex GC machinery has more potential for flaws.
FreeBirdLjj has quit [Ping timeout: 250 seconds]
<nitrix>
verisimilitude: You may be able to deterministically insert a deallocation for something that was allocated earlier though. I don't know enough of the details yet, but sometimes playing with the semantics just a little bit is enough to get rid of a GC.
<verisimilitude>
Oh; Common Lisp tends to work on a per-function basis, rather than global program optimization, nitrix.
<nitrix>
verisimilitude: e.g. lambda that'd copy the whole environment instead of chaining them.
<verisimilitude>
Of course, all code is a function already, but you get the idea.
<verisimilitude>
I don't follow that last message, nitrix.
<beach>
nitrix: Like I said, manual memory management is slower than automatic memory management.
<beach>
Furthermore, as Paul Wilson says, liveness is a global property, so with manual memory management, it is much harder to obtain modular code.
<nitrix>
verisimilitude: lambdas are a common source of memory aliasing, since they capture the variables that are in scope, so if you're doing escape analysis, you detect that you cannot allocate the captured variables on the stack; they have to be heap-allocated, and thus you cannot simply deallocate them at the end of the procedure, because your lambda may still use them and you'll be giving the lambda to other parts of the program.
<verisimilitude>
You may be interested to know that Common Lisp permits declaring aspects of a program, such as whether it should or can't be inlined, and that one of these is DYNAMIC-EXTENT, which lists variables that don't live past the dynamic extent of the function, nitrix.
torbo has joined #lisp
<verisimilitude>
Or, rather, the dynamic extent of the unit with the declaration.
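A minimal sketch of such a declaration (the function name is invented for illustration): the programmer promises the list never escapes the call, so an implementation such as SBCL may stack-allocate it and skip heap allocation entirely.

```lisp
;; DYNAMIC-EXTENT promises that L does not outlive this call,
;; so the implementation is free to allocate it on the stack.
(defun sum-of-three (x y z)
  (let ((l (list x y z)))
    (declare (dynamic-extent l))
    (reduce #'+ l)))
```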
<nitrix>
beach: Wise words, I like it.
<nitrix>
Actually... has anyone tried a tricolor gc for any lisp yet?
<nitrix>
o:
<beach>
Probably.
<nitrix>
I'd be the first!
<verisimilitude>
In a manually-managed environment, the conventional wisdom is to leave allocation to the caller.
<nitrix>
Shush, haha.
<Bike>
i mean, probably, it's the first gc algorithm anyone learns practically
<nitrix>
Not a tracing, a tricolor.
<beach>
nitrix: However, the free implementations are quite old, so they often do not have state-of-the-art garbage collectors.
<beach>
nitrix: tricolor gc counts as tracing.
<beach>
Bike: Nah, mark-and-sweep is usually the first one in the literature.
<nitrix>
Fine. Gosh I need to step up my pedant side. Haven't been on IRC in a while.
<Bike>
oh yeah
<nitrix>
beach: What I meant is your basic mark-and-sweep & stop-the-world gc, vs. the concurrent tricolor ones.
<beach>
I understand.
<verisimilitude>
I'd be interested in an O(1)-memory GC, but this seems like something that's not possible, and I simply lack a good proof of its impossibility.
<beach>
verisimilitude: What does that even mean?
<nitrix>
Reference counting is O(1).
<no-defun-allowed>
realtime gc?
<nitrix>
But it has the problem of not dealing well (or at all) with cycles.
<no-defun-allowed>
rc isn't O(1) since there are dependencies involved
<nitrix>
Though... can you form cycles with lisp?
<verisimilitude>
What I mean is a garbage collector that can manage an arbitrary amount of objects with a constant memory usage.
<beach>
nitrix: Of course!
<no-defun-allowed>
rc is a garbage collector like a kid's crawling car thing is a sports car
<verisimilitude>
It seems like O(N) with a bit per object is the smallest practical memory usage for a GC, without going to single bit multiple object methods.
mrcom has joined #lisp
<verisimilitude>
You still store the reference count in reference counting, nitrix.
<nitrix>
That's interesting. I had never considered the bit thing. Imagine a bloom filter gc :D
<verisimilitude>
This is basically the oldest GC method, nitrix.
<verisimilitude>
You just reserve a bit to mark it against later sweeping.
<nitrix>
That's mark and sweep.
<verisimilitude>
Yes.
<nitrix>
What you suggested was cooler. Like a bitset where you do bloom filter of the memory addresses on it.
<nitrix>
So that you can probe it later to know if it's in use or not.
<verisimilitude>
Oh, you mean where I suggested using a single bit for multiple data?
<verisimilitude>
That just arose from my thinking that a bit per object was as small as it could get, being O(N), before realizing I could have a bit for two objects and just use redundant testing; then you can take that as far as you want, with a ``fractional'' bit per object.
jack_rabbit_ has quit [Ping timeout: 250 seconds]
<no-defun-allowed>
anyway, point is rc isn't particularly efficient, isn't good with cycles, and isn't O(1), because you have to recurse down and update the pointed-to objects' refcounts too
<nitrix>
You'd have no false positives, but possibly some false negatives, which is maybe okay. Memory that is no longer used but has to stay because of other values mapping to the same bits.
<nitrix>
I wonder if that'd be practical.
<no-defun-allowed>
sounds similar to a conservative gc, but even more conservative lol
<no-defun-allowed>
i guess a heuristic i had in mind is "if some values are live in a page, odds are the rest of the page is also live" but you'd probably still want to trace those values down too which might be slower
<verisimilitude>
All that would be necessary, nitrix, is, say, shuffling data on top of dead data so the single bit for the multiple data is accurate again.
<nitrix>
verisimilitude: If you take it really far, you're looking up the address in a tree, but then you gain your O(log n) back :P
<no-defun-allowed>
cause you'd have to scan those false negatives too
<verisimilitude>
Yes; I never considered a page as a unit, but you could have a single bit for an entire page, as a sort of ``rubble GC'' as in, ``Well, someone's still alive in there.''.
<nitrix>
That leads to fragmentation and compacting GCs :/
<verisimilitude>
But, this seems like the only method available for reducing the memory a GC requires.
<verisimilitude>
At the same time, there are other memory considerations, such as using less memory overall, which this could get in the way of.
<nitrix>
Do people worry about memory usage?
<verisimilitude>
So, I only regarded it as a neat idea.
<nitrix>
I'm normally more worried about pause time.
<verisimilitude>
That's part of why I'm rewriting a Lisp program in Ada soon, nitrix.
<verisimilitude>
I look at the ECL process and it's consuming thirty megabytes for my small program.
<verisimilitude>
I've done the math and the program could easily consume less than a quarter of a megabyte.
<no-defun-allowed>
hmm, admittedly the more memory free you have the less often you have to GC
<no-defun-allowed>
i don't buy into the "gc means you use 128301298312093810239 times more memory" stuff though, it's just a convenient tradeoff since using unused RAM is more convenient than using CPU time you probably wanted to use for other code
<no-defun-allowed>
*1283...239 is keyboard mashing, not an actual statistic, but i swear people have told me there's a 3 to 5x overhead citing some java writeup or something
rumbler31 has joined #lisp
<verisimilitude>
Well, I wasn't claiming that, no-defun-allowed.
<nitrix>
I need two things. No overhead for FFI with C and low pauses GC.
<nitrix>
Then I'm a happy lisper :)
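For the FFI half of that wish list, a commonly used option is the CFFI library; a sketch, assuming CFFI is already loaded (e.g. via Quicklisp), and using `:unsigned-int` as a simplification of C's `size_t` return type:

```lisp
;; Sketch: calling C's strlen through CFFI.
;; C-STRLEN is an illustrative name; :unsigned-int stands in
;; for size_t here for simplicity.
(cffi:defcfun ("strlen" c-strlen) :unsigned-int
  (s :string))

(c-strlen "hello")   ; => 5
```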
<no-defun-allowed>
additionally, sbcl also carries the compiler, debugger and other nice stuff in memory so it'll be fairly large
<verisimilitude>
Yes.
<no-defun-allowed>
i have a sbcl image just started with only ql loaded and it seems to be at 29.6mb after a full gc
rumbler31 has quit [Ping timeout: 250 seconds]
dddddd has quit [Remote host closed the connection]
emaczen has quit [Ping timeout: 250 seconds]
<nitrix>
Bed time. Thanks for the chat, ideas, tinkering and all :)
<nitrix>
Cheers, good night!
torbo has quit [Remote host closed the connection]
sauvin has joined #lisp
jack_rabbit_ has joined #lisp
<elderK>
Lo all :)
<beach>
Hello elderK.
<elderK>
Helloha Beach!
<beach>
Kia Ora
ggole has joined #lisp
<elderK>
:D
varjag has joined #lisp
Oladon has quit [Quit: Leaving.]
varjag has quit [Ping timeout: 272 seconds]
baskethammer has quit [Quit: WeeChat 1.9.1]
baskethammer has joined #lisp
igemnace has joined #lisp
igemnace has quit [Client Quit]
torbo has joined #lisp
FreeBirdLjj has joined #lisp
<pfdietz>
verisimilitude: DDR4 is going for less than $0.01 per megabyte, so I don't think a 30MB footprint is terribly bothersome.
FreeBirdLjj has quit [Ping timeout: 268 seconds]
torbo has quit [Remote host closed the connection]
Xof has quit [Ping timeout: 246 seconds]
<verisimilitude>
It's satisfying to write an efficient program, pfdietz.
vlatkoB has joined #lisp
baskethammer has quit [Quit: WeeChat 1.9.1]
dale has quit [Quit: dale]
themsay has quit [Ping timeout: 246 seconds]
themsay has joined #lisp
nchambers has quit [Quit: WeeChat 2.2]
resttime has quit [Ping timeout: 246 seconds]
Inline has quit [Quit: Leaving]
Bike has quit [Quit: Lost terminal]
<elderK>
verisimilitude: I agree with you.
<elderK>
I also think the "Eh, memory is cheap" thing is lame.
Necktwi has joined #lisp
<beach>
I don't know what you agree with, but it's all a matter of cost of computing resources vs cost of human resources.
<beach>
At some point, making the code smaller, faster, whatever, is going to take so much more human effort that the gain in terms of smaller size and higher speed is going to be dwarfed by the additional human resources that need to be put in, both initially and for maintenance.
rnmhdn has joined #lisp
<beach>
My favorite example from industry of making the wrong decision is "We need all the speed we can get, so we choose C++ for this project".
<elderK>
beach: That's true. Perhaps I joined the conversation late. You have to be smart about it, for sure.
<elderK>
Optimizing to the Nth degree is not always worth it - no matter what you're optimizing.
<elderK>
As always, profile. And do it only when necessary.
<elderK>
beach: And aye - I've seen that "reason" for C++, too. Usually in web situations where something else would be much... better.
<ggole>
Having tools remove unused code isn't much of a burden
<ggole>
However, 'unused code' is a more slippery concept in Lisp
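One hedged illustration of why 'unused code' is slippery in Lisp: a function can be reached dynamically through the reader at runtime, so a tree-shaker cannot prove it unused from the program text alone.

```lisp
;; A toy example: this call site never names CL:+ textually, yet
;; reaches it at runtime, so dead-code analysis cannot discard +.
(funcall (symbol-function (read-from-string "CL:+")) 1 2) ; => 3
```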
<beach>
And it's often worse than that. Typically people who know nothing about good algorithms and data structures are the ones who go for micro optimizations, presumably because that's all they know. So they waste orders of magnitude more computing resources AND human resources on their quest for performance.
<elderK>
beach: Agreed! I was going to say that.
<elderK>
People will spend hours optimizing say, a linked list thing.
<elderK>
When they should just say F@!#$ the list, let's use a hash table.
<elderK>
Or some kind of balanced tree.
<elderK>
I saw that at my last workplace. For some reason, everyone used lists for everything - even when they were searching for things in the list constantly. The list had tens of thousands of entries...
<elderK>
As always: Pick the right data structure.
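The trade-off elderK describes can be sketched in a few lines of standard Common Lisp; the sizes here are illustrative, not benchmarks:

```lisp
;; Searching a list is O(n) per lookup; a hash table lookup is O(1)
;; on average. With tens of thousands of entries, that gap dominates.
(defparameter *items-list* (loop for i below 10000 collect i))
(defparameter *items-table*
  (let ((h (make-hash-table)))
    (loop for i below 10000 do (setf (gethash i h) t))
    h))

(member 9999 *items-list*)   ; walks ~10000 cons cells
(gethash 9999 *items-table*) ; one hash computation and probe
```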
robdog_ has joined #lisp
FreeBirdLjj has joined #lisp
FreeBirdLjj has quit [Ping timeout: 250 seconds]
quazimodo has quit [Remote host closed the connection]
orivej has joined #lisp
<verisimilitude>
What I'm doing, elderK, is planning a simpler reimplementation of a Common Lisp program in Ada 2012.
<verisimilitude>
The Common Lisp program works, but bears the vestiges of several rewrites, as I was experimenting with it, and many tangential things that I'm going to discard. Despite everything, it's still instantaneous, even under CLISP, so I know it will be more than fast enough under Ada, more than fast enough for it to support all of my preconditions, postconditions, type invariants, and assertions.
<verisimilitude>
I also wanted to write a program that can properly survive memory exhaustion and other things, along with consuming an appropriate amount of memory, being usable on a machine with an OS that lacks a Common Lisp implementation I've been able to compile, and being easier to distribute in a compiled form.
<verisimilitude>
So, I can assure you I'm not some idiot doing this because ``It's fast!'' or any such thing.
<verisimilitude>
That this is a machine code development tool would only make it worse if it were inefficient, like using Electron for a text editor.
<verisimilitude>
The programs that are purely for function that I've written are in no state for distribution and so they stay with me and me alone. The programs I distribute I want to be able to be proud of and this is one of them.
<jackdaniel>
wow, easy satan, this will make ASDF error while it locates the system, no?
<heisig>
phoe: An excellent read. One suggestion: When warning people of printing large arrays to the REPL, you could mention the special variables *print-length* and *print-circle*.
<phoe>
jackdaniel: yep, it will
<phoe>
heisig: thanks! Will do in a moment.
<phoe>
heisig: I'll mention *print-length* since it directly applies to my case. The vector in question is not circular.
<phoe>
Made the change.
<shka_>
i put comments in the comments
<heisig>
phoe: Oh, I made a typo. I meant *print-level* instead of *print-circle*. But *print-circle* is useful, too :)
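The three printer variables heisig mentions can be sketched in standard Common Lisp; elision uses `...` per the standard:

```lisp
;; *PRINT-LENGTH* caps how many elements of a sequence are printed:
(let ((*print-length* 5))
  (prin1-to-string (make-array 100 :initial-element 0)))
;; => "#(0 0 0 0 0 ...)"

;; *PRINT-LEVEL* caps nesting depth, and *PRINT-CIRCLE* makes circular
;; structure printable at all (otherwise printing need not terminate):
(let ((*print-circle* t)
      (l (list 1 2 3)))
  (setf (cdddr l) l)          ; make L circular
  (prin1-to-string l))        ; => "#1=(1 2 3 . #1#)"
```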
<jackdaniel>
;;; foo #|bar is ;;very #+(or)cryptic #|?|# |# ?
<phoe>
jackdaniel: ewww
rumbler31 has joined #lisp
<jackdaniel>
I put comments in comments too!
<phoe>
(:
<phoe>
shka_: "C pointer of such a vector" is more correct in my opinion. We don't get the pointer to the vector itself; we get the pointer to the data region of that vector. Using C notation, it's not a simple &lisp_vector; it's actually lisp_vector.data
<shka_>
phoe: C does not care
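A minimal sketch of the distinction phoe draws, assuming the static-vectors and CFFI libraries are loaded; libc's `memset` stands in for any C callee, and the size type is assumed to match `:unsigned-long`:

```lisp
;; Sketch, assuming (ql:quickload '("static-vectors" "cffi")).
(static-vectors:with-static-vector (v 16 :element-type '(unsigned-byte 8)
                                        :initial-element 7)
  ;; PTR points at the vector's data region (lisp_vector.data),
  ;; not at the Lisp object itself (&lisp_vector).
  (let ((ptr (static-vectors:static-vector-pointer v)))
    ;; Hand it to any C function expecting a raw buffer:
    (cffi:foreign-funcall "memset" :pointer ptr :int 0
                                   :unsigned-long 16 :pointer)
    (aref v 0))) ; C wrote through the same memory, so this is now 0
```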
random-nick has joined #lisp
orivej has quit [Ping timeout: 245 seconds]
<phoe>
shka_: also, yes, luck is a matter, since I could stumble upon a library that is not designed with external buffers in mind. mfiano just encountered that in SDL2 which makes it very troublesome to use static-vectors.
rnmhdn has quit [Ping timeout: 250 seconds]
<shka_>
design is never a matter of luck
<phoe>
OK, I agree with that one
rnmhdn has joined #lisp
<phoe>
"I think that you may need to warn reader of low level differences between C integer and lisp fixnum (among other things)." It's not the case with uint8-specialized static vectors.
<shka_>
it is not indeed
<phoe>
OK - I have applied some of your suggestions then.
<akanouras>
jackdaniel: Imagine a total newbie (me) trying to quickload cl-async in CLISP and getting a very cryptic error (with no mention of static-vectors iirc) because of a missing argument to :pathname ...
shka_ has quit [Quit: WeeChat 1.9.1]
<beach>
akanouras: Maybe some other implementation gives better messages. Do you have any particular reason for using CLISP?
<beach>
akanouras: And, why are you telling jackdaniel about this?
<verisimilitude>
Why are you, as a ``total newbie'', using cl-async, akanouras?
<ogamita>
clisp gives good messages in general. Even localised messages, useful to a lot of newbies.
<ogamita>
But verisimilitude is asking the right question.
<akanouras>
Just responding to jackdaniel's earlier comment when he saw the (error) in static-vectors.asd 🙂
robotoad has quit [Quit: robotoad]
<verisimilitude>
If you're actually new to the language, you should just use the standard language, akanouras, and leave libraries such as this for later, I'd think.
<akanouras>
I had just found out about Common Lisp, and was exploring CL implementations and ways to program network applications. I think it took me quite a while to locate static-vectors and sent the commit to help others who would get in the same situations in the future.
<pfdietz>
Perilously close to blaming the user here. If a project is not new-user friendly, that's IMO always an issue for the project. Maybe using it was ill advised, but still.
<akanouras>
Btw, I already had experience with python-gevent and was trying to find something similar in CL land.
<jackdaniel>
beach: it is more a fault of static-vectors asd definition
<jackdaniel>
it has #.(error "cryptic error") for unsupported implementations
<verisimilitude>
I just meant so one would understand the reason for errors better, etc., pfdietz.
<beach>
jackdaniel: Got it.
varjag has quit [Quit: ERC (IRC client for Emacs 25.2.2)]
Lycurgus has quit [Quit: Exeunt]
Bike has quit [Quit: Lost terminal]
FreeBirdLjj has joined #lisp
FreeBirdLjj has quit [Ping timeout: 250 seconds]
robdog has joined #lisp
robotoad has joined #lisp
jack_rabbit_ has quit [Ping timeout: 268 seconds]
gravicappa has joined #lisp
mhd2018 has joined #lisp
<phoe>
aeth: I think you suggested that split-sequence is slow on lists
Kaisyu has quit [Quit: Connection closed for inactivity]
rnmhdn has joined #lisp
<pjb>
This is why you should either write your own libraries, or read all the dependencies you bring in before using them. Clearly, writing your own is most often the less expensive option.
<mfiano>
This isn't a dependency.
<pjb>
It was about split-sequence.
<pjb>
mfiano: the first question is why you have vectors of vectors instead of 2D arrays?
<pjb>
Then, with arrays, you can just play with indices.
<mfiano>
I don't control that...it is provided by a third-party library as its output, and a displacement would not be able to be optimized as I'd like
<pfdietz>
I think fast transpose algorithms tend to be block oriented, to improve cache locality.
<pjb>
read or write ?
<pjb>
My write-cache is optimized.
<pfdietz>
Both?
<pjb>
You can't :-)
<pjb>
That's the problem with vectors of vectors.
<pjb>
With a 2D arrays you could do it recursively.
<pjb>
block by block.
<pfdietz>
Yes
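The block-by-block idea pfdietz and pjb are discussing can be sketched for a plain 2D array in standard Common Lisp; the block size is an illustrative guess, not a tuned value:

```lisp
(defun blocked-transpose (a &key (block-size 32))
  "Transpose 2D array A block by block to improve cache locality."
  (destructuring-bind (rows cols) (array-dimensions a)
    (let ((b (make-array (list cols rows)
                         :element-type (array-element-type a))))
      ;; Walk the array in BLOCK-SIZE x BLOCK-SIZE tiles so that both
      ;; the reads from A and the writes to B stay within a few cache
      ;; lines at a time.
      (loop for i0 from 0 below rows by block-size do
        (loop for j0 from 0 below cols by block-size do
          (loop for i from i0 below (min rows (+ i0 block-size)) do
            (loop for j from j0 below (min cols (+ j0 block-size)) do
              (setf (aref b j i) (aref a i j))))))
      b)))
```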
Mr-Potter has joined #lisp
wusticality has quit [Ping timeout: 246 seconds]
rnmhdn has quit [Ping timeout: 246 seconds]
rnmhdn has joined #lisp
ggole has quit [Quit: ggole]
orivej has joined #lisp
random-nick has quit [Remote host closed the connection]
orivej has quit [Ping timeout: 246 seconds]
FreeBirdLjj has joined #lisp
jmercouris has quit [Remote host closed the connection]
FreeBirdLjj has quit [Ping timeout: 244 seconds]
wusticality has joined #lisp
random-nick has joined #lisp
orivej has joined #lisp
robotoad has quit [Quit: robotoad]
orivej has quit [Ping timeout: 246 seconds]
milanj has quit [Quit: This computer has gone to sleep]
shka_ has joined #lisp
orivej has joined #lisp
orivej has quit [Ping timeout: 246 seconds]
wusticality has quit [Ping timeout: 240 seconds]
random-nick has quit [Read error: Connection reset by peer]
meepdeew has joined #lisp
SoftDed has joined #lisp
SoftDed has quit [Excess Flood]
meepdeew has quit [Remote host closed the connection]
FreeBirdLjj has joined #lisp
rnmhdn has quit [Ping timeout: 240 seconds]
rnmhdn has joined #lisp
FreeBirdLjj has quit [Ping timeout: 250 seconds]
rnmhdn has quit [Ping timeout: 246 seconds]
shifty has quit [Ping timeout: 250 seconds]
random-nick has joined #lisp
rnmhdn has joined #lisp
wusticality has joined #lisp
robotoad has joined #lisp
rnmhdn has quit [Ping timeout: 246 seconds]
rnmhdn has joined #lisp
torbo has joined #lisp
rnmhdn has quit [Ping timeout: 250 seconds]
rnmhdn has joined #lisp
myrkraverk has quit [Ping timeout: 250 seconds]
rnmhdn has quit [Ping timeout: 272 seconds]
nanoz has quit [Ping timeout: 272 seconds]
rnmhdn has joined #lisp
<phoe>
I have a "parent" thread that spawns "child" threads. Any of the child threads may signal an error. How can I propagate this error to be handled in the parent thread?
scymtym has joined #lisp
orivej has joined #lisp
<phoe>
I want the child to die if such is the case, and the error to be signaled in the parent.
<verisimilitude>
Doesn't BORDEAUX-THREADS provide an INTERRUPT-THREAD function?
<phoe>
Sure it does, I just wonder if there is a different way of achieving that.
<verisimilitude>
From my last reading of the documentation, that's the only means to get this manner of behavior from it I recognized.
<phoe>
Currently, inside the parent, I ignore the return value of the thread function. At the same time, parent waits for all children to join. Perhaps I can create a handler that simply returns the condition object on error, and signal that condition once it is in the parent thread.
rnmhdn has quit [Ping timeout: 250 seconds]
rnmhdn has joined #lisp
shka_ has quit [Ping timeout: 272 seconds]
<pjb>
phoe: the parent thread can call thread-join on the child thread to collect the result.
<phoe>
pjb: yes, I will use that one.
<pjb>
phoe: otherwise, you can create mailboxes or other communication channels between your threads.
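A hedged sketch of the join-based approach phoe settles on, assuming bordeaux-threads is loaded: the child traps the condition and returns it as the thread's value, and the parent resignals it after joining. The function names are illustrative, not from any library.

```lisp
;; Sketch, assuming (ql:quickload "bordeaux-threads").
(defun spawn-child (fn)
  "Run FN in a new thread; a condition is returned instead of unwinding."
  (bt:make-thread
   (lambda ()
     (handler-case (funcall fn)
       (error (c) c)))))

(defun join-child (thread)
  "Join THREAD; if it returned a condition, resignal it in this thread."
  (let ((result (bt:join-thread thread)))
    (if (typep result 'condition)
        (error result)   ; ERROR accepts a condition object and resignals it
        result)))
```

With this, `(join-child (spawn-child (lambda () (error "boom"))))` signals the child's error in the parent, while a normal return value passes through unchanged.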
Cymew has joined #lisp
warweasle has quit [Quit: rcirc on GNU Emacs 24.4.1]
varjag has joined #lisp
Cymew has quit [Client Quit]
wusticality has quit [Ping timeout: 246 seconds]
pierpal has quit [Ping timeout: 244 seconds]
<phoe>
Something's wrong.
<phoe>
I wrote seventy lines of Lisp that utilize bordeaux-threads to parallelize another two hundred and eighty lines of Lisp code.
<phoe>
It all worked on the first try and passed all the tests.
wusticality has joined #lisp
jack_rabbit_ has joined #lisp
torbo has quit [Remote host closed the connection]
<pjb>
Oops!
<pjb>
Be worried!
<pjb>
Write more tests!
<phoe>
I did
<phoe>
And it passed them too
<phoe>
I am highly worried
<pjb>
Add more threads, change number of cores?
<pjb>
You can run on qemu or virtual box simulating different numbers of cores.
<pjb>
Next step: try to prove it formally!
<pjb>
And prepare for a debugging session with very long ping times.
mhd2018 has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
mhd2018 has joined #lisp
Zelmin has joined #lisp
rnmhdn has quit [Ping timeout: 246 seconds]
wusticality has quit [Ping timeout: 250 seconds]
FreeBirdLjj has joined #lisp
FreeBirdLjj has quit [Ping timeout: 246 seconds]
nicksmaddog has quit [Quit: Leaving]
vlatkoB has quit [Remote host closed the connection]
razzy has joined #lisp
m0w has quit [Ping timeout: 250 seconds]
rippa has quit [Quit: {#`%${%&`+'${`%&NO CARRIER]
atgreen has quit [Ping timeout: 240 seconds]
mrcom has quit [Read error: Connection reset by peer]
Mr-Potter has quit [Quit: Leaving]
scymtym has quit [Ping timeout: 250 seconds]
wusticality has joined #lisp
FreeBirdLjj has joined #lisp
wusticality has quit [Ping timeout: 240 seconds]
FreeBirdLjj has quit [Ping timeout: 250 seconds]
<zigpaw>
or... be an optimist and assume you got everything alright on the first try :P :D
random-nick has quit [Ping timeout: 246 seconds]
<margaritamike>
Folks thanks to your help, one of the most popular online judges right now -- Kattis -- now supports SBCL 1.4.5!
<margaritamike>
Now you can solve all of the lovely algorithmic problems that the platform supports with SBCL, including the problems from recent ICPC contests!
ebrasca has quit [Remote host closed the connection]
dyelar has quit [Quit: Leaving.]
<verisimilitude>
What is this Kattis, margaritamike?
<margaritamike>
The great rejuvenation is upon us 8)
<margaritamike>
verisimilitude: It's a platform for solving toy algorithmic problems, and they can get quite intense.
<margaritamike>
There are constraints placed on you as well.
<margaritamike>
Your solution has to run, and finish, below a certain memory limit and under a certain amount of time.
<margaritamike>
It's compared against a large number of test cases for the given problem to ensure whether it's correct or not.
<margaritamike>
And there's a ranking system!
<margaritamike>
Additionally, you can see how your submission compared against others, in other languages, in terms of time taken to finish running -- if it works -- and the memory footprint.
<margaritamike>
It's Google Code Jam, but year-round.
<margaritamike>
These kinds of platforms are usually used by highschoolers, university students, and hobbyists.
<margaritamike>
However, the languages used are those you typically see in industry: C++ (the most popular and usually the speediest), C, Java, Python 2/3.
<margaritamike>
Highschool -> IOI; Collegiate -> ICPC
<verisimilitude>
So, I take it by adding SBCL support, they remove the memory minimum it imposes and use that to calculate program memory consumption?
<margaritamike>
With this language available on this platform to people who are learning algorithms, or doing this as a hobby or whatever, it allows for them to be exposed to using Common Lisp in a sandboxed environment to really take their stab at the ins and outs of the language. With these problems needing various algos and data structs, participants will have to explore at least a fair bit of the language.
<verisimilitude>
In any case, that Common Lisp is available is interesting, yes, margaritamike.
* margaritamike
pops champagne
robotoad has quit [Ping timeout: 246 seconds]
robotoad has joined #lisp
gravicappa has quit [Remote host closed the connection]
<margaritamike>
Hopefully the Codeforces platform will add it next, another highly regarded platform.
gxt has quit [Ping timeout: 268 seconds]
mrcom has joined #lisp
<margaritamike>
Now I need to setup my emacs to work with Common Lisp better. Need stuff like auto-complete and documentation lol.
pierpal has joined #lisp
wusticality has joined #lisp
akoana has left #lisp ["Leaving"]
zmv has joined #lisp
notzmv has quit [Ping timeout: 246 seconds]
<pjb>
margaritamike: thanks. Perhaps you could write a page on http://cliki.net for future newbies?
<margaritamike>
pjb: thank you for the help :))
<pjb>
margaritamike: and link it from https://cliki.net/Exercices below "Some automatic on-line programming series accept lisp submissions, or lisp produced results, including".
wusticality has quit [Ping timeout: 250 seconds]
<verisimilitude>
You could always use CLISP purely for its Readline support, margaritamike.
pierpal has quit [Ping timeout: 240 seconds]
pierpal has joined #lisp
atgreen_ has joined #lisp
atgreen has quit [Ping timeout: 245 seconds]
<margaritamike>
pjb: done ;)
<margaritamike>
Well didn't write a page, but added it to exercises