cfbolz changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://quodlibet.duckdns.org/irc/pypy/latest.log.html#irc-end ) | use cffi for calling C | if a pep adds a mere 25-30 [C-API] functions or so, it's a drop in the ocean (cough) - Armin
<nanonyme>
Anyone present to help with a CFFI question? If I wanted to create a memory buffer in CFFI which I expect to end up being memory-managed by Python afterwards, what primitives should I use?
<nanonyme>
Or rather create the buffer in C function called through CFFI.
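(A minimal sketch of the two usual answers to this question, assuming a POSIX system with the cffi package available: `ffi.new()` allocates memory owned by Python, and `ffi.gc()` hands C-allocated memory over to Python's memory management.)

```python
import cffi

ffi = cffi.FFI()

# Python-managed buffer: ffi.new() returns a cdata object that owns the
# memory; it is freed automatically when the object is garbage-collected.
buf = ffi.new("char[]", 16)
assert len(ffi.buffer(buf)) == 16

# For memory allocated inside a C function (e.g. via malloc), ffi.gc()
# attaches a destructor so Python's GC frees it when the cdata dies.
ffi.cdef("void *malloc(size_t); void free(void *);")
libc = ffi.dlopen(None)          # the C library itself, POSIX only
raw = libc.malloc(64)
managed = ffi.gc(raw, libc.free)  # freed when 'managed' is collected
assert managed != ffi.NULL
```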
<marky1991>
possibly silly question, but is it possible to implement true concurrency using rpython?
<marky1991>
right now, I've got an interpreter for my language that works fine, but now I'm thinking about how I can do concurrency
<marky1991>
I know I can just support multiprocessing trivially, but is it possible for an interpreter written in rpython to support meaningful multithreading?
<marky1991>
I can't imagine how it would be, but rpython is sophisticated and I'm not
<marky1991>
this is all for my own personal interpreter for my own personal programming language, so no constraints there.
<simpson>
Depends on what "true" means. select.select() is one version and it's available. POSIX threads are another version and they're available too.
<simpson>
In my project, we used rffi to bind libuv, and then libuv wraps a bunch of stuff and eventually calls select() or equivalent.
<marky1991>
i'd like the ability to interpret multiple app-level bytecodes at the same time
<marky1991>
possibly with some form of locking in the interpreter when needed
<simpson>
On two different cores? Or by SWAR'ing multiple instructions into single operations?
<marky1991>
on two different cores, yes
<simpson>
Sure, RPython will give you a GIL automatically for the core structures, AIUI, so you don't need to do extra locking around the JIT if you use threads.
<cfbolz>
nope, rpython does not give you a gil. you need to implement one in your interpreter if you need that
<cfbolz>
true concurrency is not easy with rpython
<cfbolz>
the GC isn't concurrent, nor is the JIT really thread-safe
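(To illustrate cfbolz's point: since RPython hands you no GIL, an interpreter exposing threads has to serialize bytecode execution itself. A hypothetical sketch of that idea in plain Python, with a single explicit lock playing the role of the interpreter's GIL:)

```python
import threading

# One global lock guards the interpreter loop: only one thread may
# "execute a bytecode" at a time, which is exactly what a GIL does.
_gil = threading.Lock()

class Interp:
    def __init__(self):
        self.counter = 0  # stand-in for shared interpreter state

    def run(self, n):
        for _ in range(n):
            with _gil:              # acquire the GIL per "bytecode"
                self.counter += 1   # mutate shared state safely

interp = Interp()
threads = [threading.Thread(target=interp.run, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert interp.counter == 4000  # no updates were lost
```

(In real RPython one would release such a lock around blocking I/O, as CPython does, so threads still help with I/O-bound work even without true parallelism.)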
<marky1991>
hmm
<marky1991>
so then I guess it really basically has to behave mostly like python then
<marky1991>
that's what I figured mostly
<simpson>
cfbolz: Aha, thanks.
<cfbolz>
yeah, sorry
<simpson>
marky1991: What's an example language runtime that does what you want, for comparison? I'm still puzzling over "true" here.
<simpson>
cfbolz: No, no, that's really important. I was misled and was about to mislead others. Thanks.
<marky1991>
I don't know of one really, but basically a python with no GIL
<marky1991>
C obviously, but that's way lower level
<marky1991>
I guess java is a better match actually
<simpson>
Java trades Python's GIL for a bunch of fine-grained locks, IIUC. It can be made fast, but isn't automatically fast.
<marky1991>
right
<marky1991>
I was wondering if I could do something like that in rpython
<marky1991>
I don't presently expose threads in my language, so that would be step one, now that I think about it
<marky1991>
but exposing threads in a language like python still seems marginally useful to me
<marky1991>
only marginally useful I mean
<simpson>
Probably. It costs sanity, though. To bring over a meme from #erights, pick two of three: Sequential execution with mutable state, blocking synchronous I/O during execution, or sanity.
<simpson>
Python picks (1) and (2), just like C and Java. An example (2) and (3) system would be Erlang. (1) and (3) systems are like JS in the browser, or E (or my language Monte!)
<simpson>
In a (1) and (2) system, true concurrency is multicore with threads, just like you already know. In (2) and (3) systems, true concurrency is still multicore, but programs are broken up into many small pieces and don't read top-to-bottom.
<simpson>
And in (1) and (3) systems, multiprocessing is much easier than multicore, but it's still possible to be multicore by breaking up computational domains somehow. Monte and E call them "vats", but you probably already know them from browsers having "tabs".
<simpson>
(This is why "true concurrency" is such a slippery phrase. Doing two things at once is easy when you have two cores, but the structure of the system determines how you structure doing those things.)
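(A small sketch of the (1)-and-(3) style simpson describes, using Python's asyncio as the stand-in event loop: state is sequential and mutable, but nothing blocks during execution, so tasks interleave only at explicit yield points and the program no longer reads top-to-bottom.)

```python
import asyncio

async def worker(name, results):
    results.append(name + ":start")
    await asyncio.sleep(0)        # yield to the event loop instead of blocking
    results.append(name + ":end")

async def main():
    results = []
    # Both workers run "at once" on one core: each executes atomically
    # between await points, so no locks are needed for shared state.
    await asyncio.gather(worker("a", results), worker("b", results))
    return results

order = asyncio.run(main())
# Both workers start before either finishes: cooperative interleaving.
assert order == ["a:start", "b:start", "a:end", "b:end"]
```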
<wilbowma>
I'm trying to move all my long-running python programs to PyPy, but noticed a strange performance behavior on one. The ananicy daemon (
<wilbowma>
https://github.com/Nefelim4ag/Ananicy) spends a lot of time asleep and doesn't use much CPU or memory in python3, but uses a ton of CPU time in PyPy and 4x memory.
<wilbowma>
I expected the memory increase but not the CPU increase. Anyone know why?
<Hodgestar>
wilbowma: That seems like very odd behaviour for a small Python program that is supposed to mainly sleep. Do you know which bit of it is spending all the CPU time?
<wilbowma>
No. I'm not familiar with the code base and haven't yet tried to debug further, thinking it might be a known limitation or some kind.
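(One portable way to answer Hodgestar's question is to profile the daemon's main loop. On PyPy the usual tool is vmprof, a sampling profiler; the sketch below uses stdlib cProfile instead, with a hypothetical `busy_loop` standing in for ananicy's `run` loop.)

```python
import cProfile
import io
import pstats

def busy_loop():
    # Stand-in for the daemon's main loop: pure CPU work.
    total = 0
    for i in range(10000):
        total += i
    return total

pr = cProfile.Profile()
pr.enable()
busy_loop()
pr.disable()

# Print the top entries by cumulative time; the hot function shows up here.
s = io.StringIO()
pstats.Stats(pr, stream=s).sort_stats("cumulative").print_stats(5)
report = s.getvalue()
assert "busy_loop" in report
```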
<fijal>
there are lots of known limitations, most complex software that exists has those
<fijal>
which *one* of the known limitations you could possibly have hit (or whether it's a new one) is something we can't guess
<wilbowma>
`run` is the main loop and spends most of its time sleeping, so that makes some sense. Given that the results are the same, this doesn't tell me much.
<wilbowma>
Actually, in python3 the results differ in perhaps a significant way:
<wilbowma>
98.4% .... run 98.8% /usr/bin/ananicy:727