cfbolz changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://quodlibet.duckdns.org/irc/pypy/latest.log.html#irc-end ) | use cffi for calling C | if a pep adds a mere 25-30 [C-API] functions or so, it's a drop in the ocean (cough) - Armin
jcea has quit [Quit: jcea]
tsaka__ has quit [Remote host closed the connection]
rfgpfeiffer has joined #pypy
oberstet has joined #pypy
otisolsen70_ has joined #pypy
otisolsen70_ has quit [Quit: Leaving]
otisolsen70 has joined #pypy
tsaka__ has joined #pypy
mattip has quit [Ping timeout: 256 seconds]
mattip has joined #pypy
nanonyme has joined #pypy
<nanonyme> Anyone present to help with a CFFI question? If I wanted to create a memory buffer in CFFI which I expected to end up being memory-managed by Python afterwards, what primitives should I use?
<nanonyme> Or rather, create the buffer in a C function called through CFFI.
tsaka__ has quit [Ping timeout: 272 seconds]
tsaka__ has joined #pypy
rfgpfeiffer has quit [Ping timeout: 244 seconds]
rfgpfeiffer has joined #pypy
<bbot2> Started: http://buildbot.pypy.org/builders/own-linux-x86-64/builds/8315 [mattip: force build, py3.7]
<bbot2> Started: http://buildbot.pypy.org/builders/own-win-x86-32/builds/2433 [mattip: force build, py3.7]
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/5470 [mattip: force build, py3.7]
<mattip> nanonyme: do you mean like ffi.gc() ?
<mattip> but read the caveats
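[editor's note: a minimal sketch of the ffi.gc() approach mattip suggests; the C functions make_buffer/free_buffer and the library name are hypothetical, declared only to make the example self-contained, and the caveats he mentions are in the CFFI docs for ffi.gc()]
    from cffi import FFI

    ffi = FFI()
    ffi.cdef("""
        char *make_buffer(size_t size);   /* hypothetical C allocator */
        void  free_buffer(char *buf);     /* hypothetical matching free */
    """)
    lib = ffi.dlopen("./libbuf.so")       # hypothetical shared library

    raw = lib.make_buffer(4096)
    # ffi.gc() returns a new cdata object that calls free_buffer(raw) when it
    # is garbage-collected, so from here on the buffer's lifetime is managed
    # by Python rather than by manual free() calls.
    buf = ffi.gc(raw, lib.free_buffer)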
<marky1991> possibly silly question, but is it possible to implement true concurrency using rpython?
<marky1991> right now, I've got an interpreter for my language that works fine, but now I'm thinking about how I can do concurrency
<marky1991> I know I can just support multiprocessing trivially, but is it possible for an interpreter written in rpython to support meaningful multithreading?
<marky1991> I can't imagine how it would be, but rpython is sophisticated and I'm not
<marky1991> this is all for my own personal interpreter for my own personal programming language, so no constraints there.
<simpson> Depends on what "true" means. select.select() is one version and it's available. POSIX threads are another version and they're available too.
<simpson> In my project, we used rffi to bind libuv, and then libuv wraps a bunch of stuff and eventually calls select() or equivalent.
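[editor's note: a rough sketch of what an rffi binding like the one simpson describes can look like; uv_version() is a real libuv function chosen only because it takes no arguments, the rest follows the usual rpython.rtyper.lltypesystem.rffi pattern and is not taken from his project]
    from rpython.rtyper.lltypesystem import rffi
    from rpython.translator.tool.cbuild import ExternalCompilationInfo

    eci = ExternalCompilationInfo(includes=["uv.h"], libraries=["uv"])

    # declare the external C function; at translation time RPython emits a
    # direct C call to it, so there is no per-call wrapper overhead
    uv_version = rffi.llexternal("uv_version", [], rffi.UINT,
                                 compilation_info=eci)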
<marky1991> i'd like the ability to interpret multiple app-level bytecodes at the same time
<marky1991> possibly with some form of locking in the interpreter when needed
<simpson> On two different cores? Or by SWAR'ing multiple instructions into single operations?
<marky1991> on two different cores, yes
<simpson> Sure, RPython will give you a GIL automatically for the core structures, AIUI, so you don't need to do extra locking around the JIT if you use threads.
<cfbolz> nope, rpython does not give you a gil. you need to implement one in your interpreter if you need that
<cfbolz> true concurrency is not easy with rpython
<cfbolz> the GC isn't concurrent, nor is the JIT really thread-safe
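[editor's note: a conceptual sketch, in plain Python rather than any RPython API, of what "implement a GIL in your interpreter" amounts to; the frame object and its methods are hypothetical stand-ins for an interpreter's bytecode loop]
    import threading

    gil = threading.Lock()          # one lock guarding all interpreter state

    def run_in_thread(frame):
        # each OS thread interprets bytecodes only while holding the GIL,
        # and periodically drops it so other threads get a turn
        while not frame.finished():                   # hypothetical API
            with gil:
                for _ in range(100):                  # bytecodes per slice
                    if frame.finished():
                        break
                    frame.execute_one_bytecode()      # hypothetical API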
<marky1991> hmm
<marky1991> so I guess it basically has to behave mostly like python then
<marky1991> that's what I figured mostly
<simpson> cfbolz: Aha, thanks.
<cfbolz> yeah, sorry
<simpson> marky1991: What's an example language runtime that does what you want, for comparison? I'm still puzzling over "true" here.
<simpson> cfbolz: No, no, that's really important. I was misled and was about to mislead others. Thanks.
tsaka__ has quit [Ping timeout: 260 seconds]
<marky1991> I don't know of one really, but basically a python with no GIL
<marky1991> C obviously, but that's way lower level
<marky1991> I guess java is a better match actually
<simpson> Java trades Python's GIL for a bunch of fine-grained locks, IIUC. It can be made fast, but isn't automatically fast.
<marky1991> right
<marky1991> I was wondering if I could do something like that in rpython
<marky1991> I don't presently expose threads in my language, so that would be step one, now that I think about it
<marky1991> but exposing threads in a language like python still seems marginally useful to me
<marky1991> only marginally useful I mean
<simpson> Probably. It costs sanity, though. To bring over a meme from #erights, pick two of three: Sequential execution with mutable state, blocking synchronous I/O during execution, or sanity.
<simpson> Python picks (1) and (2), just like C and Java. An example (2) and (3) system would be Erlang. (1) and (3) systems are like JS in the browser, or E (or my language Monte!)
<simpson> In a (1) and (2) system, true concurrency is multicore with threads, just like you already know. In (2) and (3) systems, true concurrency is still multicore, but programs are broken up into many small pieces and don't read top-to-bottom.
<simpson> And in (1) and (3) systems, multiprocessing is much easier than multicore, but it's still possible to be multicore by breaking up computational domains somehow. Monte and E call them "vats", but you probably already know them from browsers as "tabs".
<simpson> (This is why "true concurrency" is such a slippery phrase. Doing two things at once is easy when you have two cores, but the structure of the system determines how you structure doing those things.)
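[editor's note: a tiny illustration of the (1)-and-(3) style simpson describes, using asyncio as a stand-in for the event-loop/vat model: no blocking I/O, one sequential core of mutable state, work split into small pieces that interleave]
    import asyncio

    async def task(name, delay):
        # instead of blocking, the coroutine yields to the event loop
        await asyncio.sleep(delay)
        return name

    async def main():
        # both "sleeps" overlap on a single core; concurrency comes from
        # interleaving small pieces, not from threads
        print(await asyncio.gather(task("a", 1.0), task("b", 1.0)))

    asyncio.run(main())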
<bbot2> Failure: http://buildbot.pypy.org/builders/own-linux-x86-64/builds/8315 [mattip: force build, py3.7]
<bbot2> Failure: http://buildbot.pypy.org/builders/own-win-x86-32/builds/2433 [mattip: force build, py3.7]
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/5470 [mattip: force build, py3.7]
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/5471 [mattip: force build, py3.7]
tsaka__ has joined #pypy
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/5471 [mattip: force build, py3.7]
jcea has joined #pypy
YannickJadoul has joined #pypy
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/5472 [mattip: force build, py3.7]
tsaka__ has quit [Ping timeout: 240 seconds]
epsilonKNOT has joined #pypy
<nanonyme> mattip, yes, that looks exactly what I was after. Thanks
marky1991 has quit [Ping timeout: 240 seconds]
marky1991 has joined #pypy
tsaka__ has joined #pypy
marky1991 has quit [Ping timeout: 265 seconds]
tsaka__ has quit [Ping timeout: 272 seconds]
Taggnostr has quit [Ping timeout: 244 seconds]
Taggnostr has joined #pypy
Dejan has quit [Quit: Leaving]
<bbot2> Exception: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/5472 [mattip: force build, py3.7]
fling has joined #pypy
fling has quit [Read error: Connection reset by peer]
fling has joined #pypy
fling has quit [Remote host closed the connection]
YannickJadoul has quit [Quit: Leaving]
otisolsen70_ has joined #pypy
otisolsen70 has quit [Ping timeout: 256 seconds]
rfgpfeiffer has quit [Ping timeout: 260 seconds]
rfgpfeiffer has joined #pypy
epsilonKNOT has quit [Ping timeout: 240 seconds]
tsaka__ has joined #pypy
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/5473 [mattip: force build, py3.7]
tsaka__ has quit [Ping timeout: 240 seconds]
rfgpfeiffer has quit [Ping timeout: 260 seconds]
rfgpfeiffer has joined #pypy
07EAAMG61 has quit [Remote host closed the connection]
lazka has quit [Quit: bye]
marvin has joined #pypy
lazka has joined #pypy
epsilonKNOT has joined #pypy
marvin has quit [Remote host closed the connection]
marvin_ has joined #pypy
wilbowma has joined #pypy
<wilbowma> I'm trying to move all my long-running python programs to PyPy, but noticed a strange performance behavior on one. The ananicy daemon (https://github.com/Nefelim4ag/Ananicy) spends a lot of time asleep and doesn't use much CPU or memory in python3, but uses a ton of CPU time in PyPy and 4x the memory.
<wilbowma> I expected the memory increase but not the CPU increase. Anyone know why?
jacob22 has quit [Read error: Connection reset by peer]
jacob22 has joined #pypy
rfgpfeiffer has quit [Ping timeout: 260 seconds]
otisolsen70_ has quit [Quit: Leaving]
<Hodgestar> wilbowma: That seems like very odd behaviour for a small Python program that is supposed to mainly sleep. Do you know which bit of it all the CPU time is being spent in?
<wilbowma> No. I'm not familiar with the code base and haven't yet tried to debug further, thinking it might be a known limitation of some kind.
<fijal> there are lots of known limitations, most complex software that exists has those
<fijal> which *one* of the known limitations you possibly could have hit (or a new one) is something we can't guess
<wilbowma> Right :) I'll see if I can debug a bit
<Hodgestar> wilbowma: I'm going to make a complete guess that https://github.com/Nefelim4ag/Ananicy/blob/master/ananicy.py#L207-L208 is a good place to start looking.
RemoteFox has joined #pypy
RemoteFox has left #pypy [#pypy]
<wilbowma> Ran vmprof, and the following stand out in both python3 and pypy3
<wilbowma> 95.3% .... run 95.3% /usr/bin/ananicy:727
<wilbowma> 64.6% ...... proc_map_update 67.8% /usr/bin/ananicy:569
<wilbowma> 24.1% ........ proc_get_pids 37.2% /usr/bin/ananicy:534
<wilbowma> `run` is the main loop and spends most of its time sleeping, so that makes some sense. Given that the results are the same, this doesn't tell me much.
<wilbowma> Actually, in python3 the results differ in perhaps a significant way:
<wilbowma> 98.4% .... run 98.8% /usr/bin/ananicy:727
<wilbowma> 93.1% ...... proc_map_update 94.7% /usr/bin/ananicy:569
<wilbowma> 26.2% ........ proc_get_pids 28.1% /usr/bin/ananicy:534
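[editor's note: a sketch of how a profile like the one above can be collected with vmprof's Python API; the output file name and the main() entry point are placeholders, and the CLI form `pypy3 -m vmprof -o ananicy.prof <script>` works as well]
    import vmprof

    with open("ananicy.prof", "w+b") as f:
        vmprof.enable(f.fileno())     # start sampling this process
        try:
            main()                    # placeholder for the daemon's run loop
        finally:
            vmprof.disable()          # stop sampling and flush the profile
    # the resulting file can then be inspected with `vmprofshow ananicy.prof`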
<bbot2> Exception: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/5473 [mattip: force build, py3.7]