arigato changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://quodlibet.duckdns.org/irc/pypy/latest.log.html#irc-end ) | use cffi for calling C | mac OS and Fedora are not Windows
ebarrett has quit [Ping timeout: 258 seconds]
Hornwitser has quit [Remote host closed the connection]
Ai9zO5AP has quit [Quit: WeeChat 2.4]
xcm has quit [Ping timeout: 244 seconds]
xcm has joined #pypy
iko has quit [Ping timeout: 272 seconds]
inhahe_ has quit []
inhahe has joined #pypy
iko has joined #pypy
xcm has quit [Read error: Connection reset by peer]
xcm has joined #pypy
inhahe has quit []
jcea has quit [Quit: jcea]
inhahe has joined #pypy
inhahe has quit []
inhahe has joined #pypy
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
dddddd has quit [Remote host closed the connection]
Ai9zO5AP has joined #pypy
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
_whitelogger has joined #pypy
[Arfrever] has quit [Ping timeout: 268 seconds]
[Arfrever] has joined #pypy
inhahe has quit []
inhahe has joined #pypy
dannyvai has joined #pypy
dannywai has quit [Ping timeout: 250 seconds]
inhahe has quit []
inhahe has joined #pypy
_whitelogger has joined #pypy
PileOfDirt has joined #pypy
antocuni has joined #pypy
inhahe has quit []
inhahe has joined #pypy
forgottenone has joined #pypy
forgottenone has quit [Quit: Konversation terminated!]
<cfbolz> I have had some experimental success using the chrome tracing tools as a vmprof frontend
<cfbolz> if you open this in chrome://tracing you can see an example https://usercontent.irccloud-cdn.com/file/Y7eAkm6y/out.json
<cfbolz> output looks like this:
antocuni has quit [Ping timeout: 268 seconds]
<cfbolz> note that those aren't flame graphs, they are flame *charts*. the x-axis is really wall clock time
forgottenone has joined #pypy
lritter has joined #pypy
<mattip> is it me or is that upside down? Can you zoom in to the "tips" of the flame to get more info?
<mattip> cool that it works in chrome. How do you get from our log file to the json?
jcea has joined #pypy
PileOfDirt has quit [Remote host closed the connection]
<cfbolz> mattip: it's a new exporter in vmprof, basically
<cfbolz> mattip: why upside down? stacks tend to grow downwards
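(A minimal sketch of the kind of JSON that chrome://tracing consumes — the Chrome Trace Event Format, where each box is a "complete" event (`"ph": "X"`) with a start timestamp and duration in microseconds. This is not vmprof's actual exporter; the sample data and the merging heuristic are made up for illustration.)

```python
import json

# Hypothetical stack samples: (timestamp_us, call_stack), outermost frame first.
samples = [
    (0,    ["py:main", "py:foo"]),
    (1000, ["py:main", "py:foo"]),
    (2000, ["py:main", "py:bar"]),
]

def to_trace_events(samples, period_us=1000, pid=1, tid=1):
    """Turn stack samples into Chrome 'complete' events ("ph": "X").

    Consecutive samples sharing a frame at the same depth are merged into
    one box; each sample is assumed to cover one sample period.  The trace
    viewer nests X events on the same tid by time containment.
    """
    events = []
    open_boxes = []   # (depth, name, start_ts)
    prev_stack = []
    for ts, stack in samples:
        # longest common prefix with the previous sample's stack
        depth = 0
        while (depth < len(stack) and depth < len(prev_stack)
               and stack[depth] == prev_stack[depth]):
            depth += 1
        # close boxes that are no longer on the stack
        while open_boxes and open_boxes[-1][0] >= depth:
            d, name, start = open_boxes.pop()
            events.append({"name": name, "ph": "X", "ts": start,
                           "dur": ts - start, "pid": pid, "tid": tid})
        # open boxes for the newly appeared frames
        for d in range(depth, len(stack)):
            open_boxes.append((d, stack[d], ts))
        prev_stack = stack
    end = samples[-1][0] + period_us
    while open_boxes:
        d, name, start = open_boxes.pop()
        events.append({"name": name, "ph": "X", "ts": start,
                       "dur": end - start, "pid": pid, "tid": tid})
    return {"traceEvents": events}

print(json.dumps(to_trace_events(samples), indent=1))
```

Saving that output to a `.json` file and loading it in chrome://tracing yields a flame chart like the one linked above, with wall-clock time on the x-axis.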
Zaab1t has joined #pypy
Zaab1t has quit [Remote host closed the connection]
<mattip> these http://www.brendangregg.com/flamegraphs.html all have the "tips" pointing up
forgottenone has quit [Remote host closed the connection]
<cfbolz> mattip: ok, but those are flame graphs, not charts
<cfbolz> mattip: anyway, I don't have any influence on the direction
forgottenone has joined #pypy
<mattip> right
dddddd has joined #pypy
antocuni has joined #pypy
_whitelogger has joined #pypy
mattip has quit [Remote host closed the connection]
xcm has quit [Remote host closed the connection]
<phlebas> cfbolz: that's pretty cool!
Rhy0lite has joined #pypy
<cfbolz> phlebas: just getting started
<phlebas> also, that "stacks tend to grow downwards" is true, but until now it's never been how I would intuitively draw it - I guess based on the debuggers I used
xcm has joined #pypy
<antocuni> cfbolz: awesome!
<antocuni> cfbolz: what are the three sections under the column "memory"?
<antocuni> three different threads?
<cfbolz> antocuni: yes
<antocuni> the only thing I don't like is that it doesn't seem possible to get a "summary" view, i.e. a flame graph for the whole execution
<cfbolz> antocuni: yes, there are other tools that take the same file as input for that
<antocuni> ah great
<antocuni> inside chrome as well, or external?
<cfbolz> antocuni: I haven't quite understood it
<cfbolz> It seems Chrome has two versions of this tool, with non-overlapping features
<cfbolz> If you open the dev tools and go to the right tab, you can also open this view
<cfbolz> This file
<antocuni> and what is the right tab?
<cfbolz> It's either performance or profiling or something (I'm on the phone right now)
<antocuni> ok, found it. It is "performance", but I couldn't find the right button to open the file
<antocuni> (it's an icon with an arrow up)
<cfbolz> Obviously
<cfbolz> Anyway, right now this is CPython Linux only, will try to get to work on PyPy too
<antocuni> it's still very cool
<cfbolz> antocuni: the very thin lines at the bottom of the stacks are the samples, btw
<cfbolz> antocuni: do you happen to know why memory tracking doesn't work under PyPy?
<antocuni> so basically the thin lines represent actual samples and the bigger boxes above them are statistically inferred?
marky1991 has joined #pypy
<antocuni> no idea about memory tracking, I don't even know how it is implemented in CPython
<antocuni> *for CPython
marky1991 has quit [Remote host closed the connection]
<cfbolz> antocuni: yes the boxes are inferred
<cfbolz> I have plans to improve how they are inferred
<cfbolz> But one after the other
<antocuni> ah, it's your tool which does it? I thought it was chrome itself
<cfbolz> It's only an approximation, there might be a lot of calls that aren't shown because they are never hit by a sample
<cfbolz> No, Chrome's support for a sampling profiler sucks
<antocuni> I see
marky1991 has joined #pypy
<antocuni> cfbolz: also, do you have any control on the colors?
marky1991 has quit [Remote host closed the connection]
marky1991 has joined #pypy
<lesshaste> is there a pypy roadmap?
<cfbolz> antocuni: I haven't found a way to control the colors yet
<antocuni> I think that one nice feature of the current vmprof is the fact that it can tell you how much time is spent inside jitted vs interpreted code (and in theory also gc and warmup, although I don't think it is working correctly)
<antocuni> probably chrome cannot do it natively, but it is maybe possible to simulate the behavior by putting two boxes below each python function
<antocuni> like, I have a box "py:foo", and below a box "py:foo[jit]" which takes 70% of the parent, and another "py:foo[interp]" which takes 30%
<cfbolz> antocuni: yes, will try to get PyPy support
speeder39_ has joined #pypy
<cfbolz> I'd also like to trace all system calls, but haven't quite figured out how
<antocuni> so it means modifying the vmprof backend itself?
<cfbolz> antocuni: yes, a bit
<cfbolz> It can be done without a modification, but then the results are very coarse
Zaab1t has joined #pypy
<antocuni> how?
<cfbolz> antocuni: right now we have the assumption that all the samples are exactly one sample period apart.
<cfbolz> But that's not really true in practice given that you can call sleep or some external process
<cfbolz> So the change in the back end is simply to record the current time in every sample
<cfbolz> It would be possible to also use that to make the regular precision better
jcea has quit [Remote host closed the connection]
<antocuni> cfbolz: I think that SIGPROF is designed exactly for that: it is delivered every N usec of CPU time, so the time spent in syscalls is not counted
<antocuni> but in vmprof you can also choose to use "real_time=True" which uses wall clock time
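(The distinction being discussed can be demonstrated with the stdlib alone: `ITIMER_PROF` delivers SIGPROF after a given amount of *CPU* time, so no ticks arrive while the process is blocked in `sleep()` or another syscall, whereas `ITIMER_REAL`/SIGALRM counts wall-clock time and would interrupt the sleep. A small sketch, Unix-only, unrelated to vmprof's internals:)

```python
import signal
import time

samples = []

def handler(signum, frame):
    # Record which function the tick landed in (a toy "profiler").
    samples.append(frame.f_code.co_name)

signal.signal(signal.SIGPROF, handler)
# Fire every 10ms of consumed CPU time (user + system).
signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)

def burn_cpu():
    t = 0
    for i in range(2_000_000):   # enough work to consume several ticks of CPU
        t += i
    return t

burn_cpu()
time.sleep(0.2)                  # the CPU timer does not advance here:
                                 # no ticks, and the sleep is not interrupted
signal.setitimer(signal.ITIMER_PROF, 0)   # disarm

print(len(samples), "profiling ticks, all in:", set(samples))
```

Swapping in `ITIMER_REAL` + `SIGALRM` gives the `real_time=True` behaviour: ticks keep arriving during the sleep, which is exactly how it ends up interrupting (and shortening) blocking syscalls.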
<cfbolz> antocuni: yes, but real time breaks a lot of code
<cfbolz> Eg it shortens sleeps
<antocuni> ok, then I am not sure I understand what problem you are trying to solve
<antocuni> if I am interested in profiling my CPU usage, I want the behavior of sigprof; in particular, I don't want to record the time spent in waiting for I/O
<cfbolz> antocuni: I disagree. Eg in real time mode i can't run translation
<cfbolz> But I would still like to know which subprocesses take ages
<antocuni> ok, I see
<antocuni> so basically you want a real_time mode which works :)
<antocuni> what is the problem with real_time mode, btw?
<cfbolz> antocuni: it interrupts syscalls, and a lot of them aren't resumed properly
<antocuni> ok
<antocuni> I am surprised, I have used it extensively in gambit and didn't run into any problems; but I was using an eventlet-based system, so the only syscall which is supposed to be blocking is poll
<cfbolz> Right
<cfbolz> Anyway, as I said, this is mostly orthogonal to the chrome support
<antocuni> sure
Zaab1t has quit [Quit: bye bye friends]
<cfbolz> antocuni: I wonder whether we can even show all eventlets
Zaab1t has joined #pypy
antocuni has quit [Ping timeout: 258 seconds]
<kenaan> mattip py3.6 41fb5a04a33e /pypy/module/posix/test/test_posix2.py: fix win32 only test, add check for bytes
jcea has joined #pypy
mattip has joined #pypy
antocuni has joined #pypy
<mattip> any thoughts on the cffi-libs branch? It backports cffi-based _ssl and _hashlib to python2.7
<mattip> which makes us link to less stuff, and hopefully eases the maintenance burden on ssl
<mattip> I would like to release a PyPy 8 soon (we need to update the major number for cpyext changes),
<mattip> since a few user-facing issues should be fixed (thread deadlock, os.rename, recursion limits)
<antocuni> mattip: sorry, I have to run away in like 30 seconds, I'll just drop this quick thought about PyPy 8 here
<antocuni> what about starting a model in which we keep the binary ABI tag constant between major releases?
<antocuni> this way people won't have to build tens of wheels for PyPy, just one for PyPy 7, one for PyPy 8, etc.
<antocuni> of course we would need to add a test to automatically check that we change the ABI when we do changes e.g. to cpyext
<antocuni> alternatively, we could keep it constant for minor release, so that wheels for 8.1.1 work also for 8.1 or so
<mattip> +1 for 8.1.1, 8.1.2 not needing recompile
<mattip> which is the pattern we have today IMO
speeder39_ has quit [Quit: Connection closed for inactivity]
<mattip> but we *did* touch the header files since 7.1.1, so we need to rebuild wheels now
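(For context on the proposal above: a wheel filename encodes `{python tag}-{abi tag}-{platform tag}` per PEP 427, and the ABI tag is the part that would stay constant across, e.g., all 8.x releases under antocuni's scheme. The filename below is hypothetical, just to show where the tag lives:)

```python
# Hypothetical PyPy wheel filename; the tag values are made up.
name = "numpy-1.16.4-pp371-pypy3_71-linux_x86_64.whl"
stem = name[:-len(".whl")]
distribution, version, py_tag, abi_tag, plat_tag = stem.split("-")
print(abi_tag)  # the component that must not change within a major release
```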
antocuni has quit [Ping timeout: 244 seconds]
antocuni has joined #pypy
antocuni has quit [Remote host closed the connection]
[Arfrever] has quit [Ping timeout: 248 seconds]
[Arfrever] has joined #pypy
Zaab1t has quit [Quit: bye bye friends]
jcea has quit [Ping timeout: 250 seconds]
jcea has joined #pypy
marky1991 has quit [Ping timeout: 244 seconds]
Rhy0lite has quit [Quit: Leaving]
<cfbolz> mattip: fwiw, I like the cffi-libs branch
jacob22_ has joined #pypy
jacob22 has quit [Read error: Connection reset by peer]
jacob22_ has quit [Client Quit]
jacob22 has joined #pypy
jacob22 has quit [Client Quit]
jacob22 has joined #pypy
jacob22 has quit [Client Quit]
jacob22 has joined #pypy
marky1991 has joined #pypy
Garen has quit [Ping timeout: 245 seconds]
Garen has joined #pypy
Garen has quit [Ping timeout: 245 seconds]
Garen has joined #pypy
marky1991 has quit [Ping timeout: 272 seconds]
PileOfDirt has joined #pypy
marky1991 has joined #pypy
jcea has quit [Remote host closed the connection]
jcea has joined #pypy
marky1991 has quit [Ping timeout: 268 seconds]
lritter has quit [Ping timeout: 246 seconds]
forgottenone has quit [Quit: Konversation terminated!]
ssbr has quit [Ping timeout: 250 seconds]
ssbr has joined #pypy
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
antocuni has joined #pypy
jcea has quit [Remote host closed the connection]
jcea has joined #pypy
jacob22_ has joined #pypy
jacob22 has quit [Read error: Connection reset by peer]
ssbr has quit [Ping timeout: 258 seconds]
ssbr has joined #pypy
ssbr has quit [Ping timeout: 252 seconds]
ssbr has joined #pypy
antocuni has quit [Ping timeout: 258 seconds]