cfbolz changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://quodlibet.duckdns.org/irc/pypy/latest.log.html#irc-end ) | use cffi for calling C | if a pep adds a mere 25-30 [C-API] functions or so, it's a drop in the ocean (cough) - Armin
adamholmberg has joined #pypy
jcea has quit [Remote host closed the connection]
jcea has joined #pypy
adamholmberg has quit [Remote host closed the connection]
adamholmberg has joined #pypy
jacob22_ has quit [Read error: Connection reset by peer]
jacob22_ has joined #pypy
jacob22_ has quit [Read error: Connection reset by peer]
jacob22_ has joined #pypy
adamholmberg has quit [Remote host closed the connection]
jvesely has quit [Ping timeout: 255 seconds]
jvesely has joined #pypy
jcea has quit [Remote host closed the connection]
jcea has joined #pypy
jcea has quit [Quit: jcea]
adamholmberg has joined #pypy
adamholmberg has quit [Remote host closed the connection]
adamholmberg has joined #pypy
dddddd has quit [Read error: Connection reset by peer]
wleslie has joined #pypy
adamholmberg has quit [Remote host closed the connection]
adamholmberg has joined #pypy
adamholmberg has quit [Remote host closed the connection]
adamholmberg has joined #pypy
adamholmberg has quit [Remote host closed the connection]
adamholmberg has joined #pypy
adamholmberg has quit [Remote host closed the connection]
adamholmberg has joined #pypy
adamholmberg has quit [Remote host closed the connection]
adamholmberg has joined #pypy
adamholmberg has quit [Read error: Connection reset by peer]
tos9 has quit [Ping timeout: 260 seconds]
tos9 has joined #pypy
adamholmberg has joined #pypy
adamholmberg has quit [Ping timeout: 255 seconds]
jacob22_ has quit [Read error: Connection reset by peer]
jacob22_ has joined #pypy
ronan has joined #pypy
riddle has quit [Ping timeout: 240 seconds]
riddle has joined #pypy
ronan has quit [Ping timeout: 240 seconds]
tsaka_ has quit [Ping timeout: 255 seconds]
oberstet has joined #pypy
tsaka_ has joined #pypy
wleslie has quit [Ping timeout: 260 seconds]
ronan has joined #pypy
rubdos has joined #pypy
rubdos has quit [Client Quit]
ronan has quit [Ping timeout: 240 seconds]
ronan has joined #pypy
rubdos has joined #pypy
jvesely has quit [Quit: jvesely]
tsaka_ has quit [Ping timeout: 258 seconds]
bitbit has quit [Quit: Leaving]
ronan has quit [Ping timeout: 258 seconds]
ronan has joined #pypy
ronan has quit [Ping timeout: 260 seconds]
tsaka_ has joined #pypy
tsaka_ has quit [Ping timeout: 258 seconds]
ronan has joined #pypy
lritter has joined #pypy
<bbot2> Started: http://buildbot.pypy.org/builders/rpython-linux-x86-64/builds/275 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-64/builds/6807 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
<bbot2> Started: http://buildbot.pypy.org/builders/own-linux-x86-64/builds/8023 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
ronan has quit [Ping timeout: 240 seconds]
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-64/builds/6807 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
ronan has joined #pypy
<ronan> yay, first merge request accepted on Heptapod!
<marmoute> ☺
<cfbolz> cool :-)
<bbot2> Failure: http://buildbot.pypy.org/builders/own-linux-x86-64/builds/8023 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
<bbot2> Started: http://buildbot.pypy.org/builders/own-linux-x86-64/builds/8024 [ronan: force build, py3.7]
ronan has quit [Remote host closed the connection]
ronan has joined #pypy
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-32/builds/5727 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
tsaka_ has joined #pypy
<krono> will heptapod also be integrated with this channel?
ronan has quit [Remote host closed the connection]
ronan has joined #pypy
rubdos has quit [Quit: WeeChat 2.4]
<cfbolz> krono: if somebody gets to it, yes
<krono> :)
<bbot2> Failure: http://buildbot.pypy.org/builders/rpython-linux-x86-64/builds/275 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
mattip_ has joined #pypy
tsaka_ has quit [Ping timeout: 272 seconds]
mattip_ has quit [Remote host closed the connection]
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-32/builds/5727 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
<bbot2> Failure: http://buildbot.pypy.org/builders/own-linux-x86-64/builds/8024 [ronan: force build, py3.7]
tsaka_ has joined #pypy
jcea has joined #pypy
jcea has quit [Remote host closed the connection]
jcea has joined #pypy
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-64/builds/6809 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
jcea has quit [Remote host closed the connection]
jcea has joined #pypy
tsaka_ has quit [Ping timeout: 272 seconds]
Smigwell has joined #pypy
Rhy0lite has joined #pypy
adamholmberg has joined #pypy
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-64/builds/6809 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
plan_rich has quit [Ping timeout: 265 seconds]
ronan has quit [Ping timeout: 260 seconds]
<marmoute> I know there is a standard IRC solution for GitLab; it should just-work™ for Heptapod.
<cfbolz> Ah, maybe we should try that indeed
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-32/builds/5728 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
<bbot2> Started: http://buildbot.pypy.org/builders/rpython-linux-x86-64/builds/276 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
YannickJadoul has joined #pypy
marky1991 has quit [Ping timeout: 255 seconds]
<antocuni> I have a pypy process which seems to be leaking over time and I am having trouble understanding what's going on
<antocuni> this is the /proc/pid/maps file near the end of the execution, when the process consumed ~4GB of VmRSS: http://paste.openstack.org/show/789860/
<antocuni> I suppose that all/most of the anonymous pages are allocated by the GC
<bbot2> Failure: http://buildbot.pypy.org/builders/rpython-linux-x86-64/builds/276 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
<antocuni> the question is: which objects go into which memory region?
marky1991 has joined #pypy
<antocuni> according to smaps, [heap] is ~3GB, while the region 7f1405614000-7f1440000000 is almost 1GB
<antocuni> I thought that all GC objects were allocated into mmapped regions, so I expected a smaller heap: should I conclude that the leak is in C code, then?
dddddd has joined #pypy
marky1991 has quit [Remote host closed the connection]
<antocuni> or maybe there are ways for GC objects to go into [heap]? I know that at some point there were conditions in which certain objects were allocated by calling malloc(), but I can't remember the details
<antocuni> arigato, cfbolz: if you are online, you are probably the best suited persons to answer this ^^^ :)
<arigato> yes:
<arigato> our GC is completely based on the C malloc() I think
<arigato> certainly it is for all large objects
<antocuni> didn't we call mmap to allocate GC regions?
<arigato> I tried, but gave up I think:
<arigato> the problem is that if you allocate a lot of memory both directly with mmap, and via malloc(), then there are more risks of "cannot reuse memory"
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-32/builds/5728 [Carl Friedrich Bolz-Tereick: force build, pypy-jitdriver-greenkeys]
<arigato> by calling malloc() even to allocate our fixed-size (512KB I think) chunks of memory, we get better utilization
<antocuni> I see
<antocuni> so, what are all these anonymous pages I see?
<arigato> there are a few JIT ranges (two I think, the ones with "rwxp" flags)
<arigato> the unreadable "---p" ranges, I have no clue
<antocuni> I'm mostly concerned by the first one, which accounts for ~1GB
<antocuni> so, IIUC there is a possibility that this is not pypy related
<antocuni> or, not directly related to pypy, at least
<arigato> hard to tell. it might just be malloc()
<arigato> not just one, many calls
<antocuni> I assumed that malloc allocates memory in the [heap], but indeed it might be that it allocates things with mmap as well
<arigato> yes, I'm not too sure when it decides to do what, but malloc() definitely can allocate with mmap too
<antocuni> uhm, that's interesting as well. So, I have been dumping maps/smaps every minute for ~2hrs, and the one I showed was the last
<antocuni> the minute before, the ~1GB mmap region did not exist; in its place there were two separate regions, one ~730MB and the other ~193MB
<antocuni> so I guess that something happened in between and libc and/or the kernel decided to merge the two, or something like that
<arigato> yes, the kernel merges adjacent regions with the same flags
<arigato> what is in the middle just before?
<arigato> I'm not finding anything in pypy that gives pages the "---p" flags
<arigato> it might be internal to malloc()
<antocuni> FWIW, this is the map/smaps of the process BEFORE the last dump: http://paste.openstack.org/show/789863/
<arigato> OK, no clue. contiguous ranges with the same flags
<antocuni> so, apparently if you malloc() something big enough, glibc calls mmap directly
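
(A minimal sketch, not from the log itself, of the /proc/<pid>/smaps bookkeeping antocuni is doing by hand: it sums the Rss of every mapping into a few buckets — [heap], anonymous read-write, anonymous executable regions such as the likely JIT ranges arigato mentions, and file-backed mappings. The bucket names and the script are illustrative assumptions:)

    import sys
    from collections import Counter

    def rss_by_kind(pid="self"):
        buckets = Counter()
        kind = None
        with open("/proc/%s/smaps" % pid) as f:
            for line in f:
                fields = line.split()
                if not fields:
                    continue
                if not fields[0].endswith(":"):
                    # mapping header line: "addr-addr perms offset dev inode [path]"
                    perms = fields[1]
                    path = fields[5] if len(fields) > 5 else ""
                    if path == "[heap]":
                        kind = "[heap]"
                    elif not path and "x" in perms:
                        kind = "anonymous executable (likely JIT)"
                    elif not path:
                        kind = "anonymous"
                    else:
                        kind = "file-backed"
                elif fields[0] == "Rss:":
                    # attribute line for the current mapping, value in kB
                    buckets[kind] += int(fields[1])
        return buckets

    if __name__ == "__main__":
        pid = sys.argv[1] if len(sys.argv) > 1 else "self"
        for kind, kb in rss_by_kind(pid).most_common():
            print("%-36s %8.1f MB" % (kind, kb / 1024.0))
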
ronan has joined #pypy
<antocuni> I suspect that our GC malloc()s large blocks, doesn't it?
<arigato> yes, sorry, I thought I said exactly this above
<antocuni> yes sure, I am mostly thinking aloud now :)
<arigato> our GC allocates chunks of 512KB, which are not large enough as far as I remember
<antocuni> ah, interesting
<antocuni> but e.g., if I allocate a numpy array which is big enough, it probably ends up there
* antocuni tries
<antocuni> yes, it seems so
<antocuni> although all of this doesn't help much to find the cause of my problem :(
inhahe_ has joined #pypy
<antocuni> what I DO know is that the GC hooks reported ~2GB of memory used by the GC, so either our GC is buggy or the leak is somewhere else
inhahe has quit [Ping timeout: 240 seconds]
ronan has quit [Ping timeout: 260 seconds]
tsaka_ has joined #pypy
marky1991 has joined #pypy
tsaka_ has quit [Remote host closed the connection]
tsaka_ has joined #pypy
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
tsaka_ has quit [Ping timeout: 255 seconds]
YannickJadoul has quit [Quit: Leaving]
tsaka__ has joined #pypy
marky1991 has quit [Ping timeout: 240 seconds]
marky1991 has joined #pypy
tsaka__ has quit [Read error: Connection reset by peer]
tsaka_ has joined #pypy
bitbit has joined #pypy
_whitelogger has quit [Ping timeout: 248 seconds]
_whitelogger has joined #pypy
tsaka_ has quit [Ping timeout: 260 seconds]
<arigato> antocuni: no, if 2GB is reported for the GC, the process will consume at least 3GB
marky1991 has joined #pypy
<arigato> I think
<arigato> the 2GB is the size of objects found alive right now, no?
<arigato> I guess it depends
<arigato> on the exact way you ask
salotz[m] has quit [Ping timeout: 248 seconds]
the_drow[m] has quit [Ping timeout: 245 seconds]
slavfox has quit [Ping timeout: 272 seconds]
<antocuni> arigato: I'm keeping two separate stats: the "total_memory_used" which is reported at the end of each minor collection, and "arenas_bytes" which is reported at the end of a major
<antocuni> (see pypy/module/gc/hook.py:LowLevelGcHooks)
<antocuni> so, gc.minor.total_memory_used should be the actual memory occupied by the GC, while gc.major.arenas_bytes should be the size of the objects which are actually alive
<antocuni> and indeed, the ratio between the two is more or less the value of PYPY_GC_MAJOR_COLLECT, which I set to 1.4
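
(A minimal sketch, PyPy only and not from the log, of how one might record the two stats antocuni describes via the gc.hooks API implemented in pypy/module/gc/hook.py; the field names follow the log — total_memory_used on minor collections, arenas_bytes on major collections — and may differ between PyPy versions:)

    import gc

    latest = {"minor.total_memory_used": 0, "major.arenas_bytes": 0}

    def on_minor(stats):
        # called after every minor collection
        latest["minor.total_memory_used"] = stats.total_memory_used

    def on_collect(stats):
        # called after every major collection
        latest["major.arenas_bytes"] = stats.arenas_bytes

    if hasattr(gc, "hooks"):            # the hooks attribute only exists on PyPy
        gc.hooks.on_gc_minor = on_minor
        gc.hooks.on_gc_collect = on_collect
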
tsaka_ has joined #pypy
the_drow[m] has joined #pypy
<antocuni> in this specific case I have:
<antocuni> - gc.minor.total_memory: ~2.1GB
<antocuni> - gc.major.arenas_bytes: ~1.3GB
<antocuni> - gc.major.raw_bytes: ~420MB
<antocuni> - [heap] in mmaps: ~3GB
<antocuni> - [anon] pages in mmaps: ~1GB
marky1991 has quit [Ping timeout: 240 seconds]
<antocuni> but note that some minutes before the crash, gc.minor.total was at ~2.9GB; so I think that what happened was that the GC malloc()ed a total of ~3GB, which is what I see in [heap]; then later some of the memory was free()d, but it was not released back to the OS; that's why it's still 3GB in [heap]
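
(A hedged aside, not from the log: glibc keeps free()d memory in its own free lists rather than returning it to the kernel immediately, which matches the "still 3GB in [heap]" observation. A sketch of asking glibc to hand releasable free memory back to the OS via malloc_trim(0), called through ctypes:)

    import ctypes, ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    libc.malloc_trim.argtypes = [ctypes.c_size_t]
    libc.malloc_trim.restype = ctypes.c_int

    released = libc.malloc_trim(0)      # returns 1 if some memory was released
    print("malloc_trim released something:", bool(released))
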
plan_rich has joined #pypy
plan_rich has quit [Client Quit]
plan_rich has joined #pypy
plan_rich has quit [Client Quit]
the_drow[m] has quit [Quit: killed]
plan_rich has joined #pypy
plan_rich has quit [Client Quit]
plan_rich has joined #pypy
plan_rich has quit [Client Quit]
xcm has quit [Read error: Connection reset by peer]
xcm has joined #pypy
tsaka_ has quit [Ping timeout: 255 seconds]
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
slavfox has joined #pypy
plan_rich has joined #pypy
plan_rich has quit [Client Quit]
xcm is now known as Guest65572
Guest65572 has quit [Remote host closed the connection]
xcm has joined #pypy
ronan has joined #pypy
<pjenvey> antocuni: FYI there is a gc._get_stats now (sounds like you're tracking things yourself? maybe you can add more stats there if they're not there already)
<pjenvey> antocuni: I'd take a look at what malloc_info reports (assuming glibc malloc)
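
(A hedged sketch of pjenvey's suggestion: dump glibc's malloc_info() XML from inside the running process via ctypes. It assumes glibc is the allocator; the output file path is an arbitrary choice:)

    import ctypes, ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    libc.fopen.restype = ctypes.c_void_p                        # FILE *
    libc.fopen.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
    libc.malloc_info.argtypes = [ctypes.c_int, ctypes.c_void_p]
    libc.fclose.argtypes = [ctypes.c_void_p]

    def dump_malloc_info(path=b"/tmp/malloc_info.xml"):
        # write glibc's allocator statistics (per-arena XML report) to a file
        fp = libc.fopen(path, b"w")
        if not fp:
            raise OSError("fopen failed")
        try:
            if libc.malloc_info(0, fp) != 0:
                raise OSError("malloc_info failed")
        finally:
            libc.fclose(fp)

    dump_malloc_info()
    print(open("/tmp/malloc_info.xml").read())
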
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
salotz[m] has joined #pypy
the_drow[m] has joined #pypy
plan_rich has joined #pypy
ronan has quit [Ping timeout: 240 seconds]
plan_rich has quit [Quit: plan_rich]
plan_rich has joined #pypy
jacob22_ has quit [Read error: Connection reset by peer]
jacob22_ has joined #pypy
plan_rich has quit [Quit: plan_rich]
Rhy0lite has quit [Quit: Leaving]
plan_rich has joined #pypy
xcm has quit [Remote host closed the connection]
plan_rich has quit [Quit: plan_rich]
ronan has joined #pypy
xcm has joined #pypy
plan_rich has joined #pypy
plan_rich has quit [Quit: plan_rich]
lritter has quit [Quit: Leaving]
oberstet has quit [Remote host closed the connection]
Ai9zO5AP has quit [Ping timeout: 240 seconds]
Ai9zO5AP has joined #pypy
abrown__ has quit [Remote host closed the connection]
abrown has joined #pypy
marky1991 has joined #pypy
zmt00 has quit [Quit: Leaving]
zmt00 has joined #pypy
ronan has quit [Ping timeout: 265 seconds]
Smigwell has left #pypy [#pypy]
ronan has joined #pypy
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
rich__ has joined #pypy
adamholmberg has quit [Remote host closed the connection]
jvesely has joined #pypy
adamholmberg has joined #pypy
adamholmberg has quit [Remote host closed the connection]
tos9_ has joined #pypy
tos9 has quit [Ping timeout: 272 seconds]
tos9_ is now known as tos9
marky1991 has quit [Read error: Connection reset by peer]
marky1991 has joined #pypy
ronan has quit [Ping timeout: 255 seconds]
marky1991 has quit [Ping timeout: 265 seconds]
YannickJadoul has joined #pypy
YannickJadoul has quit [Client Quit]
adamholmberg has joined #pypy
adamholmberg has quit [Remote host closed the connection]