arigato changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://quodlibet.duckdns.org/irc/pypy/latest.log.html#irc-end ) | use cffi for calling C | mac OS and Fedora are not Windows
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
jcea has quit [Quit: jcea]
<mjacob> mattip_: re heptapod: at the moment it's not ready for pypy, mainly because it expects every branch to have a single head (including closed ones). i'll attend the mercurial conference on tuesday + the sprint on wednesday, and can ask about the current status of this missing feature.
mandeep has quit [Ping timeout: 272 seconds]
Ai9zO5AP has joined #pypy
forgottenone has joined #pypy
dddddd has quit [Remote host closed the connection]
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/4622 [Andy Lawrence: force build, winmultiprocessing]
<mattip_> mjacob: thanks for the update. Is there a way we can retroactively fix our repo?
<kenaan> mattip default 6dd0f9f9ab4e /: fixes for win32
<kenaan> mattip py3.6 6912df235ecc /: merge default into py3.6
<kenaan> mattip py3.6 f91bbed46ef7 /lib_pypy/_cffi_ssl/_stdssl/win32_extra.py: fix merge
<kenaan> mattip winmultiprocessing f46824e4d92c /pypy/doc/whatsnew-pypy3-head.rst: close branch to be merged
<kenaan> mattip py3.6 8bab4372bd45 /: merge winmultiprocessing which starts support for multiprocessing on win32
<mattip_> I merged winmultiprocessing since partially broken multiprocessing support is better than none
forgottenone has quit [Ping timeout: 248 seconds]
altendky has quit [Quit: Connection closed for inactivity]
<kenaan> mattip py3.6 b7cc70885b5f /pypy/module/_socket/test/test_sock_app.py: try to fix win32 _socket.share test
<cfbolz> mattip_: Yay
_whitelogger has joined #pypy
mattip_ is now known as mattip
<mattip> cfbolz: feel like taking a look at a pypyjit test failure?
<mattip> it seems there are now two added ops: int_sub and setfield_gc,
<mattip> which probably correspond to the added function in shadowstack-issue2722
PileOfDirt has joined #pypy
mandeep has joined #pypy
<mattip> ok, convinced myself, fixing
<kenaan> mattip default 597a4f90a1b1 /pypy/module/pypyjit/test_pypy_c/test_ffi.py: fix test for extra ops after shadowstack-issue2722
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/4626 [mattip: force build, py3.6]
Ninpo has quit [Ping timeout: 248 seconds]
<arigato> mjacob, mattip: it's certainly possible to fix our repo so that every named branch has only one head, using dummy merges
zmt00 has quit [Read error: Connection reset by peer]
glyph has quit [Quit: End of line.]
glyph has joined #pypy
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/4626 [mattip: force build, py3.6]
Ai9zO5AP has quit [Ping timeout: 272 seconds]
Ninpo has joined #pypy
Ai9zO5AP has joined #pypy
antocuni has joined #pypy
<Remi_M> Hey guys! I've been absent here for quite some time, but for a good reason: I finally defended my thesis on Friday 🎉️
<Remi_M> Of course I need to thank Armin in particular. Without Armin, PyPy-STM would not have happened. So a big thank you to Armin! :)
<Remi_M> Who knows, maybe I'll be around a bit more now :)
<Remi_M> I want to thank you all for the help I received and the fun times at the PyPy sprints! I'm deeply grateful :) .
<antocuni> Remi_M: wow, congratulations! 🎉
<Remi_M> Thanks! It's been over five years now, so it was about time ;)
dddddd has joined #pypy
<antocuni> what are you going to do now?
<xorAxAx> congratulations Remi_M
<Remi_M> at the moment, it looks like I may be joining some startup here in Zurich. But really I haven't made the decision yet...
mandeep has quit [Ping timeout: 258 seconds]
<arigato> Remi_M: congratulations!
<Remi_M> arigato: Thanks, I guess this finally marks the end of PyPy-STM for me as well, which I'm a bit sad about of course...
<mattip> I figured out why speed.pypy.org is not displaying graphs
<mattip> it has to do with "latest" running on benchmarker, when we still want the graphs from tannit
<arigato> Remi_M: I'm not sure, but I tend to think that http://people.csail.mit.edu/sanchez/papers/2018.hotpads.micro.pdf gives a viable basis to transactional memory (provided such a redesign of the cache system in CPUs ever occurs)
<Remi_M> arigato: I don't think waiting for a new hardware design is the right approach :)
<arigato> well, just saying, imho the future of Python as a seriously parallelizable language rests on this kind of thing
<mattip> the bad news is pypy2-HEAD is slower than cpython-2.7.11 on the bm_mdp and sphinx benchmarks
<antocuni> arigato: do you feel like giving a 30-second summary of why a new design would help? (I haven't read the paper)
<arigato> antocuni: sure, the paper is about changing the caches (L1, L2, L3...) so that instead of fixed blocks (like 64 bytes) it handles variable-size "objects", with GC references between them marked explicitly
<arigato> every level becomes a GC generation, too, so that when one level (e.g. L1) is full, it gets "collected" and only surviving objects get moved to L2
<arigato> and in the end only what gets collected out of the last level really gets an actual memory address
<arigato> in other words it's doing a standard GC approach, but using the cache levels as generations, with hardware collecting between generations
<antocuni> so the GC collection would be basically done in hardware?
<arigato> yes, apart from the "major" collection of the memory itself
<antocuni> ok, but how does it help STM? I can somehow imagine how to implement transactions as long as all the objects stay in cache, but as soon as you write to a main-memory object you would still need an approach like the current one, wouldn't you?
<arigato> what is interesting is that GC references are marked specially and can move (i.e. actually change) when objects move between cache levels
<antocuni> and they don't have a "real" address as long as they are in cache?
<kenaan> andrewjlawrence py3.6 9bf9185580c6 /pypy/module/_socket/test/test_sock_app.py: Fix test
<arigato> yes
<antocuni> interesting
<arigato> so we get much more control: it's never possible to directly "write" to main memory, because such a write first loads the object into the L1 and then the write happens in the L1
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/4627 [Andy Lawrence: force build, py3.6]
<antocuni> aaah, I get it
<arigato> we only need to have some code that is invoked by the CPU when it pushes an object out of the last-level cache
<antocuni> so you get a "copy" of the object for free
<arigato> yes
<antocuni> now we just have to wait 20 years for it to be available
<arigato> we still need careful logic, but it's invoked only when the last-level cache is full, which occurs only for a small fraction of objects
<arigato> so it's OK if it's relatively slow
<antocuni> thanks for the summary :)
<arigato> :-)
lastmikoi has quit [Ping timeout: 244 seconds]
dannyvai has joined #pypy
dannywai has quit [Ping timeout: 268 seconds]
antocuni has quit [Ping timeout: 248 seconds]
lastmikoi has joined #pypy
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/4627 [Andy Lawrence: force build, py3.6]
<kenaan> mattip default 129d1eff1693 /testrunner/get_info.py: the virtualenv issue was resolved so now win32 uses venv/Scripts/pypy-c.exe
<kenaan> mattip py3.6 3ddfc6aee731 /lib_pypy/_cffi_ssl/_stdssl/error.py: fix merge
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/4628 [mattip: force build, py3.6]
Tsundere_cloud has joined #pypy
antocuni has joined #pypy
forgottenone has joined #pypy
jcea has joined #pypy
garetjax_ has joined #pypy
garetjax_ has quit [Client Quit]
<mattip> speed.pypy.org is back to showing graphs
<mattip> the next stage is to start using benchmarker and reconstruct historical results; this is what I have so far
lritter has joined #pypy
<mattip> 5.3 is the earliest version that can still run the benchmark suite
_whitelogger has joined #pypy
<antocuni> mattip: thanks for the work you are doing on speed.pypy.org, it looks cool
forgottenone has quit [Quit: Konversation terminated!]
forgottenone has joined #pypy
oberstet has joined #pypy
<mattip> antocuni: note that our performance "went down" since I added sphinx and bm_mdp, which we are really bad at
<mattip> did we decide at some stage not to show them on the front page?
<antocuni> mattip: I don't really know/remember
<mattip> even if I remove those two benchmarks, we are "only" 5 times faster than cpython in that bar graph
<mattip> probably no big deal. Should I switch the view to benchmarker?
<antocuni> well, it would be interesting to know why we became slower I suppose
Tsundere_cloud has quit [Quit: Connection closed for inactivity]
<mattip> I don't know if it is because the original baseline was somehow too slow, but that machine is long gone
<antocuni> ah, so you are saying that it's CPython that got faster, not pypy that got slower?
<mattip> these are also 64-bit results, so there are a lot of variables to control for
<antocuni> right
<antocuni> and I am sure that the different cache size also has an impact
<mattip> I guess the conclusion is, as always, that benchmarks are inconsistent across machines/platforms/OS versions
<mattip> , what is important is the timeline changes
<antocuni> yes
<antocuni> by looking at the timelines, I don't see any obvious performance drop
<mattip> cpython is discussing a non-refcount garbage collector, if anyone has any opinions on what might work
oberstet has quit [Remote host closed the connection]
<antocuni> the biggest benefit is that people will stop complaining that their programs don't work on pypy because of the GC :)
<kenaan> fijal default 06e38bb00f7e /rpython/tool/jitlogparser/parser.py: Ignore strange entries
<cfbolz> mattip: Yay! Yes, please switch to the new system on the front page
<cfbolz> 5x sounds perfectly fine ;-)
<mattip> ok, switching
<kenaan> mattip py3.6 5c4489ede2be /lib_pypy/_cffi_ssl/_stdssl/__init__.py: fix merge
altendky has joined #pypy
dayton has joined #pypy
<dayton> Hi, is there a special way to declare variadic functions in cffi?
<dayton> When I write the declaration like I would in a C header file, the wrapper builds fine, but at import time in Python it throws an error about missing symbols for the variadic function
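(For reference, cffi's cdef() accepts a literal "..." for variadic functions, just as in a C header. Below is a minimal ABI-mode sketch along the lines of the printf example from the cffi documentation, not necessarily the API-mode setup dayton is building:)

    from cffi import FFI

    ffi = FFI()
    # the "..." marks the function as variadic, exactly as in a C header
    ffi.cdef("int printf(const char *format, ...);")
    C = ffi.dlopen(None)  # the standard C library (Linux/macOS)

    # arguments passed in the "..." part must be cdata objects with an
    # explicit C type, e.g. created with ffi.cast() or ffi.new()
    C.printf(b"the answer is %d\n", ffi.cast("int", 42))

(In API mode via set_source() the same "..." syntax is accepted; the missing-symbol error described above is not reproduced by this sketch.)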
antocuni has quit [Ping timeout: 245 seconds]
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-jit-win-x86-32/builds/4628 [mattip: force build, py3.6]
<bbot2> Started: http://buildbot.pypy.org/builders/jit-benchmark-linux-x86-64/builds/2607 [mattip: force build, 8d40471a2203]
zmt00 has joined #pypy
asmeurer has joined #pypy
alawrence has joined #pypy
<_aegis_> I have a weird workflow where I call a pypy function during a crash to dump all python threads
<_aegis_> but I've run into a case where the pypy lock is held during the crash and the app just hangs
<_aegis_> (I think maybe the crash happened in a pypy thread, possibly a pypy -> cffi -> C call)
<_aegis_> is there something I can do about this? I'd be fine not dumping the thread state in this case
<mattip> what would you like to happen?
asmeurer has quit [Quit: asmeurer]
EWDurbin has quit [Ping timeout: 276 seconds]
graingert has quit [Ping timeout: 252 seconds]
antocuni has joined #pypy
DRMacIver has quit [Ping timeout: 252 seconds]
cadr_ has quit [Ping timeout: 276 seconds]
dayton has quit [Ping timeout: 252 seconds]
asmeurer_ has joined #pypy
<alawrence> mattip: Do we have an equivalent of PyUnicode_FSDecoder in PyPy?
azrdev has joined #pypy
<azrdev> hi! I have a scikit-learn extension which I want(ed) to try on pypy (possibly to optimize for execution speed), but it seems like scipy, and by extension sklearn, are currently broken on pypy3-7.x
<bbot2> Success: http://buildbot.pypy.org/builders/jit-benchmark-linux-x86-64/builds/2607 [mattip: force build, 8d40471a2203]
<azrdev> http://packages.pypy.org/##scipy complains about missing libraries, but instead I get "ImportError: [...]venv/site-packages/numpy/core/_multiarray_umath.pypy3-71-x86_64-linux-gnu.so: undefined symbol: PyStructSequence_InitType2"
dayton has joined #pypy
<azrdev> File "/home/azrael/dev/pypy/env/site-packages/numpy/core/overrides.py", line 6, in <module>
<azrdev> from numpy.core._multiarray_umath import (
EWDurbin has joined #pypy
graingert has joined #pypy
DRMacIver has joined #pypy
cadr_ has joined #pypy
<mattip> azrdev: you need to use a HEAD of pypy3.6
<mattip> or numpy<1.16.3
<_aegis_> dunno, I either want pypy to not hang, or to be able to time out the lock or something on the call from C -> pypy during the crash
<_aegis_> (if the signal handler can't get the lock within say 250ms I want to just give up)
<_aegis_> I assume I'd have the same hang if I called sigaction from within pypy
<_aegis_> (right now I'm calling it from C)
<mattip> alawrence: we have interpreter.unicodehelper.fsdecode()
<mattip> which calls runicode.str_decode_mbcs
<azrdev> mattip: this is pypy3 from archlinux packages, calling itself Python 3.6.1 (784b254d669919c872a505b807db8462b6140973 / PyPy 7.1.1-beta0
<mattip> so `pip install numpy==1.16.2` should work
<mattip> before `pip install scipy`
rindolf has joined #pypy
<azrdev> just trying that
<rindolf> hi all! how can i fix this vmprof failure - https://www.shlomifish.org/Files/files/text/vmprof.txt
<mattip> sorry, vmprof.com is broken and unlikely to be revived. We are working on an offline viewer
<rindolf> mattip: ah
<rindolf> mattip: is there a pre release?
<mattip> cfbolz: ^^^
<rindolf> mattip: https://vmprof.readthedocs.io/en/latest/vmprof.html - this should be updated
<mattip> maybe you can host your own server, it is a django app from vmprof-server
<rindolf> mattip: i can try
<mattip> yup
<rindolf> mattip: thanks
<mattip> then "pypy -mvmprof ... --web-url ...
<mattip> to send the data to a different server
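(Spelled out, using the --web/--web-url options described in the vmprof documentation and a hypothetical local vmprof-server URL and script name, the invocation would look roughly like:)

    pypy -m vmprof --web --web-url http://localhost:8000/ myscript.py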
<mattip> btw, if you want to use a HEAD nightly build, I think `(cd lib_pypy; ../bin/pypy-c _ssl_build.py)` should be fixed for non-ubuntu distros
<mattip> on pypy2
<rindolf> mattip: i'm on py 3.7 and the pip command fails to install PyYAML
<cfbolz> I don't know about the status of the web frontend, more a question for fijal
<fijal> I don't think it works
<fijal> I struggled today
<fijal> like, it worked, then it stopped
<fijal> maybe we should fix all the messes....
<mattip> one at a time. speed.pypy.org is now functional
<rindolf> mattip: i got the web service to work but only see pypy internal funcs
<mattip> rindolf: I defer to the experts, I don't use vmprof myself
<rindolf> mattip: ah
<rindolf> mattip: what do you use instead?
<mattip> I am not profiling these days
<rindolf> mattip: ah
<fijal> I use vmprof and it's a mess
<rindolf> fijal: i see
<fijal> rindolf: I don't know, but I hope to fix something about it in the next few months
<fijal> don't hold your breath though
<rindolf> fijal: ah
<rindolf> fijal: is there a functional alternative?
<azrdev> mattip: thx, seems I now got a working pypy3.6 + scipy 1.3.0 + numpy 1.16.2 + scikit-learn 0.21.2
azrdev has left #pypy ["WeeChat 2.4"]
<fijal> rindolf: vmprofshow is sometimes useful
<rindolf> fijal: i am getting <unknown code>
forgottenone has quit [Ping timeout: 258 seconds]
<mattip> alawrence: looking at some of the win32 py3.6 lib-python failures, it seems this one turns up a lot
<mattip> UnicodeEncodeError: 'mbcs' codec can't encode characters in position 0--1: mbcs encoding does not support errors='surrogateescape'
<alawrence> mattip: What does that mean? Should this mbcs codec support a surrogate escape?
<mattip> no, it seems there is some fun and games around sys.getfilesystemencoding() and sys.getfilesystemencodeerrors()
<mattip> they are different on windows and linux, but pypy does not quite follow cpython
forgottenone has joined #pypy
<mattip> got it. In cpython/Python/pylifecycle.c, there is code to initialize the filesystemDefaultEncoding and filesystemDefaultEncodeErrors
<mattip> which is very different in windows and linux
<mattip> we only do part of this in module/sys/interp_encoding.py, and in module/sys/initpath.py
<mattip> the pypy_initfsencoding() should look more like cpython's initfsencoding, with a legacyFSEncodingFlag and more
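(For comparison, what CPython 3.6 reports, which is what pypy_initfsencoding would need to mirror; the values below follow PEP 529 and the CPython docs, and the snippet is just a quick way to check a given interpreter:)

    import sys

    # CPython 3.6 defaults:
    #   Linux/macOS: locale encoding (usually 'utf-8') with 'surrogateescape'
    #   Windows:     'utf-8' with 'surrogatepass', or 'mbcs' with 'replace'
    #                when PYTHONLEGACYWINDOWSFSENCODING is set (PEP 529)
    print(sys.getfilesystemencoding(), sys.getfilesystemencodeerrors())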
<mattip> gnite
rindolf has quit [Ping timeout: 246 seconds]
rindolf has joined #pypy
forgottenone has quit [Quit: Konversation terminated!]
lritter has quit [Quit: Leaving]
dayton has quit [Quit: Connection closed for inactivity]
dddddd has quit [Ping timeout: 272 seconds]
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
Ai9zO5AP has quit [Quit: WeeChat 2.4]
<kenaan> andrewjlawrence winconsoleio 01c16b67cba4 /pypy/module/_io/: Started implementing winconsoleio
<kenaan> andrewjlawrence winconsoleio d1f9ab0fd01b /pypy/module/_io/__init__.py: Add winconsoleio to init
<kenaan> andrewjlawrence winconsoleio 617ba4ae9ec9 /: Merged py3.6
<kenaan> andrewjlawrence winconsoleio c795c20ce4b8 /: Initial implementation of winconsoleio
dddddd has joined #pypy
<kenaan> rlamy optimizeopt-cleanup 5e0d762fe4fd /rpython/jit/metainterp/: Don't fallback to jumping to the preamble in compile_retrace(), since optimize_peeled_loop() already ta...
<kenaan> rlamy optimizeopt-cleanup 3df8ad2225a4 /rpython/jit/metainterp/: Don't pass the optimizer around unnecessarily
alawrence has quit [Ping timeout: 256 seconds]
altendky has quit [Quit: Connection closed for inactivity]
kipras has joined #pypy
Kipras_ has quit [Ping timeout: 246 seconds]
mt[m] has left #pypy [#pypy]