cfbolz changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://quodlibet.duckdns.org/irc/pypy/latest.log.html#irc-end ) | use cffi for calling C | if a pep adds a mere 25-30 [C-API] functions or so, it's a drop in the ocean (cough) - Armin
jacob22 has quit [Read error: Connection reset by peer]
jacob22 has joined #pypy
xcm is now known as Guest51059
Guest51059 has quit [Killed (verne.freenode.net (Nickname regained by services))]
xcm has joined #pypy
dansan has quit [Excess Flood]
jcea has quit [Quit: jcea]
dansan has joined #pypy
oberstet has joined #pypy
Ai9zO5AP has quit [Remote host closed the connection]
ionelmc has joined #pypy
oberstet has quit [Remote host closed the connection]
oberstet has joined #pypy
xcm is now known as Guest10594
Guest10594 has quit [Killed (karatkievich.freenode.net (Nickname regained by services))]
xcm has joined #pypy
<bbot2> Started: http://buildbot.pypy.org/builders/own-linux-x86-64/builds/8140 [mattip: test branch, cpyext-multiple-inheritance]
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-64/builds/6954 [mattip: test branch, cpyext-multiple-inheritance]
<bbot2> Failure: http://buildbot.pypy.org/builders/own-linux-x86-64/builds/8140 [mattip: test branch, cpyext-multiple-inheritance]
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-64/builds/6954 [mattip: test branch, cpyext-multiple-inheritance]
<bbot2> Started: http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-64/builds/6955 [mattip: broken nightly broke numpy CI]
<mattip> whoops. NumPy is using nightlies, and last night's messed up a numpy use of as_sequence.c_sq_item
<mattip> I added a PR to use the official release instead
<mattip> we should test some c-extension library ourselves once a week or so
<antocuni> mattip: is numpy using pypy nightlies?
<mattip> yes, for CI with pypy3.6 linux64. But not after today
<mattip> it is pretty easy, there is a script in the numpy repo under tools/pypy-test.sh
<antocuni> that's very cool
<antocuni> why not after today?
<mattip> it will still test with PyPy, but will replace the nightly with a release
<antocuni> ah ok, makes sense
<mattip> I am gently trying to push for an official pypy wheel
<antocuni> hooray
<mattip> together with wheels for aarch64, s390x, ppc64le, but it is stuck on manylinux problems with old glibc for the manylinux2014 standard
<mattip> glibc 2.17 has buggy exp routines, and the compiler in aarch64 has a bug with copysign that was fixed upstream but not released for centos7
<antocuni> ouch. Is there a way to fix manylinux where these kinds of issues arise?
<mattip> yes, pep 600. But now I need to push PEP 600 into manylinux and packaging
<mattip> here is my attempt to make the wheels
<antocuni> wonderful
<antocuni> official numpy wheels for pypy would be a game changer, and hopefully other projects would follow
<mattip> even better would be if it were fast; hpy will be the real game changer
<antocuni> true
tsaka__ has joined #pypy
lritter has joined #pypy
jacob22 has quit [Quit: Konversation terminated!]
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-jit-linux-x86-64/builds/6955 [mattip: broken nightly broke numpy CI]
<antocuni> arigo, cfbolz: looking both at the generated C code and at exceptiontransform.py, it seems that to check whether an exception has occurred after a call, we always do "if exc_data.exc_type is not NULL". Have we ever tried to do as CPython does and check the return value of the function instead?
tsaka__ has quit [Ping timeout: 265 seconds]
<antocuni> intuitively, the return value should be in a register and thus very quick to check, while checking exc_data requires reading memory (although probably in cache?)
<antocuni> but I would be surprised if nobody ever tried that
tsaka__ has joined #pypy
<arigo> antocuni: we don't have any special value to mean "an exception definitely occurred", though, for any of the types
<antocuni> uhm, true. Although with enough effort we could probably have one, at least for the common case in which we return RPython instances (which are never NULL, I suppose)
<arigo> they can be NULL (None in RPython)
<antocuni> ah
<antocuni> so, the question becomes: why did we decide to do that, instead of doing as CPython does? Was it a deliberate decision, or did it just happen?
<antocuni> doing as CPython does would mean having a special value for the RPython None
<antocuni> or we could return 0xffffffffffffffff instead of NULL, or things like that
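The two conventions being compared can be sketched conceptually in plain Python (all names here are invented for illustration; the real machinery lives in exceptiontransform.py and the generated C):

```python
# RPython's generated C checks a shared exc_data structure in memory after
# every call; CPython-style code instead checks the callee's return value,
# which typically sits in a register.

class ExcData:
    exc_type = None

exc_data = ExcData()          # plays the role of the global exc_data

SENTINEL = object()           # stand-in for NULL / 0xffff... error values


def callee_flag_style(x):
    if x < 0:
        exc_data.exc_type = ValueError   # signal out-of-band, in memory
        return 0                         # return value carries no signal
    return x * 2

def caller_flag_style(x):
    res = callee_flag_style(x)
    if exc_data.exc_type is not None:    # extra memory read after the call
        exc, exc_data.exc_type = exc_data.exc_type, None
        raise exc(x)
    return res


def callee_retval_style(x):
    if x < 0:
        return SENTINEL                  # the return value itself signals
    return x * 2

def caller_retval_style(x):
    res = callee_retval_style(x)
    if res is SENTINEL:                  # check the returned value directly
        raise ValueError(x)
    return res
```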
<arigo> we never tried, I guess. I'm sure there are annoying issues all along, though
<arigo> for example, many functions return void
<fijal> what would be the point?
<antocuni> fijal: speed?
<fijal> can we find a C example to show that the speed really differs?
<arigo> if we transform that into returning a flag, then it's a rabbit hole of issues, because suddenly we're changing the signature of functions, some of which must keep the original signature
<antocuni> fijal: well, my original question was exactly whether we already measured that there is no difference, or we don't know
<arigo> it's hard to know in advance, because small artificial C examples are probably not relevant
<arigo> so we'd need to try it out, measure, and maybe throw away the result
<antocuni> arigo: I agree it's a rabbit hole. I'm not proposing to do it, I was just curious
<arigo> so no, I'm pretty sure we never did that
<fijal> arigo: they are relevant in one way. if the small C examples show improvement then we don't know. But if they don't, then we do know
<arigo> fijal: no, not even
<fijal> not even?
<arigo> for example, maybe in large code bases we lose time by having the machine code be significantly larger, because of all the repetitions of "address of excdata"
<arigo> but in small code bases it doesn't show up
<fijal> ok
<fijal> so no point pretty much?
<antocuni> well, the fact that we introduced an "alloc_shortcut" which does exactly that for malloc&co makes me think that it was worth it
<arigo> you have a point. for malloc it was done because it was easy, I guess
<arigo> so maybe we can try to remove this shortcut, and translate a complete pypy
<arigo> and then compare
<antocuni> ah true, this looks easy enough
<arigo> if you try, do it in a no-jit version
<antocuni> sure
<antocuni> else we are not really benchmarking it :)
<arigo> well, of course we need to fix the jit too if we go down that route
<arigo> so to be more realistic, the first experiment should include the jit and somehow remove the malloc shortcut there too
<antocuni> true, but this starts to be more time consuming
YannickJadoul has joined #pypy
<antocuni> arigo: for now I'm simply doing a -O2 translation with alloc_shortcut forced to be False, let's see what happens
<cfbolz> arigo: not true, we did have that feature at some point. I think in the pre-rtyper c backend
<arigo> ah
<arigo> ok
<cfbolz> 'obviously' 😉
<cfbolz> arigo: we were still interoperable with the C API at that point, after all
lazka has quit [Ping timeout: 256 seconds]
lazka has joined #pypy
xcm has quit [Read error: Connection reset by peer]
xcm has joined #pypy
<fijal> I wonder if the results from back then and the results now will be similar
ekaologik has joined #pypy
Ai9zO5AP has joined #pypy
dddddd has joined #pypy
jcea has joined #pypy
BPL has joined #pypy
tsaka__ has quit [Ping timeout: 265 seconds]
<antocuni> I didn't even know we had a pre-rtyper backend
<cfbolz> antocuni: it was pre-anto ;-)
<antocuni> :)
<antocuni> so, here is the result of my test: https://paste.ubuntu.com/p/Kck8VGY424/
<antocuni> both versions compiled with -O2 (no jit). It seems that disabling the alloc_shortcut alone costs ~6%
<antocuni> so, it might be worth investigating whether to generalize this to all/most/many/some rpython functions
BPL has quit [Quit: Leaving]
JuxTApoze has joined #pypy
<JuxTApoze> o/ all
<JuxTApoze> is anybody here interfacing lisp through cffi? If so, can you recommend a general setup to get started so I can get acquainted with it and do some tests?
<JuxTApoze> I need some specific lisp optimizations
<JuxTApoze> thanks in advance
<antocuni> JuxTApoze: if you are asking questions about this project https://common-lisp.net/project/cffi/ this is the wrong place, sorry
<antocuni> this is the right channel to ask questions about this one, which is a library for interfacing C and Python: https://cffi.readthedocs.io/en/latest/
<JuxTApoze> no, I'm not, I know that's a different package
<JuxTApoze> I'm referring to https://pypi.org/project/cl4py/
<antocuni> but cl4py does not seem to be using cffi. I'm not sure I understand what you are asking :)
<mattip> it seems to talk to lisp via subprocess.Popen and pipes, not via a C-API at all
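The pipe-based pattern mattip describes can be sketched like this (the sbcl invocation in the comment is an assumption; check cl4py's sources for its actual protocol):

```python
import subprocess

def eval_via_pipe(argv, source):
    """Run an external interpreter, feed `source` on stdin, return stdout.

    This is the general pattern of driving an external Lisp through
    pipes rather than through a C API.
    """
    proc = subprocess.run(argv, input=source, capture_output=True, text=True)
    return proc.stdout.strip()

# With SBCL on PATH, something along these lines (flags are a guess):
#   eval_via_pipe(["sbcl", "--script", "/dev/stdin"], "(print (+ 1 2))")
```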
<JuxTApoze> right, I'm looking into whether I can interface cl4py through cffi or if I need to pick cffi or cl4py depending on what I'm processing...
<JuxTApoze> that's one of the questions I want to find the answer for
<mattip> the question is still not clear
<JuxTApoze> if it's not a common workflow/use case already then that's part of the q&a I'm looking for...
<mattip> do you have a shared object (dll, so) or C code that provides an interface you wish to reach from python?
<mattip> or do you have strings of lisp code you wish to execute?
<mattip> two different use cases, not at all interchangeable. Change my mind.
<JuxTApoze> we are doing pre/post processing for py-bullet, a physics engine
<JuxTApoze> right...I'm still defining the use case
<JuxTApoze> the key is I need some of lisp's strength in pre/post processing the dataset for bullet
<JuxTApoze> all this is being driven from python as the main control structure/manager
<antocuni> JuxTApoze: the key question is: can you write a C program which uses the lisp engine? Does common lisp offer a C API to be embedded into a C program?
<antocuni> if the answer is "yes", then you can probably use cffi to embed common lisp into a python program
<antocuni> if the answer is "no", then your question does not make sense
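To make the "yes" case concrete: this is the cffi embedding pattern, with libm's sqrt standing in for a hypothetical Lisp runtime's C entry points (an actual embeddable Lisp such as ECL would expose its own boot/eval functions, not shown here):

```python
from cffi import FFI

ffi = FFI()
# Declare the C functions you want to call; for a Lisp runtime these
# would be its initialization and evaluation entry points.
ffi.cdef("double sqrt(double x);")

# Load the shared library; libm is just a stand-in for the runtime's .so
libm = ffi.dlopen("m")

print(libm.sqrt(9.0))
```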
<JuxTApoze> the goal is to use an external SBCL (common lisp), which means the lisp code would be external to that subsystem
<JuxTApoze> but!... internal lisp also partially works and is useful.
<JuxTApoze> I'll continue to work on the use case so it is more clear to explain.
<JuxTApoze> thanks for the feedback. Sometimes just being asked to clarify the question helps with determining scope.
andi- has quit [Ping timeout: 244 seconds]
andi- has joined #pypy
<antocuni> arigo, YannickJadoul: who builds wheels for cffi? I thought it was using cibuildwheel, but I don't see any .travis.yml or azure-pipelines.yml in the repo
<YannickJadoul> antocuni: I don't know. I vaguely remember trying to sell cibw, but I'm not sure how that ended
<cfbolz> I have a vague memory that Alex_Gaynor does it
<Alex_Gaynor> cfbolz: reaperhulk and I used to do it, but we don't anymore afaik (we deleted the configuration because of that -- we can obviously restore it)
<antocuni> I'm trying to migrate the capnpy build system (https://github.com/antocuni/capnpy) from travis to azure, possibly using cibuildwheel, so I was looking for real-world projects using it
<Alex_Gaynor> antocuni: FWIW all the pyca/ stuff switched from azure->github actions, which seems better. I can share our configs for that if you want
<antocuni> Alex_Gaynor: care to elaborate why it's better?
<YannickJadoul> antocuni: Happy to help setting up cibuildwheel, if you have questions or something I could look at, though :-)
<Alex_Gaynor> antocuni: the config is a bit nicer, and it's a bit better integrated with github. They share a lot of code on the backend (since MS owns github), and AFAIK the github stuff is a bit preferred
<antocuni> Alex_Gaynor: thank you, this is helpful. Do they use the same underlying infrastructure to run the code? One thing which amazed me about azure pipelines is that they are much faster than travis
<Alex_Gaynor> antocuni: yes, the actual machines are all the same AFAIK
<antocuni> cool
<antocuni> I'll try that route, then
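A minimal cibuildwheel setup on GitHub Actions, of the kind discussed above, might look roughly like this (a sketch based on cibuildwheel's documented usage at the time; workflow name, action versions, and the wheelhouse path are illustrative):

```yaml
name: build wheels

on: [push, pull_request]

jobs:
  build_wheels:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - name: Build wheels
        run: |
          python -m pip install cibuildwheel
          python -m cibuildwheel --output-dir wheelhouse
      - uses: actions/upload-artifact@v2
        with:
          path: wheelhouse/*.whl
```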
<mattip> One thing to look out for if you have a large matrix of builds: you cannot retrigger github actions like you can azure pipelines
<mattip> there is no "rerun" other than to close and reopen the PR
<antocuni> ah, this is a drawback, indeed
<mattip> for a different perspective, I prefer the pipelines workflow, the monitoring and logging is more advanced
<mattip> neither has access to aarch64, ppc64le, or s390x machines; for those you pretty much have to go with travis
<mattip> there are other alternatives for aarch64
<Alex_Gaynor> the travis aarch64 is soooooooo sloooooow. we tried to use it, but we gave up.
<mattip> NumPy uses shippable.com for aarch64
<mattip> there is another service too
<mattip> ISTR travis uses qemu to provide aarch64 over x86
<antocuni> one thing that I like about azure pipelines is that it has pytest integration and a view which shows you the individual tests, like this one: https://dev.azure.com/pyhandle/hpy/_build/results?buildId=281&view=ms.vss-test-web.build-test-results-tab&runId=2604&resultId=100064&paneView=debug
<antocuni> does github actions have the same?
<Alex_Gaynor> Don't think so
<antocuni> so, azure pipelines wins so far 😅
<mattip> antocuni: you might want to take YannickJadoul up on their offer, cibuildwheel is the closest thing today to reproducible python builds
<antocuni> yes exactly, that's why I wanted to investigate it. But while I am at it, I also wanted to switch from travis to something else, so I started to investigate alternatives
<mattip> I recently switched MacPython/numpy-wheels from travis/appveyor to azure, it took ~3 days mostly around the upload/secrets parts on macOS
<mattip> it is nice that you can use bash scripts on the windows builds
<mattip> hint - do not try to use "set -e" on macOS, it subtly changes environment variable handling
<mattip> so when I tried to debug, things got worse
<mattip> it took so much time because the upload is the last thing after building/testing, so the cycles of changes were slow
jacob22 has joined #pypy
jacob22 has quit [Read error: Connection reset by peer]
jacob22 has joined #pypy
jacob22 has quit [Client Quit]
jacob22_ has joined #pypy
jacob22_ has quit [Read error: Connection reset by peer]
jacob22_ has joined #pypy
tsaka__ has joined #pypy
lazka has quit [Quit: bye]
marvin_ has quit [Remote host closed the connection]
marvin_ has joined #pypy
lazka has joined #pypy
jacob22_ has quit [Ping timeout: 240 seconds]
jacob22_ has joined #pypy
BPL has joined #pypy
otisolsen70 has joined #pypy
otisolsen70_ has joined #pypy
otisolsen70 has quit [Ping timeout: 244 seconds]
otisolsen70_ has quit [Quit: Leaving]
ekaologik has quit [Read error: Connection reset by peer]
Alex_Gaynor has joined #pypy
Alex_Gaynor has quit [Changing host]
oberstet has quit [Ping timeout: 260 seconds]
lritter has quit [Quit: Leaving]
yajadoul has joined #pypy
YannickJadoul has quit [Ping timeout: 260 seconds]
xcm has quit [Remote host closed the connection]
xcm has joined #pypy
BPL has quit [Quit: Leaving]
yajadoul has quit [Quit: Leaving]
tsaka__ has quit [Ping timeout: 265 seconds]
JuxTApoze has quit [Ping timeout: 246 seconds]
omry has quit [Remote host closed the connection]