<kenaan>
arigo cffi-char16-char32 0f1cd402bb00 /pypy/doc/whatsnew-head.rst: ready to merge
<kenaan>
arigo default 6a4af0b6b51c /: hg merge cffi-char16-char32 Support the char16_t and char32_t types in cffi. This means reintroducing some surroga...
buhman has joined #pypy
<buhman>
can/do pypy modules work in cpython?
arigato has quit [*.net *.split]
bjs has quit [*.net *.split]
avakdh has quit [*.net *.split]
cerealized has quit [*.net *.split]
wallet42 has quit [*.net *.split]
pchiusano has quit [*.net *.split]
<simpson>
Are you thinking of extension modules, or of parts of the PyPy runtime?
pchiusano has joined #pypy
avakdh has joined #pypy
cerealized has joined #pypy
bjs has joined #pypy
bjs is now known as Guest60366
<arigato>
buhman: pypy has no separate extension modules of its own, so we don't understand your question
<arigato>
mattip: :-(
<arigato>
fwiw, rvmprof/src/shared/ contains tons of functions that should have the 'static' keyword but don't. this can create random conflicts on some platforms
<arigato>
but I guess it's not the problem here
<arigato>
well, there are various problems with this C code, but again "fwiw"
<buhman>
arigato: yeah, that's what I mean
<arigato>
your goal was to ask a question that we don't understand? ok, success :-)
<buhman>
I haven't read the actual build process, but it seems like those compile into binary-like/builtin things, or at least it seems like it from inside the interpreter
<arigato>
ah, you want to use one of these "modules" with cpython? no, not possible, they are built-in into pypy
<buhman>
nah
<buhman>
I mean writing an extension module that works with both pypy and cpython using the language used in /modules
<arigato>
ah
<buhman>
has that been done before/is it possible?
<arigato>
ok, I see. no, it's not possible and not really recommended anyway
<buhman>
it seems very readable to me compared to the cpython equivalents
<arigato>
yes, indeed, but it's a bit "magical"
<arigato>
nowadays we recommend cffi as a tool that works on both cpython and pypy
<buhman>
doesn't that suggest writing most of the extension code in C, then using cffi as a lightweight wrapper?
<arigato>
it depends on how much performance you want. the approach is usually to write most of the code in Python, and to use cffi as a wrapper directly around existing C functions
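The cffi-as-a-wrapper approach arigato recommends can be sketched in ABI mode; this is an editorial illustration (not from the log), with libc's `strlen` standing in for a real library function:

```python
# minimal ABI-mode cffi sketch: declare an existing C function, then call it.
# requires the cffi package (bundled with pypy, pip-installable on cpython).
from cffi import FFI

ffi = FFI()
ffi.cdef("size_t strlen(const char *s);")  # declare the existing C function
C = ffi.dlopen(None)                       # dlopen(None) opens the C library

n = C.strlen(b"hello")                     # calls into libc directly
```

Most of the surrounding logic stays in Python; cffi only crosses the boundary for the C call itself.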
<arigato>
about writing RPython code like in pypy/module/, we used to think it would be a good idea to compile them either to pypy or to cpython, but we never really managed that
<arigato>
there are problems with rpython, like it's a bit of a mess to call external C functions; more annoyingly, in pypy it is *built-in*, meaning you need to retranslate a whole pypy to incorporate changes (so development must be very test-driven)
<buhman>
do you think a python/cffi standard library would have similar performance to the rpython one?
<arigato>
in pypy, possibly, yes
<arigato>
with some exceptions
<arigato>
notably, there are a few places where rpython is really better:
<arigato>
when writing "interpreter-like" execution
<arigato>
for example in the 're' module
<arigato>
with a few additional hints, the RPython source code of the 're' module turns into a JITting interpreter for free
<arigato>
that's what RPython is good for: to write auto-JITting interpreters
<arigato>
it's not that good if the goal is to write an interface to some C library
<arigato>
and yes, it is much less verbose than writing CPython C extension modules; but if the goal is to access a C library, then use cffi
<arigato>
to answer your original question more directly: if we were to try to turn RPython modules into modules usable by CPython, we'd hit the problem of the GC
<arigato>
the RPython source code assumes there is a good GC underneath
<arigato>
CPython doesn't have a good GC (and doesn't need one, mostly)
<buhman>
arigato: can't rpython modules targeting cpython use one of pypy's GCs instead?
<LarstiQ>
buhman: how would it interact with cpython?
<njs>
buhman: hello :-)
<fijal>
buhman: you could but having 2 GCs around is a neverending problem
<fijal>
we tried having the same sort of setup with a refcounting GC, but it made everything very slow
<buhman>
yeah, I just read that
<njs>
buhman: the other thing to keep in mind is that on pypy, python is generally faster than C
<fijal>
I think I would much rather write RPython module with C interface and call it with cffi if I really needed to
<njs>
buhman: (because python on pypy is fast, but also because C on pypy is slow)
<fijal>
njs: not C
<fijal>
njs: CPython extensions
<fijal>
C is plenty fast
marr has joined #pypy
<njs>
fijal: I have some reason to suspect that buhman is specifically wondering about implementing Python objects in C for speed
<fijal>
you can do that
<fijal>
you just use cffi for that
<fijal>
I've done it before, it's not a terrible idea
<fijal>
(you can sometimes JUST use cffi for compact storage and not use C)
<buhman>
njs: does pypy's threading/thread synchronization implementation have better performance characteristics than cpython's?
<fijal>
I mean, it's as terrible as any "let's optimize the hell out of it" idea
<buhman>
fijal: hah
<fijal>
buhman: somewhat
<fijal>
in some cases
<fijal>
it's also better in python 3
<fijal>
(for cpython not for pypy)
<njs>
buhman: simple thread spawn/join overhead seems to be pretty similar to cpython (which makes sense, there's not a lot to optimize there)
<arigato>
the GIL logic has been tweaked in pypy; I don't know if it's better or worse than CPython's, because it has to handle different cases, like calling a C function dozens of millions of times per second---which is just not possible with CPython to start with
<arigato>
so clearly, we can't release and reacquire a slow lock all the time
<arigato>
or more generally, if two threads do that, we can't have the GIL ping-pong between the two threads for every function call
<arigato>
if the question is about performance on pypy, then indeed the trade-offs can be very different from cpython's
<arigato>
starting a thread is just as expensive on pypy as on cpython
<arigato>
but doing bookkeeping to figure out when existing threads can be reused, that's much faster
<arigato>
and comments like "it's never going to do a single syscall dispatch as fast as a tight C implementation"
<arigato>
are mostly false with pypy
<buhman>
if that's the case, would it be pointless to wrap a C threadpool implementation?
<arigato>
not necessarily, if it's a nicely tuned one
<arigato>
but do it with cffi
<njs>
arigato: in that line I'm specifically thinking about asking a thread to run a single syscall and then get the result back – I feel like matching C in that case is going to be difficult due to the baseline in C being very low to start with, and then due to having to hand-off the GIL
<njs>
arigato: though obviously performance intuitions are useless etc :-)
<arigato>
njs: I would say that it's already much better in pypy than cpython
<njs>
arigato: C, not CPython :-)
<arigato>
yes
<njs>
interestingly, a trivial ThreadPoolExecutor microbenchmark (just executor.submit(lambda: None).result()) seems to run faster on CPython: 40.2 us +- 1.1 us, vs pypy 62.2 us +- 8.9 us
<arigato>
what is ThreadPoolExecutor?
<njs>
the standard library thread pool implementation, concurrent.futures.ThreadPoolExecutor
<arigato>
right, we didn't look at that
<arigato>
but what I had in mind was also:
<arigato>
you can write your custom thread in Python, with a Queue.Queue()
<arigato>
or, alternatively, use a C thread pool, and keep multithreading out of Python completely
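The custom-thread-plus-queue option arigato mentions (Queue.Queue in the Python 2 spelling used above) can be sketched like this; an editorial illustration, not code from the log:

```python
# sketch: a single worker thread fed through a queue, with a sentinel
# value (None) used to shut it down cleanly
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        fn = tasks.get()
        if fn is None:          # sentinel: shut the worker down
            break
        results.put(fn())

t = threading.Thread(target=worker)
t.start()
tasks.put(lambda: 2 + 2)        # hand work to the thread...
answer = results.get()          # ...and block until the result arrives
tasks.put(None)
t.join()
```

The same shape generalizes to a pool by starting several workers on the same `tasks` queue.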
<njs>
I doubt the ThreadPoolExecutor cpython vs pypy thing matters much. the annoying thing about async I/O is that often the operation you are scheduling only needs like 100 ns, but sometimes it needs 10 ms, so you end up paying 50 us to put it in a thread
<arigato>
njs: btw, your benchmark is wrong (at least with current pypy3)
<arigato>
I get 4.2s on CPython 3.5 and 1.4s on PyPy
<arigato>
PyPy3
<njs>
arigato: I was using a 3.5 nightly from a few weeks back, so huh
<njs>
what I did was literally: pyperf timeit -s 'import concurrent.futures; tpe = concurrent.futures.ThreadPoolExecutor(); f = lambda: None; tpe.submit(f).result()' 'tpe.submit(f).result()'
<arigato>
trying it with "pypy3 -m timeit", gives wildly varying results if run several times
<njs>
I get the same result as you from your script, though
<arigato>
the conclusion seems to be "don't use pyperf"... :-/
<arigato>
also, it seems to run 10000 iterations only, and sometimes even 1000
<arigato>
these are typically not enough for the JIT to warm up
<njs>
so an interesting thing about this is that it means that on pypy, using ThreadPoolExecutor is actually faster than spawning a new thread every time. On CPython it's ~the same :-) (at least on my linux laptop)
<arigato>
ok, then exactly what I guessed above :-)
<arigato>
note that there is a possible corner case with this kind of code spawning new threads constantly
<njs>
and 100,000 iterations, for CPython I get 4.056 sec for ThreadPoolExecutor, 4.215 sec for spawn/join; and for PyPy I get 1.425 sec for ThreadPoolExecutor, 2.158 sec for spawn/join
<njs>
(that's all wall clock times)
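The two variants njs is timing can be sketched as follows (editorial illustration; the discussion above ran 100,000 iterations of each, the small `n` here is just to keep the sketch quick):

```python
# the two shapes being benchmarked: reuse pool threads vs. spawn/join a
# fresh thread per call
import threading
from concurrent.futures import ThreadPoolExecutor

calls = []
f = lambda: calls.append(None)

def via_pool(n, tpe):
    for _ in range(n):
        tpe.submit(f).result()      # reuses the pool's threads

def via_spawn_join(n):
    for _ in range(n):
        t = threading.Thread(target=f)
        t.start()
        t.join()                    # a fresh OS thread every iteration

tpe = ThreadPoolExecutor()
via_pool(100, tpe)                  # the numbers above used n = 100,000
via_spawn_join(100)
tpe.shutdown()
```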
<njs>
arigato: oh?
<arigato>
the problem is that t.join() doesn't wait for the actual thread to stop
<arigato>
the thread may stop only slightly afterward
jamescampbell has quit [Remote host closed the connection]
<arigato>
so in theory, you could end up with lots of threads that are almost finished, but still all exist at the same time
<arigato>
given that you can't have more than a few hundred threads on 32-bit, there is a slight risk, maybe
<njs>
currently in trio I do the spawn/join thing because it's *so* much simpler than dealing with thread pools and the cost is pretty minimal
<njs>
huh. I'm surprised to hear that - is this a weird quirk of pthread_join, or a weird quirk of python?
<arigato>
of cpython
<arigato>
though I guess pypy has the same one
<arigato>
.join() in python doesn't call pthread_join()
<njs>
are there like, any primitives in CPython's threading abstraction that aren't implemented in a really weird way?
<arigato>
in fact you'll find that pthread_join() is never called by cpython
<arigato>
I don't know :-/
<njs>
though, ugh. actually, now that you mention it, the way I synchronize to avoid spawning too many threads is also subject to this race condition even if python handled it correctly. Though... I guess it would be easily fixable if .join() worked :-)
<arigato>
yes, but there is no fix at all
<arigato>
that's also because all threads inside python are detached, never joinable
<njs>
I was just grepping to check for that :-)
<arigato>
(from the point of view of pthread)
<arigato>
if you organize your framework like that, are performance issues still too important?
<arigato>
you call some system call from the main thread
<njs>
...even fixing this with a thread pool does not seem trivial, given that I would want a thread pool of unbounded size
<arigato>
but you use SIGALRM or something to interrupt the syscall regularly
<njs>
that would be, umm. a very different framework.
<arigato>
so if the function is fast, it's unlikely it will hit SIGALRM; and if it's slow, the syscall will return with EINTR, and then you delegate to a thread
<njs>
oh, I see, you just mean specifically for like a millisecond while submitting an I/O operation
<arigato>
yes
<njs>
so the obvious problem is that windows doesn't have signals :-)
buhman has quit [Ping timeout: 255 seconds]
<arigato>
or simply keep the SIGALRM ticking every ms, and do nothing; just check for EINTR
<arigato>
yes, I'm guessing it's not a very good and general solution either :-)
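arigato's SIGALRM bail-out can be sketched in pure Python; note that since PEP 475 (Python >= 3.5) `os.read` retries EINTR automatically unless the signal handler raises, so this sketch raises from the handler to force the slow path. Unix-only, editorial illustration:

```python
# "try inline, delegate if slow": arm a timer, attempt the syscall, and
# bail out if the alarm fires before it completes
import os
import signal

class TooSlow(Exception):
    pass

def on_alarm(signum, frame):
    raise TooSlow                   # raising is what interrupts os.read

signal.signal(signal.SIGALRM, on_alarm)

def read_or_delegate(fd, n, timeout=0.01):
    signal.setitimer(signal.ITIMER_REAL, timeout)
    try:
        return os.read(fd, n)       # fast path: finishes before the alarm
    except TooSlow:
        return None                 # slow path: caller hands off to a thread
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)   # cancel the timer
```

As the discussion notes, this only works on the main thread of a Unix process, and not through the stdlib `io` layer, which swallows EINTR itself.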
<njs>
oh, and the other obvious problem is that if the idea is to let us re-use the stdlib io module, then that internally checks for and ignores EINTR :-)
<arigato>
yes
<arigato>
you need to use cffi to call the C functions directly
<arigato>
again, this approach is probably suffering from the same issues that made you drop the async I/O C API
<arigato>
I think that the best solution for pypy is the same as the best solution for C
<njs>
I wonder how reliable signals actually are at interrupting blocking I/O on unixes
<arigato>
write a thread pool and handle all the associated messes
<lukasa>
I just want to say to you two that this is maybe the most horrifying conversation I've ever read, and I'm delighted that this is not my problem.
<njs>
I was also pondering a thread pool API like: submit work, then spin for, I dunno, 10 microseconds or so to see if it finished, and then give up and switch to doing something else
<arigato>
lukasa: :-)
<njs>
lukasa: hah
<arigato>
njs: note that spinning in Python makes little sense, given that it will spin with the GIL (unless waiting for progress from a purely-C thread)
<njs>
I think this might require implementing the thread pool in C (or Rust or whatever) though, because you really need to drop the GIL while spinning
<njs>
heh, jinx
<arigato>
:-)
<arigato>
I doubt spinning in Python is ever a win, anyway
<arigato>
but that's easy to check case-by-case
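The submit-then-spin-briefly API njs describes can be given a shape like the following; as arigato notes, spinning in pure Python still holds the GIL, so this editorial sketch only illustrates the interface, not a real optimization:

```python
# submit work, busy-wait a tiny budget for it to finish, otherwise hand
# the future back so the caller can go do something else
import time
from concurrent.futures import ThreadPoolExecutor

def submit_with_spin(tpe, fn, *args, budget=10e-6):
    """Submit fn and spin for up to `budget` seconds waiting for it."""
    fut = tpe.submit(fn, *args)
    deadline = time.monotonic() + budget
    while time.monotonic() < deadline:
        if fut.done():
            return True, fut.result()   # finished fast: handle inline
    return False, fut                   # still running: switch tasks
```

A C (or Rust) version would release the GIL for the spin loop, which is the part that could make it an actual win.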
<njs>
yeah, and this is really borrowing trouble from the future anyway
<arigato>
ah, I see, the goal would be to avoid switching to a different Python thread if the result shows up very quickly
<arigato>
fwiw, this should be automatic in PyPy
<njs>
arigato: ah, more like the goal would be to avoid the overhead of switching to a different task (~= green thread) if the operation doesn't actually block
<arigato>
ah, you're using stacklets
glyph has quit [Quit: End of line.]
<njs>
re-entering the I/O loop from a thread is quite a complex operation
<njs>
arigato: well, async/await, but same idea
glyph has joined #pypy
<arigato>
ah ok
<njs>
anyway, the thought is fairly vague and it will be some time before the issues could even possibly matter for me, so I should probably leave it there for now
<arigato>
with stacklets I could vaguely imagine completely crazy hacks
<njs>
but this pthread_join thing may keep me up a bit :-(
<arigato>
if anything, that's a good reason for writing or reusing a thread-pool written in C
antocuni has joined #pypy
oberstet has joined #pypy
<njs>
I guess with sufficient elbow grease I could implement my own thread.start() and thread.join() with cffi :-)
cstratak has joined #pypy
cstratak has quit [Remote host closed the connection]
<njs>
or... hmm. actually, probably not, because I don't think there's any way to pass through the correct threadstate, so, you know, subinterpreter support wouldn't work. clearly subinterpreter support is the main objection to this plan :-)
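The "implement my own thread.start()/thread.join()" idea is at least mechanically possible; here is an editorial sketch using stdlib ctypes instead of cffi (the same shape works with cffi's dlopen), Unix-only, and with njs's tstate/subinterpreter caveat still standing:

```python
# spawn a raw pthread and join it with a real pthread_join, which (as
# noted above) CPython itself never calls
import ctypes

# dlopen(NULL): the running process, which already links the pthread symbols
lib = ctypes.CDLL(None, use_errno=True)

START = ctypes.CFUNCTYPE(ctypes.c_void_p, ctypes.c_void_p)

results = []

@START
def worker(arg):
    results.append(42)   # ctypes acquires the GIL around callbacks into Python
    return None

tid = ctypes.c_ulong()   # pthread_t is an unsigned long on Linux
rc = lib.pthread_create(ctypes.byref(tid), None, worker, None)
assert rc == 0
rc = lib.pthread_join(tid, None)   # an actual, blocking pthread_join
assert rc == 0
```

Unlike `threading.Thread.join()`, this waits for the OS thread itself to terminate, which is exactly the property the race discussed above needs.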
<arigato>
right, but then likely the most performant result to implement e.g. "read()" would be as a cffi function, which does the queueing and wait-for-a-short-while in C...
antocuni has quit [Ping timeout: 260 seconds]
<arigato>
ah well, subinterpreters in cffi kind of work too
<arigato>
you'd have one thread pool for all interpreters
<arigato>
which is probably what you want anyway
<njs>
yeah, that would be fine
<arigato>
but then indeed the logic to start threads needs to be written in C
<njs>
I was thinking about options that didn't involve adding native code to the package, because that's a hassle that I've managed to avoid so far
<arigato>
and stored in some C globals
<arigato>
right
<njs>
but correctly starting up a cpython thread (or correctly scheduling a cpython function onto a thread that's shared across subinterpreters) requires explicitly passing through the tstate from the parent thread
<arigato>
that's not the right approach with cffi, but what the right approach is, I'm not sure
<njs>
I mean, somehow that's what has to happen IIUC; no idea what cffi's API for doing it would look like, if it existed :-)
glyph has quit [Quit: End of line.]
<arigato>
I had in mind a thread pool where each thread would only run simple C functions, so not run any Python code at all
glyph has joined #pypy
<arigato>
from Python's point of view there are no threads
<njs>
that's the second time glyph dropped, I keep feeling like we're scaring him off with this conversation (which would be entirely reasonable)
<arigato>
:-)
<njs>
arigato: ah, right, that is another way to do it
<arigato>
yes, I'm just throwing out ideas without really knowing what would work and what wouldn't
<kenaan>
arigo default 07257b27db4b /pypy/module/cpyext/: Fix: 'tp_descr_get(self, NULL, type)' used to give a real undefined value for the second argument if implemented in...
<kenaan>
arigo default 4a138a88b24a /pypy/module/cpyext/: tp_descr_get on built-in types
<kenaan>
arigo default 7ca42aeab616 /pypy/module/cpyext/: tp_descr_set on built-in types
<nimaje>
I don't think pypy3 should segfault if you use ThreadPoolExecutor, https://ptpb.pw/F08T.py that segfaults for me in most cases on freebsd 11 with pypy3 5.7.1 (with lower N segfaults get less common)
jcea has joined #pypy
jacob22_ has quit [Quit: Konversation terminated!]
<kenaan>
plan_rich default cf5d41cbe737 /rpython/rlib/rvmprof/src/shared/: copy over revision 8426aa942feecfa48d92952654e91788248655b8 (includes several pull request, such as """real tim...
Rhy0lite has joined #pypy
antocuni has joined #pypy
arigato has joined #pypy
<kenaan>
plan_rich default 5a98d3aa0153 /: adjust _vmprof.enable parameters to carry real_time over to the vmprof C library
Taggnostr2 has joined #pypy
Taggnostr has quit [Ping timeout: 260 seconds]
<arigato>
nimaje: I don't get any segfault with a more recent pypy3 than 5.7.1
<nimaje>
yay, pypy/rpython/rlib/rvmprof/src/shared/machine.c errors on *BSD and the message "Unknown compiler" is really constructive (better would be "Unknown platform" or something like that)
<kenaan>
plan_rich default 82f30247c9bb /: fix tests and scatter real_time parameter to other functions needed
<arigato>
either that can be explained, or we give up on lto for now I suppose
<antocuni>
yeah, I suppose
<antocuni>
note that I don't know anything about that issue, apart what I read from the comments in the PR
<arigato>
right, and I only saw mattip commenting here very early this morning
<arigato>
(thanks plan_rich for looking, btw)
<arigato>
(maybe our first real bug report to gcc!)
<antocuni>
I agree that if we don't find a reasonable explanation, it makes LTO a bit scary
<arigato>
(well, ignoring __seg_gs, for stm)
kipras`away is now known as kipras
<arigato>
I fear it is some combination of issues, not even "just" lto
<arigato>
or maybe, more likely, if we look very very carefully at some code in rvmprof/src/shared/, we'll see a detail that is officially "undefined behaviour", and so gcc happily compiles something else as an infinite loop
<arigato>
I wouldn't be too surprised because some of this C code is full of "issues", with quotes
<arigato>
to start with, a line like (void)read(fd, ...); is almost always buggy
<njs>
fortunately there's a simple procedure for identifying undefined behavior
<arigato>
but there are also casts from long to int that can really lose information, etc.
<arigato>
and tons of functions and a few global variables that are all not defined with "static"
<arigato>
njs: is there?
<njs>
yeah. you file an issue in the gcc bugzilla and then someone closes it as invalid
<arigato>
hah
<antocuni>
makes sense :)
<bremner>
bonus points if you mention the code works in clang
<mattip>
it seems the release should either disable vmprof, or lto or both :(
<njs>
wow yeah this code is umm. maybe you should go through and replace all the read()'s with a read_exactly() helper and see if that fixes the lto bug :-)
<antocuni>
what is the performance improvement given by lto?
<mattip>
I tried with read_word, that did not help
<plan_rich>
arigato, uh. I just added the real_time parameter
<arigato>
ok, so I should try with an older version? which one?
<plan_rich>
8db59bdb
<plan_rich>
(which is 0.4.7)
<antocuni>
mattip: this link seems to be about PGO, not LTO
<mattip>
antocuni: ahh, right. Let me look for the correct link
<arigato>
ok, then, it works for me
<arigato>
gcc (GCC) 6.3.1 20170306
rmesta has joined #pypy
<arigato>
if you're testing on bencher4, note that it has an old gcc, where I guess LTO was new
<mattip>
arigato: I am using gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4)
<arigato>
the pressing question is: the buildslave on bencher4 uses gcc (Ubuntu 6.2.0-3ubuntu11~14.04) 6.2.0 20160901
<arigato>
so does this version work or not?
<antocuni>
arigato: I noticed the bug using a nightly build
<arigato>
ok
<antocuni>
which I supposed was built on bencher
rmesta has left #pypy [#pypy]
<arigato>
fwiw, I'm also getting "could not load libunwind at runtime. error: libunwind.so: cannot open shared object file: No such file or directory" with a trunk, but not with a slightly older one
<antocuni>
arigato: I confirm that the bug is present with the latest nightly
<arigato>
googling around, it seems there are a few bugs in gcc 6.2.x about LTO
yuyichao has quit [Ping timeout: 246 seconds]
<plan_rich>
ok, in any case I made the code in symboltable slightly better
<kenaan>
plan_rich default 9a792fd023fd /rpython/rlib/rvmprof/src/shared/symboltable.c: introduce read_exactly as proposed by njs, handles EINTR
<arigato>
plan_rich: "slightly"
<arigato>
EINTR needs a loop
<arigato>
and most of the places that call read_exactly() could be sent into an infinite loop after a read failure
<arigato>
(not an infinite *empty* loop I think)
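The `read_exactly` pattern arigato is asking for (retry short reads, loop on EINTR, fail loudly instead of ignoring the return value) looks like this in Python terms; an editorial sketch, and note that Python >= 3.5 already retries EINTR itself per PEP 475, so the `except` branch mainly mirrors what the C code must do explicitly:

```python
# read exactly n bytes or raise: no silently-ignored return values, no
# uninitialized buffers on a short read
import os

def read_exactly(fd, n):
    buf = b""
    while len(buf) < n:
        try:
            chunk = os.read(fd, n - len(buf))
        except InterruptedError:   # EINTR: just retry
            continue
        if not chunk:              # EOF before n bytes is a real error
            raise EOFError("short read: %d of %d bytes" % (len(buf), n))
        buf += chunk
    return buf
```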
<mattip>
my memory fails me, could it be that mjacob added -flto on March 4 (commit 7331592bdd2b) with no benchmarks and no fanfare?
<arigato>
it's likely, it was during the sprint I think or immediately afterwards
<arigato>
we checked that it seemed to work during the sprint
<mattip>
then speed.pypy.org won't show the commit since it's on a branch :(
marr has joined #pypy
<arigato>
it's the branch merge that you need to look at
<mattip>
march 8, maybe
<nimaje>
no idea why clang fails to read object files it created, I will try to translate with gcc now (testing_1.o: file not recognized: File format not recognized)
<plan_rich>
arigato, hm, the places where read_exactly fails will bail out of the function vmp_scan_profile, because cannot_read_profile != 0
yuyichao has joined #pypy
<arigato>
plan_rich: yes, but in the meantime it can cause lseek() to be called with random uninitialized offsets
<plan_rich>
arigato, right.
<arigato>
but indeed it should be just a few syscalls like that (whose errors are ignored anyway)
<arigato>
gdb seems entirely useless on a large -flto program, or maybe it's something specific to pypy
<arigato>
it gets into 100% cpu for at least minutes (didn't try to wait for longer so far)
<arigato>
I think that is enough to make me vote for disabling -flto as an immature technology for now
<arigato>
otherwise, the next obscure pypy crash might get really REALLY hard to track
<arigato>
I think gdb is trying to load symbols (from a pypy built with -flto and all normal options, plus "-g")
<arigato>
that looks likely, actually
<arigato>
I remember that gdb takes maybe 5-10 seconds to load the symbols from one .c from pypy
<arigato>
now it's probably trying to load *all* of them because of the way it was compiled
<arigato>
400 files, that would make an estimate of roughly half an hour to one hour
<arigato>
assuming that it scales linearly, which I bet it doesn't *at all* because of some linear walking or something
<jiffe>
so I have /usr/lib64/libpython2.6.so.1.0 but no libpython2.6.so, how can I link against this?
<arigato>
that's a question for distutils, I think
<jiffe>
well I need to be able to link against that from the cffi build script
<arigato>
i.e. you have the same problem for any "setup.py" building a cpython module
<arigato>
yes, but I'm saying that I don't feel responsible for all the obscure distutils-specific issues, and also I don't know them very well
<jiffe>
I see
<arigato>
I would ask around on #python for example, as you're likely to hit the same problem for anything not related to cffi that needs to compile CPYthon C extension modules
<jiffe>
ok thanks
vkirilichev has quit [Remote host closed the connection]
<arigato>
...also, btw:
<arigato>
jiffe: are you sure you need to link against libpython*.so*?
<arigato>
a CPython C extension module is typically not linked against that
<jiffe>
arigato: the cffi build process is inserting that itself, I don't have that library listed anywhere in my script
<arigato>
uh
<arigato>
it shouldn't
<arigato>
maybe it's distutils that inserts that, for some reason I don't understand
<arigato>
(there is no "libpython" or anything like that in the source code of cffi)
<jiffe>
yeah, if I was linking against it myself I could fix this via -l:libpython2.6.so.1.0 as I'm doing with the other libraries
<arigato>
maybe it's line 760 of distutils/command/build_ext.py?
<arigato>
template = "python%d.%d"
<arigato>
it seems to actually link with libpython2.6.so
<jiffe>
yeah I see what you're saying
<jiffe>
I might be able to do something with that Py_ENABLE_SHARED
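jiffe's `-l:` workaround can be applied from a cffi build script, since `set_source()` forwards extra keyword arguments to distutils' `Extension`; an editorial sketch in which the module name, header, and declared function are all placeholders:

```python
# build-script fragment: name the versioned libpython explicitly on the
# link line instead of relying on the missing .so symlink
from cffi import FFI

ffibuilder = FFI()
ffibuilder.cdef("int foo(int);")            # 'foo' is a placeholder
ffibuilder.set_source(
    "_example",                             # hypothetical module name
    '#include "foo.h"',
    # forwarded to distutils' Extension:
    extra_link_args=["-l:libpython2.6.so.1.0"],
)
```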
<arigato>
mattip: ok, I waited for half an hour for symbols to load before giving up. I think that's enough to make the next obscure issue entirely undebuggable
<arigato>
mattip: so I think we should "fix" the vmprof issue by removing -flto, also in the release 5.8
jamescampbell has quit [Read error: Connection reset by peer]
<mjacob>
mattip: armin fixed a few bugs and merged the branch after the sprint
<arigato>
indeed, though this branch also contains other things
amaury has joined #pypy
<arigato>
now we're annoyed that -flto cannot be removed without losing a little bit of performance
<mjacob>
i should have done it on a separate branch maybe, but i think you were aware that you also merged the -flto change, right?
<arigato>
but I'm still strongly +1 for removing it anyway
<arigato>
mjacob: yes
<arigato>
I documented it explicitly in the merge, c05892e069b0
<mjacob>
arigato: are there any issues other than that gdb is very slow?
jamescampbell has quit [Quit: Leaving...]
<arigato>
mjacob: yes, at least gcc 6.2.0 gives an apparent miscompilation (but making sure it's really a miscompilation is hard, because gdb cannot be used)
jamescampbell has joined #pypy
<mjacob>
sorry that i didn't read all of the log
<mjacob>
was this miscompile fixed by a later version of gcc?
<arigato>
sure, np
<arigato>
yes, at least it doesn't show up on my 6.3 arch linux
<arigato>
it may be an unrelated difference, of course
<mjacob>
how can i reproduce it?
<arigato>
well, you can *attempt to*, by translating pypy, current trunk, and then using the few lines of code in:
<arigato>
gcc thinks a version of `read` with constants from this call site (which is `read(fileno, &chars, sizeof(long))`) would be cool to have
<arigato>
I'm unsure where it fishes the definition of `read` from
<arigato>
inside `unistd.h` there is only: `extern ssize_t read (int __fd, void *__buf, size_t __nbytes) __wur;`
<mjacob>
i'm surprised that, even with lto, something from libc gets inlined
<arigato>
yes, me too
<mjacob>
i'll be back after a long walk while gcc 6.2 & 6.3 compile
pilne has joined #pypy
mattip has left #pypy ["bye"]
oberstet has quit [Ping timeout: 260 seconds]
<nimaje>
arigato: with a newer pypy3 the ThreadPoolExecutor segfault seems not to occur
<arigato>
uh, "cool" I guess
jamescampbell has quit [Remote host closed the connection]
<nimaje>
well, I translated first with CC=gcc (without that it failed as clang could not recognize some .o file), tested with that pypy3, then cleared the usession*/testing_1 dir with make clear, removed the CC line and -flto, did `make` and tested from that dir (hope that was ok)
<arigato>
yes, sounds fine
tbodt has joined #pypy
<nimaje>
ok, now I know why building from ports works fine, but directly cloned from hg not, the port deactivates lto as it is not supported by ld on freebsd :)
<nimaje>
and why does lib-python/2.7/distutils/sysconfig_pypy.py use gcc/g++ instead of cc/c++ (in _init_posix() )?
vkirilichev has joined #pypy
[Arfrever] has joined #pypy
oberstet has joined #pypy
Tiberium has joined #pypy
mattip has joined #pypy
higga has joined #pypy
higga has left #pypy [#pypy]
jamescampbell has joined #pypy
Rhy0lite has quit [Quit: Leaving]
<kenaan>
mattip default c6a3a65d4975 /rpython/: move lto to build option, default is False since some gcc versions produce bogus code
<plan_rich>
uh, I should include errno.h to that file...
<kenaan>
plan_rich default beebaffe3671 /rpython/rlib/rvmprof/src/shared/symboltable.c: fix for translation, I should have included errno.h. translation started annotation
jamescampbell has quit [Read error: Connection reset by peer]
kolko has quit [Ping timeout: 240 seconds]
jamescampbell has joined #pypy
kolko has joined #pypy
jamescampbell has quit [Read error: Connection reset by peer]
jamescampbell has joined #pypy
<arigato>
in my opinion, after looking with objdump, it's a bona fide gcc bug
<arigato>
maybe already resolved in 6.3.1
<kenaan>
mattip default 66ce5daf5ff9 /pypy/doc/: clean up many sphinx warnings, update copyright and version
jamescam_ has joined #pypy
jamescampbell has quit [Read error: Connection reset by peer]
lritter has joined #pypy
vkirilichev has quit [Remote host closed the connection]
<mattip>
arigato: I guess the commits around tp_descr_set, tp_descr_get should be in the release, right?
<arigato>
maybe
<arigato>
yes
<mattip>
ok, thanks for that btw
<arigato>
you removed -flto, right? then the release doc needs to be updated
<mattip>
correct
<kenaan>
mattip release-pypy3.5-5.x bd5301bc87ef /rpython/: move lto to build option, default is False since some gcc versions produce bogus code (grafted from c6...
<kenaan>
mattip release-pypy2.7-5.x df2a7a142dd4 /rpython/: move lto to build option, default is False since some gcc versions produce bogus code (grafted from c6...
<kenaan>
arigo release-pypy2.7-5.x 59850276ad66 /pypy/module/cpyext/: Fix: 'tp_descr_get(self, NULL, type)' used to give a real undefined value for the second argument if im...
<kenaan>
arigo release-pypy2.7-5.x 762ce5327b5c /pypy/module/cpyext/: tp_descr_get on built-in types (grafted from 4a138a88b24adf195f546cf96aac14a96dde4648)
<kenaan>
arigo release-pypy2.7-5.x 57722d241bf1 /pypy/module/cpyext/: tp_descr_set on built-in types (grafted from 7ca42aeab616fee1898cd0b2bdaef48be7825e3b)
<kenaan>
arigo release-pypy3.5-5.x b746b3cc0b2a /pypy/module/cpyext/: Fix: 'tp_descr_get(self, NULL, type)' used to give a real undefined value for the second argument if im...
<kenaan>
arigo release-pypy3.5-5.x 932fcb68bd70 /pypy/module/cpyext/: tp_descr_get on built-in types (grafted from 4a138a88b24adf195f546cf96aac14a96dde4648)
<kenaan>
arigo release-pypy3.5-5.x a39af0be3a22 /pypy/module/cpyext/: tp_descr_set on built-in types (grafted from 7ca42aeab616fee1898cd0b2bdaef48be7825e3b)
<arigato>
(I did not check that tp_descr_xxx works on py3.5, but I guess it does)
<mattip>
arigato: the lto documentation made it into 66ce5daf5ff9
<mattip>
beta, buyer beware
vkirilichev has joined #pypy
jamescampbell has joined #pypy
jamescam_ has quit [Read error: Connection reset by peer]
jamescampbell has quit [Read error: Connection reset by peer]
jamescampbell has joined #pypy
<kenaan>
plan_rich vmprof-0.4.8 a4f077ba651c /: implement stop/start_sampling and copy over new implementation from vmprof-python.git 6142531a47d6c294b1fd...
jamescampbell has quit [Ping timeout: 246 seconds]