antocuni changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://botbot.me/freenode/pypy/ ) | use cffi for calling C | "PyPy: the Gradual Reduction of Magic (tm)"
tormoz has joined #pypy
ronan has joined #pypy
marky1991 has quit [Remote host closed the connection]
awkwardpenguin has quit [Remote host closed the connection]
<kenaan>
amauryfa py3.6 cd17bd421135 /pypy/module/_io/interp_io.py: Match CPython error message
<kenaan>
amauryfa py3.6 dbcd63982736 /pypy/module/_warnings/: warnings.warn() now has a "source" argument. In the future, ResourceWarnings might log the traceback where this o...
dddddd has joined #pypy
amaury has quit [Quit: Konversation terminated!]
oberstet has quit [Ping timeout: 248 seconds]
flok420 has joined #pypy
<flok420>
anyone experienced a 10x slowdown when using pypy on a raspberry pi?
marr has joined #pypy
awkwardpenguin has joined #pypy
marky1991 has quit [Ping timeout: 264 seconds]
awkwardpenguin has quit [Ping timeout: 252 seconds]
<cfbolz>
flok420: what code are you running?
<cfbolz>
Is the same code also slow on x86?
<flok420>
cfbolz: it is a chess program. on x86, the pypy version is faster. not a lot but definitely faster
<fijal>
Attack is interesting, but it also contains lots of details about branch predictor
<cfbolz>
flok420: thanks
dddddd has quit [Remote host closed the connection]
oberstet has joined #pypy
tomv has joined #pypy
<tomv>
hi. what are my chances of getting an extension that uses PyFrameObject's f_back to work, if I try to imitate f_code but don't have a clue?
<arigato>
flok420, cfbolz: testing it in an emulated old raspberry, but there, I see that the JIT tracing takes longer than the whole CPython time
<arigato>
10 seconds for only 250 traces
<arigato>
so maybe the warm-up time is just incredibly long
<fijal>
tomv: you need to use transparent proxies
<arigato>
I thought that transparent proxies were not relevant any more?
<tomv>
I can't find that in pypy/module/cpyext/frameobject.py and the documentation page seems not to encourage using it...
<arigato>
we could certainly support f_back, but that's annoying performance-wise because it means we need to emulate the whole chain of frames whenever someone asks for a PyFrameObject
<tomv>
hm.
<arigato>
how does your extension obtain a PyFrameObject in the first place?
<tomv>
via PyThreadState
<tomv>
->frame
<tomv>
and then it tries to traverse to the bottom using f_back
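(A Python-level sketch of what tomv's extension does at the C level: walking the frame chain from the current frame down via f_back. `sys._getframe` is the Python-level counterpart of reading PyThreadState->frame; the `frame_chain`/`outer` function names are illustrative, not from the log.)

```python
import sys

def frame_chain():
    # Walk from the current frame outward via f_back -- the
    # Python-level equivalent of traversing the C-level
    # PyFrameObject chain that the extension relies on.
    names = []
    f = sys._getframe()
    while f is not None:
        names.append(f.f_code.co_name)
        f = f.f_back
    return names

def outer():
    return frame_chain()

chain = outer()
# innermost frame first: frame_chain, then its caller outer
assert chain[0] == "frame_chain"
assert chain[1] == "outer"
```

This is exactly the chain arigato says cpyext would have to emulate in full whenever someone asks for a PyFrameObject.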
<fijal>
arigato: they sort of work for some things
<fijal>
(like emulating frames and tracebacks)
<fijal>
they're actively used to fake tracebacks in few projects
<arigato>
tomv: but PyThreadState doesn't have a frame attribute in pypy's cpyext
<flok420>
arigato: well, further down the road things are also slower. in my example in the bug report I showed the times after things had started. also in my tests, which run for 1-2 minutes, there's a 4x slowdown to be seen
tayfun26 has joined #pypy
<arigato>
flok420: I'm testing with 400 seconds, and there the times seem to converge
<arigato>
my emulator is slower than your hardware, and since it's an emulator, real times should not really be relied upon
<arigato>
but I get:
<arigato>
for pypy, depth 7 time 266801
<arigato>
for cpython, depth 7 time 382035
<arigato>
that's pypy 5.10 versus cpython 2.7.3 (which is a bit faster than future cpython 2.7.x at least on x86)
<flok420>
I use 5.6.0
<flok420>
I wonder why it is 200 elo weaker than the cpython version then. I'll see if I can figure that out.
<arigato>
it is really bad at the beginning, up to maybe 200-300 seconds
oberstet has quit [Ping timeout: 265 seconds]
oberstet has joined #pypy
Hasimir has quit [Read error: Connection reset by peer]
Hasimir has joined #pypy
antocuni has quit [Ping timeout: 250 seconds]
rubdos has quit [Quit: Leaving]
jaffachief has joined #pypy
rubdos_ has quit [Quit: WeeChat 1.9.1]
rubdos has joined #pypy
tomv has quit [Ping timeout: 255 seconds]
tomv has joined #pypy
tomv is now known as Guest79257
<Guest79257>
arigato: hm. that probably would have been the next failure. :)
Guest79257 is now known as tomv
<tomv>
so now I threw out their tracing for a jit, that worked better...
tomv has quit [Ping timeout: 250 seconds]
Rhy0lite has joined #pypy
raynold has quit [Quit: Connection closed for inactivity]
<tos9>
arigato: I suspect it's probably not OSX specific, but I dunno whether that .so is 1.14
<arigato>
yes, that looks reasonable
<tos9>
arigato: the release looks like it was yesterday or so, so if you got an update today or yesterday it's probably it
<arigato>
ah, yes
<arigato>
it probably works nicely as long as the gdbm cffi module was compiled with the same version of gdbm as available at runtime
<tos9>
Yeah I'm trying to hunt down 1.13 but looks like it's not in homebrew-versions
<arigato>
we'd get the exact same problem if we were just distributing C code
<arigato>
basically you can't upgrade to gdbm 1.14 without breaking binary compatibility
* tos9
nods
<tos9>
arigato: and statically linking it is a no go because of GPL fun
<arigato>
yes
<tos9>
hooray software
<arigato>
also, not something we want to do anyway, I think
* tos9
nods
<arigato>
try to execute "pypy lib_pypy/_gdbm_build.py"
<arigato>
if it fixes the issue, that's it
<tos9>
ah right... I do have that
* tos9
tries
<tos9>
It does
<arigato>
ok
<arigato>
I guess the homebrew version of pypy is or will be correct
<tos9>
arigato: would you suggest I file that bug for Homebrew then?
<tos9>
yeah k
<arigato>
I suspect it will be correct: if they re-translate the whole pypy, they won't pick up our own _gdbm_cffi.dylib anyway
<tos9>
Makes sense, will close that ticket
<tos9>
arigato: thanks!
<arigato>
I'm sooooo happy with cffi
<tos9>
you should be :)
<tos9>
we're all pretty happy with it
<arigato>
without it, we'd still be stuck having compiled in the pypy binary something that requires gdbm 1.13 and pypy wouldn't even start if you have gdbm 1.14
<arigato>
i.e. the situation we still have with openssl on pypy2
<tos9>
yeah this is a bit better :D
marky1991 has quit [Ping timeout: 248 seconds]
Hasimir has quit [Read error: Connection reset by peer]
<arigato>
...got the question: can we extract the pointer from a cffi pointer object from a CPython C extension module?
jaffachief has quit [Quit: Connection closed for inactivity]
ronan has quit [Quit: Ex-Chat]
tormoz has quit [Remote host closed the connection]
tormoz has joined #pypy
ronan has joined #pypy
ronan has quit [Client Quit]
<arigato>
(if anyone is interested, the answer is No, but you can hack easily: copy the CDataObject structure from _cffi_backend.c and use that to access directly the c_data field)
<arigato>
(only works on CPython, not with PyPy's cpyext, but really, why write a CPython C extension module if you're thinking about PyPy too)
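(The struct-copying trick arigato describes can be illustrated from pure Python with ctypes instead of a C extension: declare a structure that mirrors the object's C layout and read the field you want directly. The `PyFloatObject` layout below stands in for cffi's CDataObject and its c_data field; the names and the float example are illustrative, not from the log.)

```python
import ctypes

# Mirror CPython's object header, then the float layout -- the same
# idea as copying the CDataObject struct out of _cffi_backend.c:
# re-declare the fields and access the one you need directly.
class PyObjectHead(ctypes.Structure):
    _fields_ = [
        ("ob_refcnt", ctypes.c_ssize_t),  # reference count
        ("ob_type", ctypes.c_void_p),     # pointer to the type object
    ]

class PyFloatObject(ctypes.Structure):
    _fields_ = [
        ("ob_base", PyObjectHead),
        ("ob_fval", ctypes.c_double),     # the payload we want
    ]

x = 3.25
# On CPython, id(x) is the address of the object in memory,
# so we can overlay our mirrored struct on top of it.
raw = PyFloatObject.from_address(id(x))
assert raw.ob_fval == 3.25
```

As with the real CDataObject hack, this only works on CPython, where id() is the object's address; on PyPy's cpyext the layout and addressing are different, which is exactly the caveat arigato gives.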
ronan has joined #pypy
lritter has joined #pypy
awkwardpenguin has quit [Remote host closed the connection]
amaury has joined #pypy
<fijal>
arigato: did you read that attack btw?
<arigato>
the attack on Intel/AMD CPUs based on code that runs in speculative mode for a condition that is false?
<arigato>
like often, it does sound obvious in hindsight
<fijal>
yes
<fijal>
but the details of branch predictor are very cool
<arigato>
yes
<fijal>
I wonder if we can use that data in pypy somewhere
<arigato>
I admit I already read about the official part of that in Intel docs
<fijal>
this contains more details that are reverse engineered
<fijal>
unclear to me if they can be used or not
<exarkun>
is it Intel/AMD now, not just Intel? I read that AMD is unaffected because privilege-restricted pages don't get cached as a result of the failed access attempt.
<arigato>
yes, unclear to me if using them is even a good idea, as it's likely specific to one CPU
<arigato>
exarkun: AMD too, yes
<exarkun>
Ah I see that blog post talks about AMD as well
<runciter>
they only repro'd it on an AMD CPU if the eBPF JIT is turned on
<fijal>
they talk how would you do that without eBPF
<fijal>
seems like more work
<runciter>
ROP, right?
<runciter>
which they call "annoying"!
<arigato>
the "Variant 3" is even crazier than the rest
<arigato>
you read from userspace a kernel address, which you know will trigger a fault
<arigato>
but the fault is not delivered before some instructions after that read have speculatively executed, with the real data
<njs>
there are two pretty distinct underlying problems described in that Project Zero blog post. IIUC, "Variant 1" and "Variant 3" both rely on the cringingly bad decision by Intel to ignore privilege checking during speculative execution, so if the kernel is mapped into your address space and protected by the normal protection mechanisms, then you can still read anything you like out of it.
<njs>
As far as we know so far Intel is the only one to make this terrible decision; AMD and ARM are unaffected. This is the thing that's mitigated by the OS/hypervisor patches that are rolling out everywhere now.
<simpson>
njs: Scuttlebutt is that it's due to the northbridge integration that happened at the beginning of the affected µarch cycle. Those parts of the CPU were imported without being rewritten fully into µcode, and ossified over a few iterations, and now there's no way that µcode alone can fix it.
<njs>
Then "Variant 2" is a different thing, based on tricking the CPU branch predictor into speculatively jumping to the wrong location. Using this to pull off an attack depends on knowing some delicate details about the code you're trying to attack, but it's much more general (affects basically all CPUs), and doesn't necessarily require the attacker and victim to share any memory mappings.
forgottenone has quit [Quit: Konversation terminated!]
<njs>
The "Meltdown" (Variant 1/3) and "Spectre" (Variant 2) papers are pretty readable actually
nunatak has quit [Remote host closed the connection]
amaury_ has joined #pypy
amaury has quit [Ping timeout: 240 seconds]
tbodt has joined #pypy
amaury_ is now known as amaury
jcea has joined #pypy
awkwardpenguin has joined #pypy
tbodt has quit [Read error: Connection reset by peer]
awkwardpenguin has quit [Remote host closed the connection]