cfbolz changed the topic of #pypy to: PyPy, the flexible snake (IRC logs: https://quodlibet.duckdns.org/irc/pypy/latest.log.html#irc-end ) | use cffi for calling C | if a pep adds a mere 25-30 [C-API] functions or so, it's a drop in the ocean (cough) - Armin
<bbot2> Failure: http://buildbot.pypy.org/builders/pypy-c-jit-linux-s390x/builds/1200 [mattip: force build, release-pypy3.6-v7.x]
jcea has quit [Ping timeout: 250 seconds]
jcea has joined #pypy
jvesely has joined #pypy
jcea has quit [Remote host closed the connection]
commandoline has quit [Quit: Bye!]
jcea has joined #pypy
commandoline has joined #pypy
jcea has quit [Remote host closed the connection]
jcea has joined #pypy
dddddd has quit [Remote host closed the connection]
jcea has quit [Quit: jcea]
ammar2 has quit [Ping timeout: 246 seconds]
ammar2 has joined #pypy
jvesely has quit [Quit: jvesely]
forgottenone has joined #pypy
forgottenone has quit [Client Quit]
<mattip> this segfaults in the 32-bit docker image with gcc7.3 (why should the 32-bit docker have a different compiler than the 64-bit one?)
<mattip> pytest.py rpython/jit/backend/x86/test/test_runner.py -k test_and_mask_common_patterns
<mattip> before I dive in to try to figure out why, is there any reason this should fail?
<mattip> there is this warning as tests start:
<mattip> Warning: getcontext failed: using another register retrieval method...
<mattip> it fails when i is 16, so rop.INT_AND(box, ConstantInt(2 ** 16 -1))
<mattip> that warning is coming from the Boehm GC; here is the issue that converted an ABORT when getcontext fails into a WARNING
<mattip> maybe it is messing with the register allocation expected in the JIT backend?
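A minimal pure-Python sketch of the masking pattern that test exercises; the real test is RPython code in rpython/jit/backend/x86/test/test_runner.py that builds rop.INT_AND operations, so the input value below is only an illustrative assumption.
    value = 0x12345678            # hypothetical input; the real test uses boxed JIT values
    for i in range(1, 32):
        mask = 2 ** i - 1         # the "common pattern" masks: 0x1, 0x3, ..., 0x7fffffff
        result = value & mask     # what INT_AND(box, ConstInt(mask)) should compute
        assert result == value % (2 ** i)   # equivalent formulation for these masks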
* mattip moving on to fixing ssl, this is beyond my depth
<tos9> random odd question which is maybe only slightly related to revdb -- can the way revdb is implemented be used to diff two program executions?
<tos9> I assume revdb basically serializes the whole world?
<tos9> If I have then two programs and I want to know all the objects that differ between them, is that something borrowable from how revdb works?
<tos9> (and then once I have those objects probably I want to heuristically suggest some that are worth looking at -- the use case is also debugging, but where I don't want to trawl through two debuggers looking for the place their execution diverges)
<cfbolz> tos9: not natively, but you can probably script that?
<energizer> tos9: also check out birdseye
<mattip> since birdseye writes to a sql database, you might be able to use the ast and write some sql to find the first divergence
<tos9> cfbolz / energizer: cool, thanks
junna has joined #pypy
<cfbolz> tos9: do you have a difference at the end of the run?
<tos9> cfbolz: yes
<tos9> cfbolz: I'm reimplementing a section of a library, and get different results at the end
<cfbolz> If yes, you could set a watch point for the different result, then say bcontinue and observe where it changes
<tos9> so I want quick answers to "tell me the first time within some set of stack frames that an object was different"
<tos9> cfbolz: (if I understand that idea properly) -- I want the intermediate object, not the end one
<cfbolz> tos9: and there are tons of intermediate new objects created all the time?
<tos9> cfbolz: I have f(foo) -> ... -> g() -> ... -> X != Y
<tos9> And I implement g
<tos9> and then for some code paths (arguments to f), g() returned some wrong result
<tos9> So I want to know, if I have g and originalG, how the objects between f|g and X differ from the ones between f|originalG and Y
<tos9> and there are lots of them
<cfbolz> Right
<tos9> so somewhere between the ... after g() I get some different objects (ones outside the call stack of g()) and I want to see them basically
<tos9> which right now I'm finding manually in a debugger
<tos9> I'm sure I could script that though, yeah -- so I guess it's fair to point out that just scripting a regular debugger to do that is equally tenable
<tos9> (the annoying bit in my case is that I switch back and forth between git branches and python interpreters to do it, so that's a bit of extra annoyance)
<cfbolz> tos9: can you maybe do something simpler like hash the objects at every step, and print the hashes?
<cfbolz> Or just a trace hook that prints events and then you diff the logs
<tos9> (thinking whether that would have caught the last bug I manually tracked down)
<tos9> probably worth a shot I guess
<tos9> the trace hook I mean -- for hashing, I think the issue would still be that I don't know which objects to look for?
<tos9> Uh although I guess the idea is to instrument both implementations of g's return values, sorry, now I guess I understand
<tos9> that yeah probably works for me
<cfbolz> Cool :-)
<tos9> cfbolz: thanks :)
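A minimal sketch of the trace-hook idea suggested above: run the workload once per implementation of g(), log call/return events, and diff the two logs. The log file name and the choice of logged events are assumptions for illustration.
    import sys

    def make_tracer(logfile):
        # log every call and return so two runs can be diffed line by line
        def trace(frame, event, arg):
            if event in ("call", "return"):
                code = frame.f_code
                ret = repr(arg) if event == "return" else ""
                logfile.write("%s %s:%s %s\n" % (event, code.co_filename, code.co_name, ret))
            return trace
        return trace

    # usage: do this once per branch/interpreter, then `diff run_a.log run_b.log`
    with open("run_a.log", "w") as log:        # hypothetical log file name
        sys.settrace(make_tracer(log))
        try:
            pass  # call f(foo) here
        finally:
            sys.settrace(None)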
ebarrett has quit [Quit: WeeChat 2.6]
ebarrett has joined #pypy
i9zO5AP has quit [Quit: WeeChat 2.5]
tsaka__ has joined #pypy
lritter has joined #pypy
tsaka__ has quit [Quit: Konversation terminated!]
tsaka__ has joined #pypy
tsaka__ has quit [Ping timeout: 252 seconds]
lritter has quit [Read error: Connection reset by peer]
lritter has joined #pypy
ssbr` has quit [Remote host closed the connection]
<kenaan> mattip default a5cd0e93d2fb /lib_pypy/_cffi_ssl/_stdssl/__init__.py: adapt patch from portable-pypy (thanks squeakypl)
<kenaan> mattip py3.6 d7d279c89e1c /: merge default into py3.6
<kenaan> mattip py3.6 adc2921362d4 /pypy/module/cpyext/: test, implement PyOS_FSPath
<mattip> lazka (for the logs): two for the price of one: fixed both finding ssl certificates and PyOS_FSPath :)
<kenaan> rlamy py3.6-exc-info-2 abfb925c6fc6 /: Close branch py3.6-exc-info-2
<kenaan> rlamy py3.6 357a713081c3 /pypy/: Merged in py3.6-exc-info-2 (pull request #686) Fix handling of sys.exc_info() in generators
<kenaan> mattip py3.6 31b34a79660c /pypy/doc/whatsnew-pypy3-head.rst: document merged branch
<mattip> probably a bit silly to document the merged branch since it will be part of 7.3.0rc2, but did it anyway
jvesely has joined #pypy
Rhy0lite has joined #pypy
tsaka__ has joined #pypy
adamholmberg has joined #pypy
<arigato> mattip: re test_and_mask_common_patterns, of course it works for me on x86 32-bit, and I don't know how gcc can have anything to do with it
<arigato> is there a way to log inside the docker image?
<mattip> it is pretty easy to get a shell prompt inside the image, you just run it with /bin/bash on the end of the "docker run ..." command
<mattip> what would you like to log?
<mattip> I fear it is an interaction between docker and the boehm gc, because of that warning message
<arigato> assuming I know nothing about docker, is there a machine I should log into and which command do I need to type exactly?
<mattip> I put some instructions at the top of the buildbot/docker/Dockerfile32 file on how to build the image and run it
<mattip> it takes about 20 minutes to build, or you can log into the benchmarker machine (bencher4 is still offline)
<mattip> the bottom line is, from a checkout of buildbot
<mattip> docker build -t buildslave_i686 --build-arg BUILDSLAVE_UID=$UID -f docker/Dockerfile32 docker
<mattip> then wait till it is done building the image
<arigato> OK
<mattip> then docker run -it -v<abspath/to/pypy/checkout/dir>:/build_dir -v/tmp:/tmp buildslave_i686 /bin/bash
<mattip> that will put you inside the docker image with a root shell, you will want to cd to the pypy checkout and su as the user you set the UID for when building
<mattip> su buildslave
<mattip> cd /build_dir
<mattip> then you are good to test, debug, whatever
<mattip> python2 pytest.py ...
<mattip> the -v flags map directories outside to directories inside; -it means interactive, and maps a tty for I/O
Ai9zO5AP has joined #pypy
junna has left #pypy [#pypy]
<mattip> maybe the UID thing is not clear. The docker build creates a user inside named buildslave with the UID that you fed in during "docker build"
<mattip> so when you do "su buildslave", effectively you map buildslave (inside the docker) to the user with the UID outside the docker
<mattip> which is why setting it to $UID does magic
adamholmberg has quit [Remote host closed the connection]
squeaky_pl has joined #pypy
<squeaky_pl> mattip, about the SSL patch: you didn't cover the case where the certificate store is not present on the platform. Sometimes people install portable pypy on a bare-bones Ubuntu that doesn't have one, and it was practical to ship a certificate store inside the build as a last resort for those people.
<mattip> yes, thanks for the explanation. I was wondering what the use case is for that.
<mattip> What does CPython do in that case?
<squeaky_pl> Nothing, CPython always relies on "system SSL" and it's essentially broken when there is no database store.
<squeaky_pl> Well it's up to you to decide but barebones docker Ubuntu does not come with SSL store.
<mattip> which image? I will try it out
<squeaky_pl> Let me check
marky1991 has joined #pypy
<mattip> maybe we could raise an exception with a nicer message telling them how to get a ssl store
<mattip> although poking around in the pip sources I see they vendor a cacert.pem in
<mattip> so maybe I am just being paranoid
<squeaky_pl> Yes, it's been a practical approach; it depends on how portable you want to be, but I did it because I received a ticket
<squeaky_pl> There's also the patch in sysconfig you should get
<squeaky_pl> Otherwise, in the presence of a system OpenSSL, installing cryptography would get you explosions
<squeaky_pl> There's a ticket somewhere about it
<squeaky_pl> it was some epic openssl symbol mismatch
<squeaky_pl> you should always compile packages against the libraries you bundled before system ones
YannickJadoul has joined #pypy
<mattip> hmm. What if I want to recompile _cffi_ssl with system libraries to update it or to use the native one?
<mattip> maybe this was when we had a rpython _ssl built into pypy
<squeaky_pl> Well it seems to me running against two versions of openssl in the same process is a really bad idea
<mattip> agreed
adamholmberg has joined #pypy
<Dejan> I wonder whether openssl or libssl can be statically linked into pypy
<squeaky_pl> this is what happens when compiling cryptography and system openssl headers are picked up
<squeaky_pl> https://github.com/squeaky-pl/portable-pypy/issues/9#issuecomment-91303693 this is the issue that got me on track and i modified distutils to always pick up bundled libraries first
<Dejan> well, both "can" and "may" :) as it may not be possible legally
<mattip> hmm. The way I set it up the lib_pypy/_pypy_openssl....so *is* statically linked to libopenssl
<mattip> this https://www.openssl.org/source/license.html points to the valid license for the version we use as https://www.openssl.org/source/license-openssl-ssleay.txt
<mattip> which does not seem to discriminate between static or dynamic linking
<mattip> I can try out the scenario that tripped up the reporter in issue 9, maybe now that libpypy itself does not link to openssl the problem has gone away
<squeaky_pl> mattip, about no cert store for example in this case https://gist.github.com/squeaky-pl/c8b569ba2c23e22c78a85f600810ed32, note that you cannot have curl or wget installed since they would trigger cert store install
<squeaky_pl> mattip, it can be entirely true that it is no longer valid in 2019
<mattip> squeaky_pl: thanks for looking this over, it needs a good review
<squeaky_pl> it's questionable of course whether this is a valid use case, but I got people complaining; there are some people that strip their docker images to a bare minimum
<Dejan> I build lots of my Docker images from scratch
<squeaky_pl> but of course you need to draw a line somewhere; I considered including the cert store in the build not too much of a hassle, so I went for it
<mattip> since we have ensurepip, which has a vendored store so pip can function, people can do "pip install certifi"
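A minimal sketch of that certifi fallback, assuming the check for an empty system store is done this way (the portable-pypy patch itself may do something different):
    import ssl

    def make_https_context():
        ctx = ssl.create_default_context()          # loads the system store, if any
        if ctx.cert_store_stats()["x509_ca"] == 0:  # nothing loaded: bare-bones image
            import certifi                          # vendored CA bundle from "pip install certifi"
            ctx = ssl.create_default_context(cafile=certifi.where())
        return ctx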
* mattip off for a bit, bbl
<squeaky_pl> You can just decide on something and see how many people complain, I often used that as a factor
<squeaky_pl> @mattip, for logs have you checked if idle works
<squeaky_pl> (this assumes that people have installed X server of course but a lot of people expected it to work)
squeaky_ has joined #pypy
squeaky_pl has quit [Read error: Connection reset by peer]
squeaky__ has joined #pypy
squeaky_ has quit [Ping timeout: 265 seconds]
jvesely has quit [Quit: jvesely]
dddddd has joined #pypy
<arigato> I'm failing to use gdb inside the docker
<arigato> that's quite annoying to debug jit issues
<Alex_Gaynor> arigato: you'll need `--add-caps SYS_PTRACE` or something like that IIRC
<arigato> where? to "docker run"?
<Alex_Gaynor> yes
<Alex_Gaynor> `--cap-add=SYS_PTRACE` I guess
olliemath has joined #pypy
<arigato> no
<arigato> arigo@baroquesoftware:~/hg/buildbot$ docker run -it -v/home/arigo/pypysrc:/build_dir -v/tmp:/tmp buildslave_i686 --add-caps=SYS_PTRACE /bin/bash
<arigato> linux32: unrecognized option '--add-caps=SYS_PTRACE'
<Alex_Gaynor> arigato: I assume it's currently failing for you with "ptrace: operation not permitted", if it's failing with something else, I have no idea. that's how gdb usually fails for me with docker though
<Alex_Gaynor> arigato: try making it the first argument after `run`?
<arigato> arigo@baroquesoftware:~/hg/buildbot$ ot$ docker run --add-caps=SYS_PTRACE -v/home/arigo/pypysrc:/build_dir -v/tmp:/tmp buildslave_i686 /bin/bash
<arigato> unknown flag: --add-caps
<arigato> See 'docker run --help'.
<Alex_Gaynor> arigato: ah, it's `--cap-add`, not `add-caps`, sorry
<arigato> thanks
<arigato> yes, works better
adamholmberg has quit [Remote host closed the connection]
<Alex_Gaynor> cool
<kenaan> rlamy py3.6 8e5e71e1a26e /pypy/objspace/std/: Return W_IntObject from float.__round__() when possible. This should speed up all calculations involving int(round(<...
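For reference, the Python-level semantics that commit leans on (how PyPy chooses W_IntObject vs W_LongObject internally is not shown here): one-argument round() on a float already returns an int in Python 3, which is why int(round(x)) is such a common pattern.
    x = 2.6
    r = round(x)                       # one-argument round() on a float
    assert r == 3 and type(r) is int   # returns an int in Python 3
    assert type(round(x, 1)) is float  # with ndigits it stays a float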
<bbot2> Started: http://buildbot.pypy.org/builders/own-linux-x86-64/builds/7874 [ronan: force build, py3.6]
xyz111112 has joined #pypy
olliemath has quit [Remote host closed the connection]
olliemath has joined #pypy
olliemath has quit [Remote host closed the connection]
xyz111112 has quit [Remote host closed the connection]
squeaky__ has quit [Ping timeout: 265 seconds]
<arigato> mattip: it's entirely obscure
<arigato> it seems that the segfault disappears if I replace the value 0xAAAAAAAAAAAAA with a smaller value that actually fits inside 32 bits
<arigato> ah!
<arigato> for some reason, this "long" is implicitly typed as SignedLongLong
<arigato> so what occurs is that the assembly code is called with the wrong arguments
<arigato> argh this is bound to create similar obscure issues
<arigato> I'll try to raise when lltype.typeOf(0xAAAAAAAAAAAAA) is called on 32-bit, instead of deciding that returning SignedLongLong makes sense
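A pure-Python illustration of the decision being changed, with a hypothetical helper standing in for lltype.typeOf() (this is not RPython's actual code): on a 32-bit build, a literal that does not fit in a machine word used to be silently typed as SignedLongLong, so the assembly got called with the wrong argument widths; raising instead surfaces the mismatch explicitly.
    MACHINE_BITS = 32   # assume a 32-bit build for the illustration

    def type_of_literal(value, raise_on_overflow=True):
        # hypothetical stand-in for the lltype.typeOf() decision discussed above
        if -2 ** (MACHINE_BITS - 1) <= value < 2 ** (MACHINE_BITS - 1):
            return "Signed"
        if raise_on_overflow:
            # the fix: refuse to guess instead of silently widening
            raise OverflowError("literal %#x does not fit in %d bits" % (value, MACHINE_BITS))
        return "SignedLongLong"   # the old, surprising behaviour

    print(type_of_literal(0x7FFFFFFF))                                 # Signed
    print(type_of_literal(0xAAAAAAAAAAAAA, raise_on_overflow=False))   # SignedLongLong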
olliemath has joined #pypy
<olliemath> hi - I'm trying to run the tests for a pypy3.6 branch, but without success - I wondered if you had any tips?
<olliemath> currently ./run_pytest.py lib-python/3/test/test_datetime.py fails in both py2 and py3 venvs
<kenaan> arigo default 56cb51f3c081 /rpython/: Prevent lltype.typeOf(2<<32) from returning SignedLongLong on 32-bit just because 2<<32 doesn't fit into a regular ...
<ronan> olliemath: run 'python -m test.test_datetime' in a pypy3 venv
squeaky__ has joined #pypy
<ronan> also what's run_pytest.py??
<olliemath> @ronan that worked - thanks!
<olliemath> it's a script in the top level of the repo
<olliemath> I assumed it was doing something specific to the repo (e.g. hacking the pythonpath)
squeaky_pl has joined #pypy
squeaky__ has quit [Ping timeout: 240 seconds]
<olliemath> Maybe I should make a PR to update docs as my command was taken from there (with a change of `2.7` -> `3`)
<arigato> olliemath: do you mean "./pytest.py" instead of "./run_pytest.py"?
<arigato> at least I am not figuring out how you got "./run_pytest.py lib-python/xxx" from this web page
<olliemath> ah yes, sorry - that was a typo!
<olliemath> `./pytest.py` was what I was running
<olliemath> in a pypy3 venv it gives `KeyError: local('/home/oliver/Projects/pypy/lib-python/3/test/test_datetime.py')`
<olliemath> which then leads to `exec compile2(source) in self.miniglobals, d`
<olliemath> `SyntaxError`
<ronan> right, the buildbots have a complicated way of running these under pytest
<arigato> I've updated the "contributing.html" page
<ronan> but there isn't much point in doing that locally
<kenaan> arigo default b0f74f864c1d /pypy/doc/contributing.rst: Mention that you should usually not run "py.test lib-python/..."
<olliemath> right - I understand now - that's helpful
i9zO5AP has joined #pypy
YannickJadoul has quit [Quit: Leaving]
Ai9zO5AP has quit [Ping timeout: 250 seconds]
squeaky_pl has quit [Ping timeout: 265 seconds]
marky1991 has quit [Remote host closed the connection]
marky1991 has joined #pypy
marky1991 has quit [Ping timeout: 265 seconds]
antocuni2 has quit [Quit: Leaving]
dddddd has quit [Ping timeout: 250 seconds]
dddddd has joined #pypy
jacob22_ has quit [Quit: Konversation terminated!]
<bbot2> Failure: http://buildbot.pypy.org/builders/own-linux-x86-64/builds/7874 [ronan: force build, py3.6]
olliemath has quit [Remote host closed the connection]
adamholmberg has joined #pypy
squeaky_pl has joined #pypy
Taggnostr has quit [Ping timeout: 250 seconds]
Taggnostr has joined #pypy
jacob22 has joined #pypy
jacob22 has quit [Client Quit]
jacob22 has joined #pypy
jacob22 has quit [Client Quit]
Rhy0lite has quit [Quit: Leaving]
jacob22 has joined #pypy
jacob22 has quit [Client Quit]
jacob22 has joined #pypy
<arigato> Kk sorry, my latest change breaks random things on 32bit. Will fix
marky1991 has joined #pypy
squeaky_pl has quit [Read error: Connection reset by peer]
squeaky_pl has joined #pypy
<kenaan> arigo py3.6 22226b5e0778 /pypy/: more like 8e5e71e1a26e: avoids making W_LongObjects for result that most often fit in a W_IntObject
lritter has quit [Quit: Leaving]
marky1991 has quit [Ping timeout: 265 seconds]
glyph has quit [Excess Flood]
glyph has joined #pypy
ssbr` has joined #pypy
hsaliak has quit [Ping timeout: 276 seconds]
hsaliak has joined #pypy
adamholmberg has quit [Remote host closed the connection]