<simpson> TheMontyChrist: People ask for one every so often. You have a need for a massive heap on Win64?
<TheMontyChrist> given python is slow probably not
<TheMontyChrist> was thinking of loading 8+gb of data
<TheMontyChrist> but when I think about it
<TheMontyChrist> I don't think it's up for the job
<TheMontyChrist> nvm
yuyichao has quit [Ping timeout: 240 seconds]
<simpson> TheMontyChrist: Sounds like you don't want to try new things.
TheMontyChrist has quit [Quit: Page closed]
mattip has joined #pypy
<mattip> hi
<mattip> buildbots are showing a regression in own testing, http://buildbot.pypy.org/summary?branch=%3Ctrunk%3E
<mattip> maybe after merge of "branch-prediction"?
inhahe_ has quit [Read error: Connection reset by peer]
inhahe_ has joined #pypy
jacob22 has quit [Quit: Konversation terminated!]
tbodt has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
vkirilichev has quit [Ping timeout: 260 seconds]
jamadden has quit [Quit: Leaving.]
bogner has quit [Ping timeout: 240 seconds]
bogner has joined #pypy
mattip has left #pypy ["bye"]
vkirilichev has joined #pypy
realitix has joined #pypy
forgottenone has joined #pypy
brechtm has quit [Remote host closed the connection]
vkirilichev has quit [Ping timeout: 252 seconds]
brechtm has joined #pypy
arigato has joined #pypy
kenaan_ has joined #pypy
<kenaan_> arigo pypy.org[extradoc] 3ab96624f0d2 /don1.html: update the values
arigato has quit [Quit: Leaving]
arigato has joined #pypy
DragonSA has joined #pypy
vkirilichev has joined #pypy
vkirilichev has quit [Ping timeout: 240 seconds]
oberstet2 has quit [Ping timeout: 258 seconds]
antocuni has joined #pypy
marr has joined #pypy
vkirilichev has joined #pypy
ramon has joined #pypy
ramon is now known as Guest77387
oberstet2 has joined #pypy
<haypo> ronan: morning! i ran performance benchmarks on speed-python: 250 values per run x 10 runs (run=process) -- http://www.haypocalc.com/tmp/pypy_p10_w0_n250.json.gz
<haypo> ronan: i will use these data to choose the number of warmups
<haypo> ronan: you can use my new plot.py script to plot these values
<haypo> ronan: example "python3 doc/examples/plot.py ~/pypy_p10_w0_n250.json.gz -b telco --split-runs --skip=1" gives http://www.haypocalc.com/tmp/telco.png
Guest77387 is now known as ramonvg
<arigato> nice regular spikes
<haypo> arigato: lovely, isn't it? :-)
<haypo> arigato: for this one, i think that i need to compute enough values per process to "smooth" the spikes
<haypo> it's a cycle of 7 values
<haypo> it seems like using first 32 values for warmup should be enough to reach the "steady state"
<haypo> i think that we should compute at least 5 cycles, so 35 values per process
<haypo> but i don't have time to analyze everything right now, i would prefer to do that next week ;)
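The warmup-skipping computation haypo describes (drop the first ~32 values, keep whole cycles) can be sketched as follows; this is a minimal illustration with made-up numbers, not perf's actual implementation:

```python
def steady_state_mean(values, warmup=32):
    """Mean of the benchmark values after dropping the first `warmup` ones."""
    tail = values[warmup:]
    return sum(tail) / float(len(tail))

# 32 warmup values followed by 5 cycles of a 7-value pattern,
# mimicking the spikes seen in the telco plot
run = [5.0] * 32 + [1.0, 1.1, 1.2, 1.0, 1.1, 1.2, 1.3] * 5
```

Keeping a whole number of cycles (5 x 7 = 35 values here) avoids biasing the mean toward one phase of the spike pattern.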
oberstet2 has quit [Ping timeout: 260 seconds]
johncc3_ has quit [Ping timeout: 240 seconds]
<antocuni> haypo: where do I find your plot.py?
vkirilichev has quit [Ping timeout: 258 seconds]
<haypo> currently, it's just an example in the doc: http://perf.readthedocs.io/en/latest/examples.html#plot
<antocuni> thanks
commandoline has quit [Quit: Bye!]
<haypo> i started to play with the changepoint package of R
<haypo> but i don't get what I want, it seems like i don't pass the right parameters
<antocuni> haypo: sorry, I don't understand how I am supposed to generate ~/pypy_p10_w0_n250.json.gz :)
<haypo> antocuni: ah, you want to regenerate it? it took a whole night to compute it
<antocuni> ouch
<antocuni> well, maybe just a subset
<haypo> antocuni: ~/pypy2-v5.7.1-linux64/bin/pypy -m performance run -o pypy_p10_w0_n250.json -v
<haypo> antocuni: i used this command
<haypo> antocuni: oh
<haypo> antocuni: wait, i had to modify the performance module :)
<antocuni> basically, I wanted to see what happens if you do "pypyjit.set_param('off')" after some iterations
<haypo> antocuni: by default, performance tries to be nice and chooses parameters for you, but in my case, i wanted to always use: 10 processes, 0 warmup, 250 values (per process)
johncc3_ has joined #pypy
<haypo> antocuni: more generally, to use performance, there is a doc! http://pyperformance.readthedocs.io/usage.html
<antocuni> so, in the plot you pasted above, there are 10 different lines, one per process?
<haypo> antocuni: yes, one per process
<haypo> antocuni: i expected to see different results per process, and that's the case, even though i didn't reboot between each run
<antocuni> right
<haypo> antocuni: using --skip, you can ignore the first N values per run. on the go benchmark, one run was faster. it seems that we reached the steady state after 58 values: http://www.haypocalc.com/tmp/go.png
<haypo> haypo@selma$ python3 doc/examples/plot.py ~/pypy_p10_w0_n250.json.gz -b go --split-runs --skip=58
<haypo> at least, it confirms that it's a good idea in perf to use multiple processes :-D
<antocuni> yeah, indeed. Do you see such a high variation also for CPython, or only pypy?
<haypo> antocuni: variation between two processes? some performance microbenchmarks have medium variation between runs on CPython, but I removed microbenchmarks yesterday :-D
<haypo> antocuni: all CPython data (compressed JSON files) are available at https://github.com/haypo/performance_results
<antocuni> wow, you are doing a very nice job, congrats
<haypo> to be honest, i didn't look at CPython individual runs in depth
<haypo> antocuni: well, PyPy benchmark already produced a JSON file, but only stored the result
<haypo> antocuni: for me, it's important to store *all* data to allow deep analysis later
<haypo> antocuni: for example, i modified the "perf stats" command to count the number of outliers. it's now possible to compute that on *old* JSON files, without having to recompute these data
<haypo> which is nice since it takes 1 hour to compute a JSON file on CPython :)
<haypo> performance_results/2017-03-31-cpython/ contains 44 files, so it took something like 44 hours to compute all data :-p
<haypo> antocuni: ronan noticed some variation between values in the same process, like http://pyperformance.readthedocs.io/benchmarks.html#html5lib & http://pyperformance.readthedocs.io/benchmarks.html#sympy
<haypo> antocuni: ^^ on CPython
<antocuni> ok, managed to start my benchmark run, finally :)
<haypo> antocuni: if you only view data through an histogram, telco on CPython seems "perfect": nice gaussian curve, http://www.haypocalc.com/tmp/cpython_telco_hist.png
<haypo> antocuni: but now i hate you because i looked at individual runs, and it's a mess: http://www.haypocalc.com/tmp/cpython_telco_runs.png
<haypo> oops, doc/examples/hist_scipy.py is outdated, it used median instead of mean
oberstet2 has joined #pypy
<haypo> antocuni: "wow, you are doing a very nice job, congrats" thank you :) i'm now trying to convince "you" (PyPy devs) to use my tools ;)
<antocuni> haypo: I'm puzzled; I modified performance/run.py by adding " cmd.extend(('-w0', '-p1', '-n100'))"
<antocuni> and I ran "pyperformance run -b telco -p `which pypy` -o pypy-telco.json"
<antocuni> but when I plot it, it says it has only 60 values
<antocuni> I expected 100
<haypo> antocuni: do you see these parameters passed on printed commands: "Running ..." ?
<antocuni> INFO:root:Running `/home/antocuni/pypy/perf/venv/pypy2.7-54f5305eeb69/bin/python -u /home/antocuni/pypy/perf/venv/pypy2.7-54f5305eeb69/site-packages/performance/benchmarks/bm_telco.py --output /tmp/tmpd1LxtL`
<haypo> antocuni: ah wait, there is a trick :)
<antocuni> uhm, seems not
<haypo> antocuni: performance starts by installing itself :-)
<antocuni> ah :(
<haypo> antocuni: depending on how you first installed performance on your system, it installs its local copy or downloads a fresh copy from PyPI
<antocuni> I applied the patch directly on the copy in my site-packages
<haypo> antocuni: if you want to hack performance, i suggest you use 3 steps
<haypo> antocuni: 1) performance ... venv create
<haypo> antocuni: 2) hack performance in the created virtual environment
<haypo> antocuni: 3) run the benchmark
jcea has joined #pypy
<antocuni> ok, I'll do
<antocuni> I have to run now
<antocuni> thanks!
<haypo> antocuni: performance creates a virtualenv to get a known list of modules. otherwise, some magic .pth files can have an impact on benchmarks
vkirilichev has joined #pypy
<haypo> antocuni: for example, they can slow down python startup, which is measured by the python_startup benchmark
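A rough way to see the effect haypo mentions (a hedged sketch, not the actual python_startup benchmark): time a bare interpreter start with and without site/.pth processing, which `-S` skips.

```python
import subprocess
import sys
import time

def startup_time(extra_args=()):
    # Time one cold start of the current interpreter running a no-op.
    t0 = time.time()
    subprocess.check_call([sys.executable] + list(extra_args) + ["-c", "pass"])
    return time.time() - t0

with_site = startup_time()          # includes site/.pth processing
without_site = startup_time(["-S"])  # -S skips site module initialization
```

A single cold start is noisy, which is exactly why perf repeats measurements across processes; this only illustrates what is being measured.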
brechtm_ has joined #pypy
brechtm has quit [Read error: Connection reset by peer]
antocuni has quit [Ping timeout: 255 seconds]
forgottenone has quit [Quit: Konversation terminated!]
forgottenone has joined #pypy
mattip has joined #pypy
brechtm has joined #pypy
brechtm_ has quit [Ping timeout: 252 seconds]
<mattip> finally, I got a segfault to happen only with threading and numpy, no cython needed
* mattip updating issue 2530
ramonvg has quit [Ping timeout: 260 seconds]
tormoz has joined #pypy
jamadden has joined #pypy
forgottenone has quit [Quit: Konversation terminated!]
forgottenone has joined #pypy
<LarstiQ> haypo: reading backlog, seems you're getting in a good spot
forgottenone has quit [Quit: Konversation terminated!]
forgottenone has joined #pypy
demonimin has quit [Remote host closed the connection]
commandoline has joined #pypy
demonimin has joined #pypy
jamesaxl has quit [Ping timeout: 260 seconds]
johncc3_ has quit [Ping timeout: 258 seconds]
jamesaxl has joined #pypy
brechtm has quit [Remote host closed the connection]
brechtm has joined #pypy
johncc3_ has joined #pypy
antocuni has joined #pypy
vkirilichev has quit [Read error: Connection reset by peer]
vkirilic_ has joined #pypy
adamholmberg has joined #pypy
vkirilic_ has quit [Read error: Connection reset by peer]
vkirilichev has joined #pypy
jacob22_ has joined #pypy
forgottenone has quit [Quit: Konversation terminated!]
forgottenone has joined #pypy
vkirilichev has quit [Remote host closed the connection]
girish946 has joined #pypy
forgottenone has quit [Quit: Konversation terminated!]
demonimin_ has joined #pypy
demonimin_ has joined #pypy
forgottenone has joined #pypy
demonimin has quit [Ping timeout: 260 seconds]
<antocuni> I ran the telco benchmark three times: 1) normally; 2) I disabled the jit after 100 iterations; 3) I called gc.collect *before* each iteration
<antocuni> (I wrote my own hackish runner because I could not find a way to modify perf and/or pyperformance to do what I wanted)
<antocuni> by looking at the graph I think we can see that:
<antocuni> 1) the spikes are caused by gc collections, NOT jit compilations
<antocuni> 2) if we disable the JIT, the performance drops. This is a bit unexpected: maybe it means that there is some guard which constantly fails and thus causes the JIT to compile the same code paths again and again?
* antocuni tries an additional run with both gc.collect and jit-off-after-100
<haypo> antocuni: ah, i forgot to explain to you that you don't need performance to run bm_telco.py. it's a standalone script. the script directly accepts -w0 -p10 -n250 options
<haypo> antocuni: see http://perf.readthedocs.io/en/latest/runner.html for all options, there are many of them :)
commandoline has quit [Quit: Bye!]
<haypo> antocuni: i'm not sure that calling gc.collect() is "correct"
<antocuni> haypo: sure, it is not correct at all
<haypo> i should describe somewhere what i want from performance
commandoline has joined #pypy
<antocuni> but it's a very different thing than JIT warmup: in case of the JIT, we can assume that after a (maybe arbitrarily long) warmup phase the performance stabilizes
<haypo> in short, benchmarks should be representative of "real" applications and be run as users run real code
<antocuni> if it's the GC, it means that the GC cost should be spread all over the iterations
<haypo> antocuni: the question is more why a GC collection is needed. GC is only supposed to be required to break cycles, no?
<antocuni> haypo: in pypy not at all
<antocuni> the gc runs constantly
<haypo> so, it looks closer to a bug in the decimal module
<haypo> antocuni: i mean, if you don't have cycles, the GC is supposed to do nothing, no?
<antocuni> no
<antocuni> pypy doesn't have refcount
<haypo> hum, ok
<antocuni> the only way to reclaim memory is by running the GC
<antocuni> and we have two phases: minor collections (which run often, probably multiple times during the execution of a benchmark)
<antocuni> and major collections, which are slower and run less often
<antocuni> I bet that the spikes we see are because a GC major collection happens to run every N iterations
<haypo> antocuni: yeah, i wouldn't be surprised to see a correlation between GC major collection and spikes
<antocuni> yeah
<antocuni> basically, what I wanted to say is that in this particular case, the benchmark probably DOES warm up, and so it is fine to take the average after N iterations
<arigato> ("the only way to reclaim memory is by running the GC" => note that 80% or 90% of the objects are dead already at a minor collection, and the minor collection algorithm needs some steps per *alive* object, making reclaiming free objects exactly zero cost)
<antocuni> arigato: sure, I was talking about the GC in general, not only major collections
<arigato> (i.e. it's more efficient than calling malloc() and free() even without counting the overhead of the reference counter in CPython)
<arigato> just want to make sure haypo doesn't get the wrong impression :-)
<antocuni> ok
necaris has joined #pypy
necaris is now known as necaris[away]
<arigato> "the GC" inside CPython is a very particular beast from the general GCs elsewhere
necaris[away] is now known as necaris
<antocuni> anyway, all of this is another hint that we cannot use a single number to represent "pypy speed for a given benchmark"
* arigato didn't fully read the conversation
<haypo> antocuni: "it is fine to take the average after N iterations" hum, it's more a requirement than just being fine
<haypo> antocuni: i want to include spikes in the result
<haypo> antocuni: we had long and painful discussions about mean vs median for example, and at the end, i decided to choose mean
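haypo's mean-vs-median point is easy to see on spiky data (illustrative numbers only): the mean includes the cost of occasional GC spikes, while the median hides them entirely.

```python
# 20 steady iterations plus one GC spike
values = [1.0] * 20 + [5.0]

mean = sum(values) / float(len(values))        # includes the spike's cost
median = sorted(values)[len(values) // 2]      # blind to the spike
```

Since real users pay for the spikes too, reporting the mean matches haypo's goal of benchmarks being representative of running real code.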
<antocuni> I know, I'm not saying that it's wrong
Rhy0lite has joined #pypy
<antocuni> but for example, it means that you have to distinguish "JIT spikes" vs "GC spikes" to compute the warmup
<haypo> antocuni: ah, it should be easy to distinguish them: just disable the GC as you did, no?
<antocuni> yes, although you cannot "disable" the GC (else you run out of memory pretty quickly). What I did was to force a major collection before the run, to make sure that it didn't happen by chance inside the benchmark
<haypo> antocuni: ah yes, that's different and more reliable :)
<haypo> antocuni: gc.disable() can also behave differently :-)
<antocuni> this works because this particular benchmark does not allocate much memory. If a benchmark allocates a lot of memory, it might cause a full major collection on its own. But running gc.collect() before ensures that it always starts from a "clean state"
<antocuni> another way to see this is: in PyPy, in general, allocating memory costs a bit of time, but the cost is delayed and you see it only when the major collection occurs
<antocuni> (the allocation itself is very quick; the "cost" is given by the fact that the more you allocate, the more often the GC runs)
<antocuni> although all of this is very imprecise of course. In particular, if the object dies quickly enough, it is collected in the minor collection and thus does not affect the speed of the next major collection
<antocuni> but in general, it is correct enough to say that "the more you allocate, the more you spend later in the GC"
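antocuni's experiment can be reproduced roughly like this (a hedged sketch of the methodology, not perf's default behaviour): force a major collection *before* each timed iteration so one doesn't fire mid-iteration by chance, which separates GC spikes from the benchmark's own timing.

```python
import gc
import time

def timed_iterations(func, n=5):
    # Force a major collection before each iteration so the timing
    # reflects the benchmark itself, not a major collection that
    # happens to trigger in the middle of it.
    timings = []
    for _ in range(n):
        gc.collect()
        t0 = time.time()
        func()
        timings.append(time.time() - t0)
    return timings
```

As arigato notes later, the gc.collect() time itself is not accounted for here, so this is a diagnostic tool rather than a fair measurement.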
girish946 has quit [Ping timeout: 255 seconds]
vkirilichev has joined #pypy
<arigato> I think it's wrong to call gc.collect() and not put it in any time
<antocuni> arigato: I agree. This was just to show that the spikes were caused by the GC, not by the JIT
<haypo> arigato: sorry, what do you mean by "not put it in any time"
girish946 has joined #pypy
brechtm has quit [Read error: No route to host]
brechtm has joined #pypy
johncc3_ has quit [Ping timeout: 255 seconds]
<kenaan_> arigo default b13b7c8e4fe7 /rpython/translator/c/src/debug_print.c: Call fflush() after writing an end-of-section to the log file. Hopefully, this should remove the constant problem t...
<arigato> haypo: "not account for it"
<haypo> antocuni, arigato : i just wrote http://pyperformance.readthedocs.io/usage.html#what-is-the-goal-of-performance to write down what i want :)
lritter has joined #pypy
<haypo> so: gc enabled, no gc.collect(), ASLR enabled, ignore warmups, etc.
<haypo> i also wrote it for ronan who wants to include warmups :)
johncc3_ has joined #pypy
vkirilichev has quit [Ping timeout: 240 seconds]
brechtm_ has joined #pypy
brechtm has quit [Ping timeout: 252 seconds]
yuyichao has quit [Ping timeout: 258 seconds]
<mattip> arigato: ping (buildbot own test failures) There are some own test failures on default, linux 32/64 - stress tests
<arigato> mattip: ouch
<arigato> looks likely to be branch-prediction
<mattip> likely, but painful to find
DragonSA has quit [Quit: Konversation terminated!]
<mattip> I just wanted to point it out before the last good version goes off the buildbot summary
<arigato> ah, no
<arigato> found it
<arigato> yes, thanks
yuyichao has joined #pypy
inhahe_ has quit [Read error: Connection reset by peer]
inhahe_ has joined #pypy
<mattip> maybe progress - in gdb I found a live PyObject with refcount== REFCNT_FROM_PYPY
<mattip> which AFAICT should not happen
<kenaan_> arigo default f0ba81de1e4f /rpython/jit/backend/x86/codebuf.py: Fix for untranslated tests
* mattip bye
mattip has left #pypy ["bye"]
<arigato> mattip (logs): it happens if there is no ref from CPython, only from PyPy
antocuni has quit [Ping timeout: 252 seconds]
marky1991 has joined #pypy
realitix has quit [Quit: Leaving]
vkirilichev has joined #pypy
John has joined #pypy
<John> hi all
<arigato> hi
<John> it seems that when reading from the sys.stdin, even if i use os.read(sys.stdin.fileno(),4) i cannot guarantee buffering is turned off
<John> and that only 4 bytes will actually be read
<danchr> odd; I get an exception saying "can't use a named cursor outside of transactions" when running my django app with psycopg2cffi
<danchr> if I just delete the check that raises the exception, everything appears to work
<arigato> John: os.read(_, 4) should never return more than 4 bytes, AFAIK
<John> arigato: it only reads 4 bytes, but it seems to read more from _ than 4, and presumably stores them in a buffer somewhere
<John> using "python -u" when running the program doesn't seem to turn buffering off either
<arigato> os.read() is directly calling the OS read() function. if that fails, then that's strange
<John> It's not failing, it's just buffering
<John> i will make a demo :)
<arigato> please do, I don't believe you :-) i.e. I think there is a different issue
<John> hahah, most likely :)
<John> So far, my demo has been unable to reproduce :P
<nimaje> John: is stdin the terminal or something else?
<John> The script is being run via "cat myfile | python myscript.py"
<arigato> how do you know there is buffering or not, in this situation?
<arigato> the pipe alone contains an OS-internal buffer, and cat also reads and writes in chunks
<John> right, right - and that's probably OK
<John> myscript will read the first 4 bytes of whatever it's getting via stdin, and then pipe the rest to a subprocess
<arigato> ah
<John> and what "the rest" is seems to be somewhat random. usually it's the 5th byte and on, but occasionally it's something else
<arigato> do you know it's something else from later in the pipe, or could it also be from earlier---i.e. os.read(_, 4) in the parent returned less than 4 bytes?
jamesaxl has quit [Read error: Connection reset by peer]
vkirilichev has quit [Ping timeout: 240 seconds]
<nimaje> John: why don't you use 'tail -c +4'?
<nimaje> s/+4/+5/
jamesaxl has joined #pypy
<John> arigato: Good idea, but I don't think that's it as i'm checking that the four bytes are what they are supposed to be
<arigato> ok
<John> nimaje: i'm reading the first 4 bytes from stdin, and if it's a GZIP file, subprocess gzip, if it's a XYZ file, subprocess xyz, etc
<John> So it's not about avoiding those four bytes, it's about reading them, then deciding what to subprocess :)
<nimaje> ok, doesn't the subprocess need those bytes?
<John> it does! :D That was the hardest bit about this code
<John> the subprocess is: " { printf "abcd"; cat; } | gzip "
<John> Which reliably gives gzip the four bytes 'abcd' before the stdin
<John> But the stdin is not reliably the 5th+ bytes
<John> It's really really ugly, but without a way to read from the stdin without changing it, or even see what's in python's buffers, it's really tricky to find a better way
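John's shell trick (re-feed the already-consumed magic bytes, then stream the rest to the decompressor) can also be done from Python without `sh -c`. This is a hedged reconstruction with a hypothetical helper name, simplified to buffer the data rather than stream it:

```python
import subprocess

def prepend_and_run(magic, rest, cmd=("cat",)):
    # Hand the child the magic bytes we already consumed from stdin,
    # followed by the rest of the stream, via its own pipe. Using an
    # argument list avoids shell=True and the { printf; cat; } trick.
    p = subprocess.Popen(list(cmd),
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
    out, _ = p.communicate(magic + rest)
    return out
```

In the real script `rest` would be streamed from sys.stdin in chunks instead of passed as one bytes object, but the idea (the child sees magic + remainder on its stdin, in order) is the same.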
girish946 has quit [Quit: Leaving]
<John> I found the cause! And it's super weird xD
<John> Somehow, making a call to subprocess is the problem
<John> I will make the demo
<John> Switching bug from False to True turns the bug on
<arigato> ah, "stdin=sys.stdin" I guess fails
<arigato> you should try without that
<John> You mean rename it?
<arigato> no, leave out the argument completely
<John> oh, but i need it :(
<arigato> the stdin of the subprocess is by default the same as your own stdin
<John> I'm sending the subprocess the stdin of the python program
<John> ahhh
<John> hehe, cool - you're right, it's not needed
<John> but unfortunately it still gives the bug :/
<arigato> ok
<John> To demo the code yourself, i am running this like "cat some_file.gz | pypy python_buffer.py"
<John> But you have to run it 10+ times until you see the output randomly change
<John> but only when bug = True
<John> with bug = False the output is always the same
<John> well, i tried 100 times :P
<John> Maybe a flying spaghetti monster will break it on the 101st time ;)
<arigato> does it work if you don't use subprocess at all? like, os.execvp("bash", ["bash", "-c", "the-same-command-line"])
<arigato> ah
<arigato> wait a sec, what are you doing in the "if bug"?
<arigato> you call gzip and immediately kill it??
<arigato> of course if you're unlucky it can start reading some bytes
<arigato> from the same stdin
<nimaje> he doesn't start the process or do I understand something wrong?
<John> It appears to work with os.execvp
<John> arigato: ahhhhhh
<John> you're a genius :D
<John> That's probably it
<John> the if part is to see if gzip is installed
<John> (because the other process uses shell=True)
<John> (because it uses { } and | )
<John> Woop! That fixes it!
<John> stdin=DEVNULL fixes it
<arigato> :-)
<John> So this only happens on PyPy, on CPython it doesn't seem to happen
<John> But i don't care - it now works on both <3
<John> Thanks arigato
<arigato> I'm sure you're lucky on CPython
DragonSA has joined #pypy
DragonSA has joined #pypy
DragonSA has quit [Changing host]
<arigato> e.g. on PyPy maybe it happens only if there is a pause of a fraction of a millisecond at the wrong point
<John> Yeah, my first thought was that CPython is probably too slow
<John> lol
<arigato> the opposite in this case---CPython probably reaches p.kill() consistently quickly
<John> I seeee~ interesting
DragonSA has quit [Client Quit]
John has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
<nanonyme> arigato, I'm personally a bit annoyed subprocess interface didn't have stdin=None, stdout=None, stderr=None to mean that they are all redirected to dev null
<nanonyme> (but instead None == same as default)
necaris is now known as necaris[away]
necaris[away] has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<nanonyme> But yeah, no one said subprocess module was exactly good
jamesaxl has quit [Read error: Connection reset by peer]
jamesaxl has joined #pypy
johncc3_ has joined #pypy
antocuni has joined #pypy
necaris has joined #pypy
vkirilichev has joined #pypy
<kenaan_> antocuni extradoc c812f32e4682 /talk/ep2017/the-joy-of-pypy-jit.txt: my ep2017 proposal
<antocuni> arigato: ^^^ I submitted this proposal before I forget the deadline. If you have feedback or suggestions, please tell me :) (maybe tomorrow or by email because now I'm off)
antocuni has quit [Ping timeout: 268 seconds]