00:58 tos9 has joined #pypy
01:13 jcea has quit [Ping timeout: 250 seconds]
01:15 Gustavo6046 has joined #pypy
02:17 muke has quit [Quit: Leaving]
05:13 yuiza has joined #pypy
05:37 proteusguy has quit [Ping timeout: 260 seconds]
05:41 jacob22_ has quit [Read error: Connection reset by peer]
05:45 jacob22_ has joined #pypy
06:31 proteusguy has joined #pypy
08:39 Gustavo6046 has quit [Ping timeout: 260 seconds]
08:55 nulano has quit [Ping timeout: 246 seconds]
09:03 nulano has joined #pypy
09:44 lritter has joined #pypy
09:55 otisolsen70 has joined #pypy
09:55 <mattip> tumbleweed: from a cursory glance, it seems something is wrong with the networking on at least one of the runs.
10:33 <cfbolz> mattip: I'm trying to get the centos arm64 person on twitter to try the portable arm build
10:55 Gustavo6046 has joined #pypy
12:36 <cfbolz> I just profiled the C part of the pypy build
12:36 <cfbolz> with -j1 it takes 20 min on my laptop
12:37 <cfbolz> of that, 16 min are spent re-parsing forwarddecl.h again and again
12:40 <mattip> I looked into this a while ago and my conclusion was to only do it with MSVC, but I think even that bit suffered code rot
12:47 <mattip> that link suggests it only works for one header, but maybe that is all we need
12:47 <mattip> is forwarddecl.h the first header?
12:48 * cfbolz experimenting a bit
12:50 <nimaje> where does it suggest that only one precompiled header is supported?
12:51 <mattip> "Only one precompiled header can be used in a particular compilation."
12:52 <nimaje> yes, just saw that
12:53 <cfbolz> mattip: it's probably doable by putting all the common header stuff into one file
12:54 <cfbolz> but that means changing the backend a bit
12:54 <cfbolz> anyway, not sure I want to pursue this, but the numbers are quite extreme
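For context on the precompiled-header idea: gcc can compile a header into a .gch file that it then reuses automatically whenever that header is the first thing a translation unit includes, so the big shared header is parsed only once. A minimal sketch of trying this by hand on the generated sources; the source directory path is an assumption, and gcc only picks up the .gch if the compile flags match and the header really is included first:

    # Hypothetical experiment outside the build system; the usession path is made up.
    import pathlib
    import subprocess

    SRC_DIR = pathlib.Path("usession-default/testing_1")   # assumed location of the generated C
    HEADER = SRC_DIR / "forwarddecl.h"

    # Precompile the shared header once; gcc writes forwarddecl.h.gch next to it
    # and substitutes it for the header whenever the flags are compatible.
    subprocess.check_call(["gcc", "-O2", "-x", "c-header", str(HEADER),
                           "-o", f"{HEADER}.gch"])

    # Compile the generated .c files as usual; the repeated re-parsing of
    # forwarddecl.h is replaced by loading the precompiled header.
    for c_file in sorted(SRC_DIR.glob("*.c")):
        subprocess.check_call(["gcc", "-O2", "-I", str(SRC_DIR), "-c", str(c_file),
                               "-o", str(c_file.with_suffix(".o"))])

The MSVC path mentioned above uses the equivalent /Yc (create) and /Yu (use) options, which matches nulano's later observation that the Windows build already benefits from a precompiled header.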
13:03 <fijal> cfbolz: I tried digging there, but ended up with "meh, just use more processors"
13:04 <cfbolz> fijal: is that really true if 90% of the work is parsing the same headers?
13:05 <fijal> the total time of compilation is not significant enough with -j <a lot>
13:05 <fijal> I'm sure you can shave a few minutes
13:06 <cfbolz> Ok, I'll come back after we've optimized the rest of rpython? 🤣
13:09 <mattip> in the hackmd I shared a few days ago, compilation is less than 15% of the translation time
13:09 <mattip> on both linux64 and arm64
13:10 <fijal> cfbolz: that ended up being my conclusion, I think
13:10 <mattip> what actually helped compilation time was reducing the number of C files
13:11 <mattip> we used to split things up a lot more
13:12 <cfbolz> mattip: yeah, it helps because it reduces the number of times the headers are included ;-)
13:12 <mattip> on windows, compilation is 74/2038 secs, so maybe precompiled headers are working
13:13 <mattip> (about 3% of the total time)
13:14 <cfbolz> or maybe the windows compiler is simply faster
13:14 * cfbolz goes back to trying to understand the new bytecode anyway
13:15 <cfbolz> I'm getting pretty close! it actually makes some things nicer
13:15 <mattip> typical conversation on PyPy
13:15 <mattip> excited voice: "hey I just noticed Y, we should try X"
13:16 <mattip> core dev: "yeah, we tried that a while ago and X doesn't work"
13:16 <cfbolz> I am not convinced it wouldn't help. But then I have too many things on my stack 😅
13:24 jcea has joined #pypy
13:25 <nimaje> is there a reason for not trying catting the *.c files together and compiling that?
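nimaje's suggestion amounts to a unity (amalgamated) build: concatenate the generated sources so the common headers are parsed once for the whole thing. A rough sketch, with the source directory again an assumption; as mattip notes below, the sheer size of the generated C may make a single translation unit impractical:

    # Hypothetical unity-build experiment; paths are assumptions, and clashing
    # static names or duplicate declarations may stop this from compiling as-is.
    import pathlib
    import subprocess

    SRC_DIR = pathlib.Path("usession-default/testing_1")   # assumed location of the generated C
    unity = SRC_DIR / "everything.c"

    # Literally cat the generated sources into one translation unit.
    with unity.open("w") as out:
        for c_file in sorted(SRC_DIR.glob("*.c")):
            if c_file != unity:
                out.write(c_file.read_text())
                out.write("\n")

    # One compiler invocation parses the shared headers exactly once,
    # at the cost of a much larger working set for the compiler.
    subprocess.check_call(["gcc", "-O2", "-I", str(SRC_DIR), "-c", str(unity),
                           "-o", str(unity.with_suffix(".o"))])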
13:26 <Dejan> I always used DigitalMars C/C++ on Windows because it compiles blazingly fast (same for DMD, the D compiler)
13:27 <Dejan> and once it was ready to release I used something else, depending on the platform
13:38 gsnedder1 is now known as gsnedders
14:17 <nulano> FWIW, MSVC compilation does use a precompiled header
14:20 <mattip> nulano: thanks for checking
14:20 <mattip> nimaje: I think we would need more memory
14:21 <mattip> the c we create is very verbose
14:22 <cfbolz> We also just have a lot of it
14:22 <mattip> there was a branch once upon a time to make the C more readable, I don't remember if it reduced line count as well
14:24 <nulano> the generated c files are actually split by a line count limit of 65535, for VC++ 7.2 according to the comment
14:27 <lazka> (MSYS2 has ucrt support now, so you could build MSVC compatible binaries with gcc on Windows as well)
14:49 <Dejan> MSYS2 rocks, I run MSYS2 sshd on one EC2 Windows instance so that I can ssh into the msys2 environment :)
14:49 <Dejan> when I need to build something for Windows, which is rare nowadays...
15:41 <tumbleweed> mattip: yeah, we don't assume Internet access
15:41 <tumbleweed> I should disable tests that require it, if I find them
16:11 Taggnostr has quit [Quit: Switching to single player mode.]
16:11 Taggnostr has joined #pypy
16:18 nulano has quit [Ping timeout: 268 seconds]
16:20 nulano has joined #pypy
16:52 <cfbolz> mattip: does the mmap error ring a bell for you?
16:53 <cfbolz> Maybe I should merge the 3.7 branch
17:02 <Dejan> cfbolz, when is the next "live" session? :)
17:02 <Dejan> I enjoyed the last one
17:03 <cfbolz> Dejan: Saturday I hope
17:03 <cfbolz> And thanks :-)
17:31 todda7 has quit [Ping timeout: 268 seconds]
17:48 yuiza has quit [Ping timeout: 265 seconds]
17:58 <mattip> cfbolz: "hg diff -r py3.7 rpython/rlib/rmmap.py" shows the file differs, but not in a way that would solve the translation failure
18:02 <mattip> tumbleweed: CPython is looking for distro maintainers to comment on this issue
18:04 <tumbleweed> mattip: thanks
18:20 <tumbleweed> yeah, I'm aware of the bigger thing going on there, but hadn't seen that BPO/PR
18:21 holdsworth_ has quit [Quit: No Ping reply in 180 seconds.]
18:21 holdsworth has joined #pypy
19:34 todda7 has joined #pypy
19:46 asmeurer_ has joined #pypy
19:50 lritter has quit [Ping timeout: 240 seconds]
20:09 todda7 has quit [Ping timeout: 260 seconds]
20:12 todda7 has joined #pypy
20:38 <FergusL> Good evening, I am considering using pypy as a built-in interpreter inside a plugin for music software. At the moment I am reading about CFFI and the "new" embedding mode. I am still unsure whether pypy/cffi will let me do what I want: basically, I want to be able to execute a process() function from a user-supplied python script
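CFFI's embedding mode does cover this kind of plugin: a build script declares the C entry points the host will call, and the Python code embedded alongside them forwards those calls to whatever user script it loads. A minimal sketch; the exported signature, module name, and user_script.py path are illustrative assumptions, not something from the discussion:

    # plugin_build.py - hypothetical CFFI embedding build script
    import cffi

    ffi = cffi.FFI()

    # C-level API the host (the music software) will call into.
    ffi.embedding_api("""
        int process(float *buffer, int n_samples);
    """)

    ffi.set_source("audio_plugin", "")

    # Python code executed inside the embedded interpreter the first time the
    # host calls an exported function; it loads the user-supplied script and
    # forwards process() calls to it.
    ffi.embedding_init_code("""
        import importlib.util
        from audio_plugin import ffi

        spec = importlib.util.spec_from_file_location("user_script", "user_script.py")
        user = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(user)

        @ffi.def_extern()
        def process(buffer, n_samples):
            # ffi.buffer exposes the host's float32 samples without copying
            user.process(ffi.buffer(buffer, n_samples * 4))
            return 0
    """)

    # Produces a shared library (e.g. libaudio_plugin.so) for the host to load.
    ffi.compile(target="libaudio_plugin.*", verbose=True)

Run with a pypy that ships cffi, this produces a library the plugin shell can link against or dlopen; the embedded interpreter only starts on the first call into the exported API.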
20:49 todda7 has quit [Ping timeout: 260 seconds]
20:50 todda7 has joined #pypy
21:11 todda7 has quit [Ping timeout: 240 seconds]
21:34 todda7 has joined #pypy