<mrvn>
why didn't anyone add Int32.Array and Int64.Array at the same time?
<def>
float array was already there.
<d_bot>
<gar> I've got a question about the role of archive files. I had expected, based on the manual chapter 8 "Type-level module aliases", that I could bundle up my aliasing module (which I'll call the "ns-module") and all the submodules (aliased modules) into an archive file, and then depend on that. But I find that does not work; if I pass the cma/cmxa for the ns-module to the compile command for a module that opens the ns-module, I get "
<d_bot>
<gar> So my question is: what is the point of the archive file? Should using the archive file work? IOW, does this failure indicate that my archive file is screwed up somehow?
<d_bot>
<octachron> Archive files do not contain cmis, only the cm{o,x}.
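To make octachron's point concrete, here is a minimal, self-contained imitation of the ns-module pattern gar describes. Module names here (Foo, Bar, Ns) are hypothetical; in a real project Foo and Bar would be separate compilation units, and the compiler would still need their .cmi files on the include path at compile time — the archive only bundles object code.

```ocaml
(* Stand-ins for separate compilation units (foo.ml, bar.ml). *)
module Foo = struct let hello = "hello from Foo" end
module Bar = struct let hello = "hello from Bar" end

(* The "ns-module": nothing but aliases, no code of its own. *)
module Ns = struct
  module Foo = Foo
  module Bar = Bar
end

(* A client opens the ns-module and reaches the submodules through it. *)
let () =
  let open Ns in
  print_endline Foo.hello
```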
<d_bot>
<Drup> not my call, I don't have merge rights on ocaml, but you are not going to be in 4.12
<d_bot>
<Drup> too late for that
<d_bot>
<ostera> yeah i just saw the 4.12 email and it reminded me of this PR
<d_bot>
<ostera> no worries :) i understand
<d_bot>
<Drup> (well, it's @octachron 's decision really, but you are far too late, by around a month I think)
<olle>
Ethos vs logos in argumentation. Appeal to authority vs appeal to logic. Can appeal to authority (in technical discussions) always be reduced to logical arguments?
<olle>
If you have a best practice suggestion, how would you reduce it to a basic set of axioms for someone asking "why?" at every step?
<d_bot>
<Drup> Amusingly, when you register for the tag "reason" on all of stackoverflow, you get lots of questions about philosophy
<olle>
^^
<olle>
test reasonml
<olle>
or reasonreact
<d_bot>
<octachron> @ostera, this is not a bug fix, but a new feature for the internal API, so it can wait for 4.13.
<d_bot>
<Drup> @octachron that being said, you could merge :3
<d_bot>
<octachron> Did the documentation converge?
<d_bot>
<octachron> Anyway, I will merge at the end of the week if there are no changes. (feel free to ping me)
<d_bot>
<ostera> last thing gasche said was that if trefis didn't have a concrete proposal, we'd be good with what was there
<d_bot>
<joris> > but now, there are talks about removing lwt_engine (and lwt_glib that uses it)
<d_bot>
<joris> Another similar case, I think, is ocurl.
<d_bot>
<antron> @Drup @ulrikstrid if/when we do the "full" conversion of Lwt system calls to call libuv system calls, lwt.unix will essentially be hardcoded to assume that the Lwt_engine is the libuv one
<d_bot>
<antron> because those calls will be bypassing Lwt's Lwt_engine and submitting work directly to the libuv loop
<d_bot>
<antron> so basically once we entangle Lwt with libuv to that extent, it will be impossible to easily swap the engine like now
<d_bot>
<antron> at the moment, though, we are still working with a swappable libuv Lwt_engine
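The "swappable engine" idea boils down to routing everything through one replaceable implementation. A toy sketch of that shape (names are hypothetical, mimicking the idea behind Lwt_engine, not its real API):

```ocaml
(* Toy pluggable event-engine interface. *)
module type ENGINE = sig
  val name : string
  val iterate : unit -> unit   (* run one event-loop iteration *)
end

let select_engine : (module ENGINE) =
  (module struct
     let name = "select"
     let iterate () = ()       (* would call select(2) here *)
   end)

let libuv_engine : (module ENGINE) =
  (module struct
     let name = "libuv"
     let iterate () = ()       (* would drive a libuv loop here *)
   end)

(* The current engine is a single mutable reference,
   so swapping it is one assignment... *)
let current : (module ENGINE) ref = ref select_engine

(* ...and each use pays only the cost of a dynamic dispatch. *)
let engine_name () =
  let (module E : ENGINE) = !current in
  E.name
```

Once concrete system calls bypass this interface and talk to libuv directly, the single swap point disappears — which is the entanglement antron is describing.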
<d_bot>
<Drup> I suppose you can't extend lwt_engine enough for that ?
<d_bot>
<antron> the Lwt_engine itself could still be around, the issue is that libev, etc., don't provide implementations of all the system calls
<d_bot>
<antron> libuv is basically a loop/engine API + a system call API that uses it, so we want to use both parts of that in Lwt
<d_bot>
<antron> libev, and select, are only the loop/engine
<d_bot>
<antron> so it's not that lwt_engine needs extension, it's that we want to delete the code of e.g. Lwt_unix.openfile (mostly) and replace it with just forwarding to libuv's Luv.File.open_ or however it's bound :)
<d_bot>
<wokalski> For mobile applications in particular Lwt_engine is useful for integrating with system run loops (ALooper on Android/NSRunLoop on iOS). Now I'm not familiar with libuv at all so I don't know what's possible there
<d_bot>
<antron> accounting for any error condition differences and so on, for which we will write tests
<d_bot>
<wokalski> But with Lwt_engine the link is simple
<d_bot>
<antron> yeah once we go down this path, we will have to figure out if/how libuv loop integrations work
<d_bot>
<antron> it might even end up plugging in or extending Lwt_engine somehow, so perhaps your question was right, Drup :)
<d_bot>
<Drup> @antron the problem is that you are not going to get any visibility on how many people use lwt_glib/lwt_engine for integration; it's all in the middle of apps that are probably hidden in non-public tools and apps
<d_bot>
<antron> maybe there is some kind of freaky way to drive libuv using libev or other loops
<companion_cube>
lwt_glib is only useful if you write a gtk app with lwt, right?
<d_bot>
<antron> i am scared to even imagine it, but we will look into it
<d_bot>
<antron> we may even contribute something to libuv at some point
<d_bot>
<antron> so essentially libuv_engine
<d_bot>
<Drup> do we gain a lot by replacing Lwt_unix's code with libuv's?
<d_bot>
<Drup> (apart from deleting code, of course, that's always a bonus)
<companion_cube>
unless lwt_unix is much better than Unix, I'd say yes
<d_bot>
<antron> maintainability of the remaining code :)
<companion_cube>
a lot of corner cases might become handled better
<d_bot>
<antron> we also gain a multithreaded I/O library
<d_bot>
<Drup> my impression was that this part was quite stable, but ok
<d_bot>
<antron> how to intelligently use it is a separate project
<d_bot>
<antron> lwt at the moment doesn't support dispatching and completing its I/O from multiple threads, but with libuv you can run e.g. a libuv loop in each of several system threads
<d_bot>
<Drup> right
<d_bot>
<Drup> I mean, if it weren't for that funky lwt loop integration, I would be all for it, but that's a major feature removal :(
<companion_cube>
I wonder what's its overhead?
<d_bot>
<antron> we will attempt to mitigate it. i just don't want to say we can do it fully
<d_bot>
<antron> libuv's overhead?
<d_bot>
<antron> in my benchmarking of some osx file I/O, libuv and lwt had indistinguishable performance
<d_bot>
<antron> the same also as osx libdispatch and several other similar libraries
<companion_cube>
no, the overhead of allowing for pluggable event loops
<companion_cube>
antron: benchmarks could be different on linux though
<d_bot>
<antron> probably no significant overhead, the same as for dynamic dispatch of a function call
<d_bot>
<antron> yes
<d_bot>
<antron> im just reporting what i've done :) and i am on linux :)
<companion_cube>
I imagine libuv is heavily optimized for linux
<companion_cube>
:)
<d_bot>
<antron> speaking of heavily optimized, e.g. libuv is looking into taking advantage of io_uring on linux, if they can find any benefit from it for their use case in a generic I/O library
<d_bot>
<antron> so we would get that sort of thing "for free"... this is the maintainability angle
<companion_cube>
:)
<d_bot>
<antron> ultimately whatever libuv does (which i know to some extent, but even regardless of that) libuv has to go through epoll/kqueue/select/whatever else exists, and that's basically the "engine" in Lwt terms. so that is abstractable. and indeed it is already internally abstracted inside libuv, because it already uses the right "engine" for each system
<d_bot>
<antron> although i did not look so closely what it does on windows
<d_bot>
<antron> and likewise libuv can't fully get away from doing this, because it supports submission of generic fds for polling by the "engine" with `uv_poll_t`, which is exactly what @ulrikstrid is using to plug it into lwt_engine
<d_bot>
<Anurag> I'm guessing it uses IOCP on windows
<companion_cube>
antron: btw I re-read lwt.mli recently, and it's still impressive how good your odoc comments are
<d_bot>
<antron> ty :)
<d_bot>
<Drup> (the ultimate proof that people who complain about docs don't actually read it)
<companion_cube>
are you thinking of the Dark guy, Drup? :)
<d_bot>
<Drup> he's not alone
<d_bot>
<Anurag> I don't want to derail discussions too far, but i've always found it amusing that unfamiliar tooling (odoc in this case), was said to be the only reason for missing docs. Markdown won't magically fix anything, and it sure as hell won't write docs for the user :)
<companion_cube>
it'd still be more familiar though
<d_bot>
<Drup> @Anurag for years, janestreet used the fact that ocamldoc was not good at handling functors as an excuse for the state of their documentation
<d_bot>
<Anurag> if odoc is confusing (and i'm sure it can be confusing if one isn't used to the syntax), nothing is stopping one from writing docs in a markdown file and sticking it into their project. Or do what dune did, and use a format that works for the project (reStructuredText in this case). They will lose out on features provided by odoc sure, but i find it strange that one can claim odoc is the reason people don't write docs :\
<companion_cube>
that's fair
<d_bot>
<Drup> At some point, I guess they had an internal person who knew how to write docs (or they started caring somehow), and the docs became better; that was before odoc really became usable
<companion_cube>
rust uses markdown everywhere (including mdbooks) and it's nice, though
<d_bot>
<Anurag> For janestreet i remember a post on discuss where they mentioned that they hired a dedicated technical writer? I might be misremembering though.
<d_bot>
<Anurag> But their docs have definitely improved a lot over the past couple of years
<d_bot>
<Drup> anyway, I think outside of reason, fewer people arbitrarily shit on ocamldoc/odoc for its markup
<d_bot>
<joris> BTW lwt can already dispatch I/O to a thread pool?
<d_bot>
<antron> yeah lwt automatically uses its thread pool for I/O that doesn't have a proper async system call API
<d_bot>
<joris> I know because if you try to run some lwt stuff on 256 core servers, things go boom
<d_bot>
<antron> like the standard example openfile
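The fallback antron describes — shipping calls with no async counterpart off to worker threads — can be sketched as a toy model (hypothetical names; Lwt's real mechanism is jobs on a system thread pool, here simulated by a queue drained in-process, just to show the shape of the control flow):

```ocaml
(* A promise fulfilled later by the "worker". *)
type 'a promise = { mutable result : 'a option }

(* Jobs waiting for a worker, like blocking openfile calls. *)
let pending : (unit -> unit) Queue.t = Queue.create ()

(* Submit a blocking job; get a promise for its result. *)
let detach (job : unit -> 'a) : 'a promise =
  let p = { result = None } in
  Queue.add (fun () -> p.result <- Some (job ())) pending;
  p

(* Stand-in for worker threads draining the job queue. *)
let run_workers () =
  Queue.iter (fun run -> run ()) pending;
  Queue.clear pending
```

The round trip through the queue and back is exactly the overhead that a native async API (libuv's file operations, say) would avoid.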
<d_bot>
<joris> Though ocaml is far from the only thing affected, it is funny how everyone has been using ncpu as the default for everything for decades
<d_bot>
<joris> And now it blows up big time
<sadiq>
antron: I actually have some liburing ctypes bindings in progress.
<d_bot>
<antron> sadiq: nice, i think i heard about them
<companion_cube>
Drup: depends on the libraries, still
<sadiq>
I've not put up a public repo yet because it's still a bit crashy.
<companion_cube>
also we still need some good datetime library, for example :/
<d_bot>
<antron> i can imagine :)
<companion_cube>
and utf8 is a bit of a mess
<sadiq>
but we'll definitely do some benchmarking - there's a bare metal server with kernel 5.8 set aside for it.
<d_bot>
<joris> Did you bench it? Idk how many records io uring uses but I wonder if cstruct overhead won't annihilate the gain of async io on fast disks
<d_bot>
<Drup> datetime is complicated
<d_bot>
<joris> This is the kind of tool that might benefit from hand written bindings
<sadiq>
I have some hand written bindings too.
<companion_cube>
complicated but unavoidable
<d_bot>
<Drup> companion_cube: honestly, I would just start from calendar and shred the API
<companion_cube>
maybe
<d_bot>
<joris> Tbh I don't understand the use case of io uring.
<d_bot>
<joris> Apart from the 0 copy part
<companion_cube>
I guess you can get better throughput?
<companion_cube>
but in OCaml, even using writev properly would probably be a boost for a lot of programs
<d_bot>
<Drup> (my submarine plan is to put darrenldl in charge of it, don't tell him yet)
<d_bot>
<joris> @companion_cube it feels to me that besides the 0-copy part (which is very cool but does not require asynchrony) it is mostly useful to avoid using a thread pool and multiplexing cpu user time with io completion, but since the io stack is already async internally I feel like we are talking really really really high performance world
<companion_cube>
I think so, yes
<companion_cube>
I mean it's exciting to the rust and C++ crowds
<companion_cube>
for us I don't know
<d_bot>
<antron> but round trips to a thread pool can be a major bottleneck
<sadiq>
joris: you also get to amortise (or even remove entirely, at the cost of a kernel thread) syscalls
<d_bot>
<joris> Which is kind of doable with writev but yes. And you reduce user copy as I understand. But I have a hard time believing you usually reach this level of perf
<d_bot>
<joris> That seems only useful when we are talking more than 10 Gb/s of io
<d_bot>
<antron> there's a libuv issue where they don't observe any benefit from using io_uring or something like that
<d_bot>
<joris> I mean it is cool to have but it does not seem like a game changer for most people
<d_bot>
<antron> because the benchmarks they are using don't really have that kind of access pattern that io_uring can speed up
<d_bot>
<antron> (and they are skeptical that typical libuv usage as a generic I/O library would get a speedup from uring)
<d_bot>
<antron> but they also linked to benchmarks of specific use cases that get a major boost from uring if coded for it
<d_bot>
<joris> Don't get me wrong though, I'm not saying that writing bindings and using them in lwt is useless; it is cool work and I guess it can have good effect
<d_bot>
<Anurag> I don't have anything running at the scale where i'd notice this, but i'm guessing io_uring will also benefit in situations where it can register/update interest over multiple entities in a single syscall (like kqueue, but unlike epoll)
<d_bot>
<joris> Yes that is one of the main points (which is part of what I included in the vague term of 0 copy)
<companion_cube>
anyway, io_uring is so last week
<companion_cube>
now you need to write your program in EBPF
<companion_cube>
entirely in kernel!
<d_bot>
<joris> Ebpf is very very cool
<companion_cube>
hey @joris, are y'all still using rust?
<d_bot>
<joris> Not really
<companion_cube>
(still no answer on whether you == enjolras but well :p)
<d_bot>
<joris> Yes
<companion_cube>
sneaaaaky
<companion_cube>
y'all should blog about why ocaml > rust :)
<d_bot>
<joris> We kind of figured that rust dev complexity was not really offset by performance gain for our setup
<d_bot>
<joris> And there is 'Moore's law'
<d_bot>
<antron> next step: arbitrary proof-carrying code loaded directly into kernel space
<d_bot>
<joris> Nvme price getting cheaper every year, amd cpu pushing the boundaries every iteration
<d_bot>
<joris> Ocaml is fine
<orbifx>
companion_cube: I've missed most of the discussion, which of the advantages is being discussed here?
<companion_cube>
the ones that make ahrefs use ocaml instead of rust :)
<companion_cube>
hopefully they'll also, hum, test multicore
<d_bot>
<joris> Ebpf is an amazing debugging tool BTW. Look at bpftrace. Basically awk for the kernel, even without writing and compiling ebpf for the kernel yourself
<d_bot>
<antron> i guess the pre-checks on an ebpf program qualify as a kind of limited proof search
<d_bot>
<antron> then again so does any compilation process
<d_bot>
<joris> Another cool use case of ebpf I found was when I was writing a very high performance http service in rust. It was running 3 threads per core, locked to core, with SO_REUSEPORT for listen queue sharing
<d_bot>
<joris> It turns out you can attach an ebpf program to the socket to shard incoming connections manually instead of hashing randomly by default
<d_bot>
<joris> This way you can improve locality in application level, and perf can improve quite a lot from that
<sadiq>
(childcare interrupt)
<sadiq>
joris: the other thing for io_uring is that you actually get asynchronous disk IO finally
<sadiq>
rather than Linux's current unfortunate AIO story
<d_bot>
<joris> @sadiq you only get asynchronous io queueing no? Disk io is already heavily asynchronous
<d_bot>
<joris> Well, right, only writes
<sadiq>
disk IO at what level?
<sadiq>
joris: re: the rust http service - ooi why three threads?
<d_bot>
<joris> Empirical result 🤷
<sadiq>
hah
<companion_cube>
from scratch?
<d_bot>
<joris> It was 2 years ago so async runtimes might have improved a lot since then, but back then running 3 event queues per core performed better
<sadiq>
joris: my understanding is that Linux's existing AIO stuff can block on submission
<sadiq>
so yea, that's the main advantage io_uring gets you
<sadiq>
(for disk IO)
<d_bot>
<joris> Even normal write is async. Unless the queue is full
<sadiq>
though I should point out I'm by no means an expert on this area.
<d_bot>
<joris> But I guess io uring can shine on say a DB that must read several fragments of data and process them independently, then combine them to return a result
<d_bot>
<joris> I can see how it could massively improve performance in this case
<sadiq>
the io_uring work was mostly because we couldn't actually see the impact of effects+fibers on our webserver benchmark, so wondered how far we could go before we could.
<d_bot>
<joris> I'm no expert either. I'm curious to know if you get improvement
<orbifx>
we are getting to a state where we might as well imply // means https://github.com/, like twitter does for @
<d_bot>
<Anurag> @companion_cube drom + opam-bin would make for a nice cargo style workflow with fast local switches. I'll have to try out their sphinx integration to see what that looks like
<companion_cube>
fast local switches because installing OCaml wouldn't take 5 minutes? yeah
<d_bot>
<Anurag> and installing core, async etc will be much faster :)
<companion_cube>
ah, that ;p
<d_bot>
<Anurag> compared to all the packages i need from there, the compiler install is pretty fast these days hehe. It might just be me forgetting what compile times were before, but i think setting up a bare 4.11 switch with the compiler is much faster than what i remember with older ocaml versions.
<companion_cube>
if you're thinking of really old versions, removing camlp4 did reduce the compilation time
<d_bot>
<octachron> "Recent" version of the compiler packages also enables parallel build of the compiler
<companion_cube>
oh that's nice.
<d_bot>
<Anurag> That's probably what it is. I jumped from mostly using 4.07 to now using 4.11 exclusively
<d_bot>
<Anurag> Oh i guess I used to be on 4.06, not 4.07
<companion_cube>
that's a big jump
<d_bot>
<Drup> (for "recent" ... like 4.02 or something)
<companion_cube>
next time the poll should have "what version of OCaml did you start with" :)
<d_bot>
<octachron> 4.07 for opam packages, even if parallel build of the compiler did work in 4.02.
<d_bot>
<undu> started with 4.07.1, then backported the changes to a product using 4.02.3
<companion_cube>
4.02 was a great release, but it might be time to move :D