lapinou changed the topic of #ocaml to: Discussions about the OCaml programming language | http://caml.inria.fr/ | http://www.ocaml.org | OCaml 4.01.0 announce at http://bit.ly/1851A3R | Public logs at http://tunes.org/~nef/logs/ocaml/
<Drup> whitequark: didn't someone already try that?
<Drup> whitequark: at least, we have an option to get a proper call graph in perf now ;)
claudiuc has joined #ocaml
zpe has quit [Ping timeout: 250 seconds]
<whitequark> Drup: well, it has some DWARF info, but apparently the inlining,
<whitequark> wherever it happens in ocamlopt, is not recorded anywhere
<whitequark> my plan would be to disable ocamlopt's inlining entirely and just rely on llvm, which thankfully will do the DWARF dance for us
rgrinberg has quit [Quit: Leaving.]
thomasga has joined #ocaml
shinnya has quit [Ping timeout: 258 seconds]
rgrinberg has joined #ocaml
claudiuc has quit [Remote host closed the connection]
claudiuc has joined #ocaml
zpe has joined #ocaml
claudiuc has quit [Ping timeout: 258 seconds]
dapz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
thomasga has quit [Quit: Leaving.]
dapz has joined #ocaml
thomasga has joined #ocaml
dapz has quit [Client Quit]
sheijk has joined #ocaml
thomasga has quit [Ping timeout: 240 seconds]
jwatzman|work has quit [Quit: jwatzman|work]
maattdd has quit [Ping timeout: 250 seconds]
nikki93 has joined #ocaml
zpe has quit [Ping timeout: 258 seconds]
tlockney is now known as tlockney_away
studybot has quit [Read error: Connection reset by peer]
dapz has joined #ocaml
zpe has joined #ocaml
dsheets has quit [Ping timeout: 245 seconds]
studybot has joined #ocaml
divyanshu has joined #ocaml
zpe has quit [Ping timeout: 240 seconds]
<whitequark> urg. Cmm is full of odd undocumented stuff
<whitequark> like the Ctuple constructor, which apparently represents unit in Cmm as Ctuple [], and is also reused in Mach (but only for x86) to represent operands to LEA
thomasga has joined #ocaml
dapz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
zpe has joined #ocaml
thomasga has quit [Ping timeout: 265 seconds]
dapz has joined #ocaml
S11001001 has quit [Quit: ERC Version 5.3 (IRC client for Emacs)]
zpe has quit [Ping timeout: 252 seconds]
q66 has quit [Quit: Leaving]
sheijk has quit [Quit: .]
divyanshu has quit [Quit: Computer has gone to sleep.]
zpe has joined #ocaml
divyanshu has joined #ocaml
cantstanya has joined #ocaml
cesar_ has joined #ocaml
cesar_ is now known as Guest59599
iorivur has quit [Ping timeout: 252 seconds]
iorivur has joined #ocaml
zpe has quit [Ping timeout: 250 seconds]
nikki93 has quit [Remote host closed the connection]
yacks has joined #ocaml
<dapz> Eventually I'll get used to these double semi-colons in the repl, right?
<tautologico> yep
<tautologico> but when using IOCaml you don't need them
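A short sketch of what dapz is getting used to: `;;` just tells the toplevel "evaluate what I've typed so far"; in compiled source files it can be omitted entirely.

```ocaml
(* In the toplevel (ocaml/utop), ";;" ends a phrase and triggers
   evaluation: *)
let square x = x * x;;
square 7;;  (* the toplevel prints: - : int = 49 *)

(* In a .ml file compiled with ocamlc/ocamlopt, the ";;" separators
   above could simply be dropped. *)
```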
tlockney_away is now known as tlockney
waneck_ has quit [Remote host closed the connection]
nikki93 has joined #ocaml
nikki93 has quit [Remote host closed the connection]
waneck has joined #ocaml
zpe has joined #ocaml
ygrek has joined #ocaml
malo has quit [Quit: Leaving]
iorivur has quit [Ping timeout: 265 seconds]
jao has joined #ocaml
jao has quit [Changing host]
jao has joined #ocaml
zpe has quit [Ping timeout: 258 seconds]
michel_mno_afk is now known as michel_mno
iorivur has joined #ocaml
so has quit [Ping timeout: 264 seconds]
iorivur has quit [Ping timeout: 252 seconds]
rgrinberg has quit [Quit: Leaving.]
zpe has joined #ocaml
Guest59599 has quit [Remote host closed the connection]
axiles has joined #ocaml
yacks has quit [Read error: Connection reset by peer]
iorivur has joined #ocaml
cesar_ has joined #ocaml
cesar_ is now known as Guest67291
zpe has quit [Ping timeout: 252 seconds]
dapz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
rgrinberg has joined #ocaml
dapz has joined #ocaml
zpe has joined #ocaml
rgrinberg has quit [Quit: Leaving.]
yacks has joined #ocaml
wwilly has joined #ocaml
<wwilly> bonjour
Guest67291 has quit [Remote host closed the connection]
zpe has quit [Ping timeout: 240 seconds]
clan has quit [Quit: clan]
divyanshu has quit [Quit: Computer has gone to sleep.]
Submarine has joined #ocaml
Submarine has quit [Changing host]
Submarine has joined #ocaml
tlockney is now known as tlockney_away
ggole has joined #ocaml
Simn has joined #ocaml
<Drup> whitequark: you should take this occasion to document this stuff.
<Drup> (I do feel like you will be sharing some work with chambart, though)
divyanshu has joined #ocaml
zpe has joined #ocaml
iorivur has quit [Ping timeout: 258 seconds]
iorivur has joined #ocaml
waneck has quit [Ping timeout: 240 seconds]
<whitequark> Drup: yeah, planned.
zpe has quit [Read error: Connection reset by peer]
zpe has joined #ocaml
<whitequark> I've almost finished the Cmm→LLVM translation layer. I think Cmm is the weirdest transformation of SSA I've seen to date
<Drup> document .. or fix :p
<whitequark> e.g. there's Cassign, which is not... assignment
<whitequark> it appears to be rebinding of an identifier for the rest of the let scope
<whitequark> I think.
zpe has quit [Ping timeout: 252 seconds]
<whitequark> there's also Ccatch/Cexit, which is a combination of phi and unconditional jump, but it looks like raising an exception
<whitequark> my head hurts
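Cmm's Ccatch/Cexit pair (a jump-with-arguments to a join point, i.e. a branch plus a phi) has a rough source-level analogue: a local exception used purely for control flow. A minimal sketch; `Found` and `first_index` are invented names for illustration.

```ocaml
(* "raise (Found i)" plays the role of Cexit (an unconditional jump
   carrying arguments); the handler is the Ccatch join point, and its
   two ways of being reached are the two phi inputs. *)
exception Found of int

let first_index p arr =
  try
    Array.iteri (fun i x -> if p x then raise (Found i)) arr;
    -1                      (* fall-through "phi input" *)
  with Found i -> i         (* jump-target "phi input" *)

let () =
  assert (first_index (fun x -> x > 2) [| 1; 2; 3; 4 |] = 2);
  assert (first_index (fun x -> x > 9) [| 1; 2; 3 |] = -1)
```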
<Drup> rebinding of an identifier ?
<Drup> like a phi function ?
<whitequark> well, sort of
<Drup> whitequark: if you are motivated enough, the same initiative as Pierre Chambart's, applied at the lower level, would probably be extremely valuable
<whitequark> so you have "let x in y" and "z := t". "let x in y" corresponds to Hashtbl.add env x y. "z := t" corresponds to Hashtbl.replace z t
<whitequark> it's like a subset of all possible phi functions for a subset of all possible control flow
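whitequark's Hashtbl analogy can be sketched as a toy mutable environment: `let` pushes a shadowing binding, Cassign overwrites the innermost one. The names `env`, `eval_let` and `eval_assign` are invented for illustration.

```ocaml
(* Toy model of the scoping behaviour described above. *)
let env : (string, int) Hashtbl.t = Hashtbl.create 16

let eval_let name v body =
  Hashtbl.add env name v;     (* push a new binding (shadowing) *)
  let result = body () in
  Hashtbl.remove env name;    (* pop it when the scope ends *)
  result

let eval_assign name v =
  Hashtbl.replace env name v  (* overwrite the innermost binding only *)

let () =
  let r =
    eval_let "x" 1 (fun () ->
      eval_assign "x" 2;      (* rebinds x for the rest of the scope *)
      Hashtbl.find env "x")
  in
  assert (r = 2)
```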
<Drup> =')
<whitequark> ಠ_ಠ
<Drup> weird indeed
<ggole> You don't usually need phis in a pure higher order language, because they become arguments
waneck has joined #ocaml
<ggole> But refs aren't pure: if you want to model them more accurately than just memory, you might use a hack like that.
clan has joined #ocaml
<whitequark> ggole: but it's not modelling refs!
<Drup> ggole: what about matches and if/then/else ?
<ggole> whitequark: hmm O_o
<Drup> it's not a for loop, but in my mind it would still be a phi function in the output basic block
<ggole> Drup: well, usually they turn into args too (if you use CPS)
<Drup> I'm not sure ocaml's compiler is in cps
<whitequark> ggole: Drup: there are two places Cassign is generated.
<Drup> (don't know at all, actually)
<whitequark> one is the for construct. the other is translating Psetfield/Poffsetref, whatever that is
<Drup> :p
* ggole goes to look
<whitequark> but in for, it's definitely not modelling a cell, I'm pretty sure it's an alternative representation of a phi
<ggole> cmm is in asmcomp, right?
<whitequark> yes. for is in cmmgen.ml. but it's boring
<whitequark> the setfield thing is in bytecomp/simplif.ml:40
<Drup> it's all over the place x)
<whitequark> well, it's two levels higher than the place where Cassign will be generated. but it's the only other precursor
<whitequark> the function is called "eliminate_ref"... so I guess it does the SSA transform? mem2reg
<ggole> That's what I was suggesting with the ref thing
<ggole> Not sure though
<whitequark> lemme check with real code
<ggole> Nonescaping refs seem to turn into assign
Submarine has quit [Remote host closed the connection]
<whitequark> yep
<ggole> let f1 x = x := 1 -> Cstore and let f2 x = let z = ref x in z := !z + 1; !z -> Cassign
<ggole> So I'd guess that you can translate that as an llvm alloc?
<ggole> And let mem2reg eat it
<whitequark> I guess so. translating it properly to phi nodes would be painful and probably pointless
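The eliminate_ref behaviour being discussed can be seen on a non-escaping ref: both functions below compute the same thing, and the pass in bytecomp/simplif.ml can turn the ref version into plain rebinding, much as LLVM's mem2reg promotes an alloca to SSA registers. A sketch; the function names are invented.

```ocaml
(* A non-escaping ref: a candidate for eliminate_ref / mem2reg. *)
let sum_to n =
  let acc = ref 0 in
  for i = 1 to n do
    acc := !acc + i      (* per the discussion: becomes Cassign, not a real store *)
  done;
  !acc

(* The ref-free shape the optimization effectively produces. *)
let sum_to' n =
  let rec go acc i = if i > n then acc else go (acc + i) (i + 1) in
  go 0 1

let () = assert (sum_to 10 = sum_to' 10 && sum_to 10 = 55)
```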
AltGr has joined #ocaml
<ggole> It might be more interesting to translate nested functions as jumps+phis
<ggole> (Not sure how you handle non-tail calls with that approach though.)
<whitequark> eh, that's what LLVM's inlining is for
<whitequark> (also, at Cmm layer there are no more "nested" functions. I think they last exist at clambda)
<ggole> Ah :(
<whitequark> it's not really a problem anyway
<ggole> The assign thing doesn't seem to happen for other mutable fields though
<ggole> I think Cmm is low enough that you are going to miss some opportunities.
<whitequark> I don't know. Cmm is at approximately the same level as LLVM IR itself
<whitequark> translating from ulambda would be much more complex
<ggole> I suspect you are going to miss out on things like SROA
arj has joined #ocaml
<whitequark> yes, unboxing is generally done in Cmmgen
<whitequark> although if I find a way to localize nonescaping allocations, I think LLVM's SROA will manage to work
<ggole> It would be cleaner to try and use the existing escape analysis, but that might be too much work :(
<ggole> Although you could just duplicate it at the Cmm level, and turn Cmm allocs into, er, LLVM allocs
<whitequark> well, no, I have to turn Calloc into proper alloc code: minor alloc falling back to major alloc
<ggole> Why do you have to do that if they don't escape?
<whitequark> mmh. fair enough.
<whitequark> I wonder if there's some way to tell LLVM "hey, this piece allocates a chunk of memory"
<ggole> It might be annoying to have to specify that, say, a call to caml_equal is not escaping.
<whitequark> no, there's an LLVM attribute for that. noalias
<whitequark> errrr nocapture
<ggole> Surely its *you* that has to do the escape analysis? Since you're going to be translating Cmm Callocs into either llvm alloc or some heap thing?
<ggole> Of course the right approach is to get it working first and worry about this stuff afterwards.
<Drup> ggole: get it seglfault first* :D
<whitequark> ggole: seems like I could teach OCaml's MemoryBuiltins analysis about caml_alloc
* ggole nods
<whitequark> it would require some patches to upstream though
<ggole> Is this what gloms separate allocations together?
<whitequark> `gloms' ? :D
<whitequark> well, not quite, it provides the information for transformations
<ggole> Nevermind, that appears to be comballoc.ml
<ggole> Right.
<whitequark> errrr typoed
<whitequark> LLVM's MemoryBuiltins analysis
<ggole> Ah.
* whitequark grumbles
<whitequark> teaching LLVM about OCaml's bump pointer allocator will prove difficult. mainly because you can only mark the result of a function as noalias, not of... a load
<whitequark> it's really not suited at all for the case where your allocation routine is partially inlined. and it runs the inliner before the vectorizer.
dapz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<ggole> Mmm, looks like comballoc is lower than cmm
<whitequark> it is. but it just combines allocations, not lifts them
iorivur has quit [Ping timeout: 276 seconds]
<ggole> If you just need a handy spot to stick the noalias annotation, would a tiny wrapper func, small enough to always be inlined, work?
<whitequark> see above: it runs inliner before vectorizer. so, noalias would be immediately lost
<ggole> I admit I don't see how the vectorizer applies to this situation
dapz has joined #ocaml
<whitequark> vectorizer likes noalias because you can't e.g. expect SIMD ops to have the same effect as regular ops if source and dest are same
<whitequark> it's really why FORTRAN is still used for everything numeric, it has `restrict' (= noalias) on every array implicitly
maattdd has joined #ocaml
<whitequark> (noalias) I think it's possible to abuse the TBAA machinery to get that result, i.e. that's how Rust says that their linear pointers never alias each other
<whitequark> it's horrible though
<whitequark> oooh, idea: use ppx attributes on loops to set LLVM metadata like "unroll N times" :)
<Drup> (that would be really cool)
<Drup> (with other stuff, like "ensure TCO")
<Axman6> ppx?
<ggole> Neat idea.
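whitequark's ppx idea might look like this in source form. The `[@unroll_hint]` attribute and the `dot` function are hypothetical; a ppx rewriter would spot the attribute and emit LLVM loop metadata, while a stock compiler simply ignores the unknown attribute, so the code builds and runs unchanged.

```ocaml
(* Hypothetical ppx-driven loop metadata: a rewriter would translate
   [@unroll_hint 4] into LLVM "llvm.loop.unroll.count" metadata.
   Without the ppx, the attribute is inert and the loop runs as-is. *)
let dot a b =
  let acc = ref 0.0 in
  (for i = 0 to Array.length a - 1 do
     acc := !acc +. (a.(i) *. b.(i))
   done) [@unroll_hint 4];
  !acc

let () =
  (* 1*4 + 2*5 + 3*6 = 32 *)
  assert (dot [| 1.0; 2.0; 3.0 |] [| 4.0; 5.0; 6.0 |] = 32.0)
```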
<whitequark> Drup: I already enforce the same TCO semantics as in OCaml
<whitequark> (obviously, it wouldn't work otherwise)
<Drup> whitequark: is TCO done before ?
<whitequark> hm? no, I just teach the ocamlcc calling convention in LLVM to do TCO and then mark all ocamlcc calls as tail calls
<Drup> ok
<whitequark> LLVM will then translate those actually in tail position into actual tail calls
<whitequark> including those that become tail calls after optimizations--hence marking every call as tail
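Why the guarantee matters: OCaml code routinely uses tail calls as its only looping construct, so a backend that drops them turns working programs into stack overflows. For instance:

```ocaml
(* The recursive call is in tail position; OCaml guarantees constant
   stack space here (ocamlopt emits a jump). An LLVM backend must
   preserve this, hence marking calls "tail" and using a TCO-capable
   calling convention, as discussed above. *)
let rec countdown n = if n = 0 then 0 else countdown (n - 1)

(* Ten million frames would overflow the stack without TCO. *)
let () = assert (countdown 10_000_000 = 0)
```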
<Drup> I thought the issue with TCO in Rust was global to LLVM, not just Rust
<ggole> LLVM tail calls have some restrictions, right?
<whitequark> no. the TCO issue in Rust stems from the fact that it wants to use C calling convention
<whitequark> ggole: yes. normally, you have to use fastcc convention to get TCO, whose semantics are unspecified
<whitequark> LLVM automatically upgrades internal functions to fastcc and puts the tailcall annotation on them, but that set can be quite limited
<whitequark> plus it's not guaranteed
<ggole> "On x86-64 when generating GOT/PIC code only module-local calls (visibility = hidden or protected) are supported."
<ggole> That could be a problem
<ggole> (If that info is up-to-date.)
<whitequark> where's that from?
<whitequark> well... means I have to create a "forwarder" for every function exported from .cmxs
<whitequark> hm, no, won't work if Module1.a and Module2.b tailcall each other
<whitequark> chances are that it is possible to fix it within ocamlcc, although it will probably be moderately painful
<ggole> Maybe ask the LLVM peeps how to approach such a difficulty
<whitequark> ggole: no, it actually will work: http://llvm.org/docs/CodeGenerator.html#sibling-call-optimization
<whitequark> Module1.a/Module2.b is "sibling call", I believe
<whitequark> and Module1.a recursing would either call its own wrapper (→ inlined), or something else from that module (→ hidden)
tautologico has quit [Quit: Connection closed for inactivity]
<ggole> Hmm.
<whitequark> the GOT/PIC restriction is still odd. I need to investigate what it really means.
<whitequark> also, wtf, no ARM support? that's a little hard to believe.
<whitequark> no, of course it has ARM support. outdated doc.
* whitequark idly wonders if janestreet or someone like that would be interested in this project. he is not happy thinking about the amount of bureaucracy required for merging all that stuff upstream
<whitequark> (and by upstream I mean mainly LLVM. it can be a bit... lengthy.)
<adrien_oww> heh
<whitequark> they do seem to be interested in supporting a more diverse set of languages, so there's that. quite a few people are not happy with LLVM being "a clang backend"
<whitequark> which leads to code in LLVM pattern-matching against whatever clang generates
<ggole> Yeah, LLVM is clearly not designed for GCed higher order langs
iorivur has joined #ocaml
<ggole> But maybe it can adapt a bit to accommodate them
<whitequark> well, it's much more suitable than some people say :p but requires a fair amount of hackage, definitely.
<whitequark> that being said, it requires a fair amount of hackage even for Rust or C++
nikki93 has joined #ocaml
iorivur_ has joined #ocaml
iorivur has quit [Ping timeout: 245 seconds]
arj has quit [Remote host closed the connection]
claudiuc has joined #ocaml
rand000 has joined #ocaml
<gasche> whitequark: if you're serious about being interested in funding, that's a question worth asking
<gasche> (I mean if you do consider the possibility of turning that into a time-limited full-time job)
<whitequark> gasche: that's definitely a possibility.
<whitequark> (but I'd like to have a prototype on hands first :p)
<gasche> do you have the reasonable professional credentials to make money-people happy about funding you, or are you the weird autodidact that requires introduction letters of the form "I know this guy from IRC and he does look like he knows stuff about LLVM"?
<whitequark> gasche: "official maintainer of OCaml LLVM bindings"
<gasche> that's a good start
<whitequark> well, and OSS code they could look at
iorivur_ has quit [Ping timeout: 252 seconds]
<gasche> if I were you, I would either send out a request for funding on the caml-list, or contact janestreet, ocamlpro and ocamllabs directly
<whitequark> yup, that was my plan.
<gasche> if I were a potential funder, I would wonder whether I really believe that, with the current limitations of LLVM wrt. OCaml's runtime model, you can produce a competitive backend
<whitequark> this is why I prefer to come with a prototype and some preliminary benchmarks.
<gasche> (because probably the world doesn't really need another "faster than ocamlc, slower than ocamlopt" unmaintained implementation)
<gasche> but the problem is that the benchmarks of your prototype will not look good enough
<gasche> (or if they do, it means you're done solving the hard problem and excited contributors will do the rest, so why would I fund you?)
<whitequark> ha.
<gasche> (to hire talent, maybe?)
<gasche> I'd personally bet on "they will not look good enough"
<whitequark> I really wonder why Z3 underperforms so badly.
<whitequark> my bet is that they start from bytecode and do not benefit from unboxing code, which vanilla LLVM is too dumb to add itself
<whitequark> gasche: (done solving the hard problem) two points though. one hard part is writing all the C++ for LLVM. I think, though not sure, that the intersection of (knows OCaml internals, knows LLVM internals, is excited contributor) is vanishingly small here
<whitequark> second hard part is pushing all the changes to the relevant upstreams, which is as mortifyingly boring as it ever gets
<whitequark> getting it to run quickly is, from my perspective, not that hard in comparison.
<gasche> good point
<gasche> (in particular the upstream one)
<gasche> (but then "getting stuff upstream" is mostly about waiting and being patient, so it's more contract work than a full-time thing)
maattdd has quit [Ping timeout: 250 seconds]
<gasche> well I do hope you ask for funding and get some
<whitequark> yeah, thanks!
<whitequark> (waiting) it's also about writing C++ well enough that it's accepted into the LLVM codebase. I've seen way too many cool forks abandoned because passing code review would be inconceivable
maattdd has joined #ocaml
<companion_cube> "upstream" must be a really mean person ^^
Kakadu has joined #ocaml
<whitequark> Cmmparse is in such a sorry state ;_;
<companion_cube> gasche: out of curiosity, do you know whether metaOcaml can really have an impact on performance, on real programs?
nikki93 has quit [Remote host closed the connection]
nikki93 has joined #ocaml
<ggole> Z3? The SAT solver? O_o
<whitequark> ggole: no, camlllvm: https://github.com/raph-amiard/CamllVM
<whitequark> used to be called Z3.
<ggole> Oh
<ggole> Oh, that old byte-code->llvm thing
<gasche> companion_cube: not sure what the question is
<gasche> but there was this work on implementing matrix-multiplication in metaOCaml with "compile-time" tuning based on the parameters
<gasche> and that had nice speedups
<companion_cube> I meant: instead of trying to use llvm to make things even faster, would merging metaOcaml (I know, no chance) make building faster programs easier?
<gasche> (the same kind of stuff done by fftw, only with metaocaml)
<gasche> not sure why you say "no chance"
<companion_cube> small chance, ok
<gasche> I think that LLVM and MetaOCaml are orthogonal
<whitequark> companion_cube: metaocaml requires you to modify your code
<companion_cube> yes, but that can be good too
<gasche> LLVM is about possibly getting better backend support, a few minor optimizations and hopefully faster usage of cool new instruction set features
<companion_cube> if you are able to modify the bottleneck
<gasche> (eg. SIMD for numeric code)
<companion_cube> sure
<companion_cube> although you'd need unboxed floats, I guess
<gasche> MetaOCaml will let you write certain kinds of programs faster, but it's more about high-level code tuning
<companion_cube> yes yes
<companion_cube> the opposition I made is because human efforts are scarse
<gasche> but whitequark is not interesting in working on MetaOCaml, nor Oleg on LLVM
<companion_cube> scarce*
<gasche> so there is no competition of resource here, except maybe maintainers' attention
<companion_cube> I'm not trying to change whitequark's mind :)
<gasche> s/interesting/interested/
<whitequark> it's really just about solving different problems. ocaml-llvm is ideally a drop-in replacement
<gasche> I personally suspect that LLVM may not be very good at optimizing code for, say, Coq, which is the problem domain where OCaml shines
<gasche> (because it's mostly memory accesses and data representation choices)
<gasche> but it could be very interesting for numeric programs, and some people write that in OCaml as well
rand000 has quit [Ping timeout: 258 seconds]
<whitequark> sounds about right
<whitequark> don't forget the fact that you will get the ability to inline C stubs directly into OCaml code
<whitequark> (although that would again require some custom machinery due to the way GC roots are handled in C. oh well.)
zpe has joined #ocaml
<gasche> (my point above is also the reason why I believe that an LLVM backend and Pierre Chambart's high-level optimization passes are completely independent)
<companion_cube> both would be very nice to have
nikki93 has quit [Remote host closed the connection]
<gasche> (plus I'm not sure an LLVM backend would actually improve performances, while I'm positive that Pierre's work will -- though not necessarily by much in the common case)
<companion_cube> \o/
<companion_cube> well, inlining or unrolling the right spot can have a big effect, I think
<companion_cube> e.g. Hashtbl.find is hand-unrolled
<gasche> well
<whitequark> or SIMD. or PGO (though that's not very advanced right now)
<whitequark> or, say, algebraically equivalent FP transformations (-ffast-math)
<gasche> Pierre implemented reasonable inlining in his branch, and the results don't appear to be world-shattering
<gasche> surely that can be improved, but don't expect that stuff will magically get twice faster
<whitequark> gasche: I suspect it is because inlining shines mostly when it exposes more info to some downstream optimizations, and ocaml has next to none.
<whitequark> e.g. think of Array.map2 (*.) re im
<whitequark> if you inline, it gets slightly faster. if you inline and SIMD, it gets *much* faster.
divyanshu has quit [Quit: Computer has gone to sleep.]
<gasche> we'll see
<whitequark> or when inlining allows unboxing allocations in called functions (although OCaml may do it decently well)
* whitequark nods
<companion_cube> also I have some hopes that Pierre can remove some local allocations
<gasche> but I would recommend phlegmatic patience rather than excited grand hopes
<gasche> companion_cube: but local allocations are fast enough today that you won't see much difference for many workflows
* companion_cube is phlegmatic
<companion_cube> (maybe)
<companion_cube> gasche: probably, yes
divyanshu has joined #ocaml
thomasga has joined #ocaml
thomasga has quit [Client Quit]
thomasga has joined #ocaml
<ggole> Dunno, I can imagine numbering and coalescing of structured values making a nice difference
<ggole> Although probably mostly for numeric code
avsm has joined #ocaml
<jpdeplaix> gasche: if whitequark finds a way to avoid a performance loss between ocamlopt and llvm, the right thing would be to replace the old backend with llvm. It would allow an extreme code factorization
<Drup> gasche: chambart's branch is not yet faster ;)
<Drup> (the important being "not yet")
<Drup> whitequark: in theory, would it allow linking against other llvm-compatible languages?
<Drup> (thanks for the documentation, btw)
<whitequark> Drup: in practice, I want to LTO OCaml and C code together
<Drup> whitequark: what about C++ ? :D
<whitequark> Drup: no problems, just use clang
<whitequark> I mean, you can link them together right now
<whitequark> there just will be no cross-TU optimizations
<gasche> Drup: where does the information come from? did you benchmark it recently?
<Drup> gasche: yes
<Drup> gasche: with lilis
<gasche> and the results were?
<gasche> (which version did you use? the review branch?)
<Drup> the review branch, yes
<gasche> did you write about this somewhere?
<Drup> almost no difference, around 1%
<Drup> yes, I did, directly to hnrgrgr
<Drup> (and he passed the information)
<gasche> hm
<jpdeplaix> s/want/way/
<gasche> (1) why not write about this publicly? (2) weird choice to send an email to *not* the author :p
<Drup> ( whitequark : the use case I have in mind is writing llvm passes in ocaml, obviously :p)
<whitequark> Drup: no, my work doesn't make that any easier than it is now
<Drup> gasche: 1) because hnrgrgr pmed me about it 2) because hnrgrgr pmed me about it .. through irc
<gasche> you still have time for a public writeup
<gasche> you should
<whitequark> writing simple LLVM passes in OCaml may be made possible with relatively little effort. using (and even more so defining) analysis info definitely requires you to drop down to C++.
<Drup> gasche: hnrgrgr/chambart asked me to wait until a public anouncement
thomasga has quit [Quit: Leaving.]
<Drup> there are still bugs :p
Thooms has joined #ocaml
<mrvn> The economics of bugs: There comes a point in time when software is good enough for people to use it and buggy enough for people to pay for updates.
<gasche> well
<gasche> the ocaml backend does not really work that way :p
<Drup> gasche: according to hnrgrgr, this version of the backend is not supposed to be very optimizing, it will come in another set of patches :)
<gasche> I spent time reviewing the existing code
<gasche> I would be interested in information about how it does
<gasche> also "exactly the same speed" is good in the sense that there is no regression
<gasche> I won't recommend that you go against hnrgrgr's advice, but next time, please write a f*ing blog post
<nicoo> whitequark: What is “PGO” ?
<adrien_oww> profile-guided optimization
Arsenik has joined #ocaml
<Drup> gasche: indeed, that's what I reported, that there was no regression
nikki93 has joined #ocaml
<Drup> I don't even have a blog, you know :D
<gasche> in case anyone is interested (or in a reddit-submitting mood), a glorious post about ('a,'b,'c,'d,'e,'f) format6: http://gallium.inria.fr/blog/format6/
tobiasBora has joined #ocaml
maattdd has quit [Quit: WeeChat 0.4.3]
<Drup> (and I don't have a taste for writing up, unlike you)
maattdd has joined #ocaml
<nicoo> adrien_oww: Ah, ok
<adrien_oww> Drup: it's an acquired taste ;p
<adrien_oww> nicoo: imho, pgo is not good
<ggole> "The main get-away"?
<gasche> "I benched lillis today, here's how to reproduce, here are the results ...% ...%, look ma, no regression!"
<adrien_oww> unless you get lots of data
<adrien_oww> and that's not trivial
<nicoo> gasche: I will procrast^W read that :)
<gasche> ggole: apparently this word doesn't exist
<gasche> it's a shame because the kind reviewer and myself thought it did
<gasche> (rather: doesn't have the meaning it should have)
<ggole> Perhaps it should be "takeaway", ie, the central insight that you derive from reading/watching something
<gasche> ah, yeah, thanks
avsm has quit [Quit: Leaving.]
<gasche> which is different from take-out, meaning fat and rice in a box
<ggole> Getaway exists, but has a very different connotation
<ggole> If you were to rob a bank, the vehicle you escaped in could be described as a getaway car.
<ggole> (Silly English idioms.)
<gasche> fixed, thanks
<gasche> (I only remove the hyphen of composed words I'm familiar with, so this will be "take-away" for now)
nikki93 has quit [Ping timeout: 245 seconds]
iorivur has joined #ocaml
<ggole> Hmm, I didn't realise that some of the format6 params were for scanning
caligula_ has joined #ocaml
<whitequark> wow, ten-parameter GADT
nikki93 has joined #ocaml
<ggole> Another minor niggle, but-last would usually be "second last" or "last-but-one".
<ggole> But an English reader would only find it odd, not confusing.
<whitequark> gasche: "hence the undetermined variable 'a in the format type above."
<whitequark> perhaps 'f ?
<nicoo> gasche: Seeing as format_of_string is %identity with a type constraint, I have no idea about the bonus question
<companion_cube> Drup: I'm confident you can write on gagallium
<companion_cube> or you can ask for some place on someone else's blog ;)
<ggole> format_of_string seems to produce weakly polymorphic type vars
caligula has quit [Ping timeout: 276 seconds]
<nicoo> Ah, yes, value restriction :(
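The weak-variable behaviour ggole and nicoo describe is easy to reproduce (a minimal sketch; the binding name `fmt` is invented): `format_of_string` is the identity, but binding its result is a function application, so the value restriction makes the format6 parameters weak, and the first use pins them.

```ocaml
(* fmt's format6 parameters come out as weak variables ('_weak...)
   under the value restriction; the sprintf call below fixes them,
   after which fmt cannot be reused at an incompatible format type
   (e.g. with Scanf). *)
let fmt = format_of_string "%d"

let () = assert (Printf.sprintf fmt 42 = "42")
```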
<whitequark> gasche: "'c' is the return type of the user-defined printers accepted by the function (stringforsprintf,unit` for most others)." has the formatting broken.
clan has quit [Quit: clan]
jao has quit [Ping timeout: 250 seconds]
tobiasBora has quit [Ping timeout: 246 seconds]
<ggole> gasche: good article, if a bit confusing because of the subject matter.
nikki93 has quit [Ping timeout: 250 seconds]
<whitequark> gasche: I concur. excellent article. I finally understand how format6 works and it didn't take a lot of headscratching.
nikki93 has joined #ocaml
Simn has quit [Quit: Leaving]
nikki93 has quit [Ping timeout: 258 seconds]
<gasche> thanks for the typo/remarks, I'll fix all that
avsm has joined #ocaml
darkf_ has joined #ocaml
nikki93 has joined #ocaml
ikaros has joined #ocaml
darkf has quit [Ping timeout: 258 seconds]
ygrek has quit [Ping timeout: 258 seconds]
nikki93 has quit [Ping timeout: 240 seconds]
avsm has quit [Ping timeout: 258 seconds]
darkf_ is now known as darkf
thomasga has joined #ocaml
nikki93 has joined #ocaml
dapz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
thomasga has quit [Ping timeout: 265 seconds]
nikki93 has quit [Ping timeout: 245 seconds]
nikki93 has joined #ocaml
nikki93 has quit [Ping timeout: 250 seconds]
* whitequark has just realized he automatically types the instruction name as "lookupswitch" instead of "switch"
<whitequark> "lookupswitch" is the instruction of Flash AVM2 bytecode ;_;
<whitequark> a more appropriate name would be Flash "unspeakable horror of doom" bytecode.
<NoNNaN> whitequark: take a look at php bytecode and typing, it's a lot better!
<Drup> """"typing""""
<whitequark> NoNNaN: php has bytecode?
<whitequark> wait, I don't want to know
<def-lkb> :]
<ggole> More like bitecode amirite
tobiasBora has joined #ocaml
<NoNNaN> whitequark: yes, it has (mostly runtime only), and has an almost type specialized bytecode execution engine
<whitequark> am I correct in remembering that PHP doesn't have a GC?
<whitequark> i.e. everything is freed when the request finishes
<def-lkb> no longer, it has a GC, with various race conditions
<NoNNaN> whitequark: it has, a refcount one, improved a bit in the last few years
<whitequark> ah, right, it just didn't collect lambdas. their implementation of lambdas is... I don't even have words for that
<NoNNaN> whitequark: facebook has a mostly-typed version of the language, called "hack"; its parser and typing engine are written in ocaml, and it also has a web-based gui for development (js_of_ocaml)
<whitequark> yeah, read about it in news. also, HHVM, isn't it?
<whitequark> (lambdas) someone could conceivably write a sarcastic essay "how would PHP implement lambdas" and that would still have better design
<NoNNaN> whitequark: the execution engine is hhvm, they also want to port the jit to llvm
avsm has joined #ocaml
<whitequark> ... and then put it in the browser with emscripten?
<whitequark> emscripten compiled with emscripten, motivating vendors to put over 16GB of RAM into laptops!
nikki93 has joined #ocaml
<mrvn> when I call 'callback_exn(fn, Val_unit);' do I have to put fn into a local root so the GC doesn't delete it before it comes back? Or does the closure keep itself alive?
avsm has quit [Ping timeout: 265 seconds]
<NoNNaN> whitequark: they had a large codebase (probably 5+ million lines of code), and the web ide's server-side component calculates the code completions...
<whitequark> mrvn: CAMLexport value caml_callbackN_exn(value closure, int narg, value args[]) { CAMLparam1 (closure);
<mrvn> whitequark: should have checked that. thx.
<whitequark> NoNNaN: well, that's not bad at all. on desktop you'd use ctags
<whitequark> mrvn: no problem, I have ocaml source on hotkeys :]
<mrvn> Any reason then why Thread.t contains the closure itself?
<whitequark> perhaps the OS thread start function is passed Thread.t instead of closure?
<whitequark> say, to maintain other values inside Thread.t
<whitequark> local data, etc?
cdidd has quit [Quit: Leaving]
<mrvn> nope.
<whitequark> odd
cdidd has joined #ocaml
nikki93 has quit [Ping timeout: 240 seconds]
<mrvn> struct thread has a pointer to the Thread.t. Haven't seen anything adding the Thread.t to a local root yet.
<mrvn> which is odd. I would think the GC would free the Thread.t and make the C code crash.
<whitequark> maybe the scheduler has the pointer to it?
<mrvn> whitequark: native code has no scheduler.
<whitequark> oh. right.
<whitequark> only the mutex
studybot has quit [Remote host closed the connection]
<mrvn> remind me. How does compaction work? Does the GC allocate a new heap and copy everything there or does it move stuff around in the old one?
yacks has quit [Ping timeout: 240 seconds]
rand000 has joined #ocaml
* whitequark doesn't know
Thooms has quit [Quit: WeeChat 0.3.8]
<mrvn> The latter would limit ocaml to about half the RAM.
thomasga has joined #ocaml
<whitequark> compact.c:319 suggests it moves stuff in the old heap
<whitequark> see also :402
divyanshu has quit [Quit: Computer has gone to sleep.]
divyanshu has joined #ocaml
thomasga1 has joined #ocaml
divyanshu has quit [Client Quit]
<ggole> /* Fourth pass: reallocate and move objects. Use the exact same allocation algorithm as pass 3. */
thomasga has quit [Read error: Connection reset by peer]
avsm has joined #ocaml
<nicoo> whitequark: IIRC, various implementations of PHP have bytecode (which isn't user-facing), which is kept in cache so as not to re-parse PHP files every time. Yes, it sounds ... strange
<nicoo> Of course, HHVM has its own bytecode + JIT, but the bytecode/JIT layer is actually done kinda-right (type guards everywhere prevent the most common vulnerabilities found in vanilla PHP over the years)
<NoNNaN> nicoo: stateless execution model, it has some properties that are otherwise not possible, e.g.: Efficient patch-based auditing for web application vulnerabilities -> http://people.csail.mit.edu/nickolai/papers/kim-poirot.pdf
<whitequark> nicoo: iirc, HHVM isn't really PHP
<whitequark> it's somewhere between "a subset" and "we have similar syntax"
<nicoo> whitequark: HHVM has a PHP-frontend (AFAIK, it isn't yet 100% compliant, but it is the developer's goal). It also has a Hack frontend, which is a language that looks like PHP, with typing
q66 has joined #ocaml
q66 has quit [Changing host]
q66 has joined #ocaml
<whitequark> I see
lostcuaz has joined #ocaml
rand000 has quit [Quit: leaving]
<NoNNaN> whitequark: take a look at this, for ocaml native parts: Towards Optimization-Safe Systems: Analyzing the Impact of Undefined Behavior -> http://people.csail.mit.edu/nickolai/papers/wang-stack.pdf
<NoNNaN> implemented as custom llvm passes
<whitequark> NoNNaN: hm, I don't quite get how it's related
yacks has joined #ocaml
ikaros has quit [Quit: Ex-Chat]
Submarine has joined #ocaml
Submarine has quit [Changing host]
Submarine has joined #ocaml
pminten has joined #ocaml
<NoNNaN> whitequark: it reveals the "optimized out" parts of the native code, due to undefined behaviour (may not be as related as I think)
<whitequark> NoNNaN: well... it's in C++. if you were just directing me to description of UB in general, yeah, I'm aware
<whitequark> it's probably not possible to successfully use LLVM without a good understanding of UB
<gasche> the ocaml runtime is currently compiled in -O0, I think, to avoid any joke
<gasche> maybe -O1
<whitequark> gasche: ?!!
<gasche> you can check, I think it's -O1
<whitequark> WTF
* whitequark adds to TODO list: check that ocaml runtime doesn't exhibit UB
<whitequark> thanks to clang and ubsan, it's trivial these days
<gasche> remember that it was written at a time where you had to implement 64-bit operation emulation by yourself
thomasga1 has quit [Quit: Leaving.]
<whitequark> well... even using -O1 means giving up any correctness. clang will happily mutilate your code at -O1.
<gasche> John Regehr did a bit of checking of arithmetic overflow, and there were a couple of cases
<whitequark> gcc from what I heard can be even more eager to do so, but I never looked close at it.
* whitequark really enjoys John Regehr's articles.
thomasga has joined #ocaml
lostcuaz has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
divyanshu has joined #ocaml
<adrien_oww> 13:54 < gasche> the ocaml runtime is currently compiled in -O0, I think, to avoid any joke
<adrien_oww> gasche: I'd like to change that actually
<adrien_oww> removing support for elder (not even "older") compilers will most probably help
<whitequark> adrien_oww: first step: compile with CC='clang -fsanitize-undefined' :)
<adrien_oww> but I believe we can safely assume that gcc 4's -O2 is fairly safe
<adrien_oww> whitequark: or gcc instead ;-)
<companion_cube> can't we compile the runtime with compcert?
<companion_cube> it has provably safe optimizations...
<whitequark> companion_cube: but optimizing out UB is legal.
<adrien_oww> companion_cube: patch welcome!
<companion_cube> UB ?
<adrien_oww> undefined behaviour
<whitequark> undefined behavior.
<companion_cube> ahh
<adrien_oww> companion_cube: it's too late btw, you've agreed to do these patches
<adrien_oww> no "but"!
<companion_cube> adrien_oww: isn't it just CC=compcert? :o
<adrien_oww> companion_cube: do it in the ./configure
<adrien_oww> not a lot of work
<companion_cube> wat
<adrien_oww> detect compcert in the configure
<adrien_oww> and use appropriate optimizations
<adrien_oww> actually, if you don't do it, I'll give it a try
<companion_cube> \o/
<adrien_oww> nicoo's siphash patches will enjoy -O2 a lot
nikki93 has joined #ocaml
<companion_cube> I can't even compile the html doc, so...
<adrien_oww> and I expect other things will do too
<NoNNaN> adrien_oww: take a look at the paper that I linked, solver based code is also available: http://css.csail.mit.edu/stack/
<whitequark> adrien_oww: or you could -O3 :)
wolfnn has joined #ocaml
<whitequark> in modern compilers, -O3 differs from -O2 in that -O3 enables vectorizer
<whitequark> (don't forget to set -march)
<whitequark> er, -mcpu
<adrien_oww> whitequark: tests will show whether that brings any performance improvements ;-)
<adrien_oww> (but I doubt it)
<whitequark> perhaps on things like copying loops, if they somehow don't use libc
<whitequark> otherwise it is unlikely
lostcuaz has joined #ocaml
<adrien_oww> NoNNaN: I'm aware of it
<adrien_oww> (btw, I don't expect much UB in the C in OCaml)
<adrien_oww> (afk)
nikki93 has quit [Ping timeout: 252 seconds]
<nicoo> whitequark: I (jokingly) asked for -O2 -ftree-vectorize for SipHash :]
* nicoo still needs to (properly) benchmark Hashtbl :(
nikki93 has joined #ocaml
<companion_cube> does anyone know how to compile the html doc of the stdlib?
<companion_cube> I can't find the makefile target
nikki93 has quit [Ping timeout: 258 seconds]
Cypi has quit [Quit: Lost terminal]
<adrien_oww> companion_cube: when you find it, let me know; I need to know for the Makefile.nt merge
<adrien_oww> and jonathan protzenko maybe knows (but I think I already mentioned that)
<companion_cube> it's not the day Gallium people are in paris though :/
<adrien_oww> send an email :P
<adrien_oww> maybe to dd
<companion_cube> I just asked him, he doesn't know.
Cypi has joined #ocaml
<adrien_oww> dd?
<companion_cube> Damien, I suppose
shinnya has joined #ocaml
avsm has quit [Quit: Leaving.]
<adrien_oww> yes, I'm the one who wrote that :P
<adrien_oww> but I meant: whom did you ask this to?
<companion_cube> dd
<whitequark> dd if=/dev/companion_cube
<gasche> companion_cube: the documentation of the standard library is built as part of the OCaml Manual
<gasche> you will find it under a separate SVN repository (or git mirror)
lostcuaz has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<gasche> (I don't think there is anything Windows-specific in its build system; maybe nobody ever built it from Windows)
<gasche> nicoo: well couldn't this one file be compiled with the relevant optimizations?
thomasga has quit [Quit: Leaving.]
<nicoo> gasche: I didn't dare touch the build system just for this (esp. since manually fixing some GCC misoptimization already yields decent performance)
<nicoo> My main concern is that I cannot find good methodology on benchmarking hashtables :( I guess I'll give a shout on caml-list to see if someone has a good reference, is motivated to write a benchmark or has applications I could easily use as benchmarks.
<companion_cube> gasche: I just want the API of the stdlib
<gasche> companion_cube: read .mli or .cmti files?
<companion_cube> nicoo: take applications that use Hashtbls a lot, and run them?
<companion_cube> gasche: .mli
<companion_cube> no, I want the actual html -_-
<gasche> otherwise get the manual repo and "make library"
<companion_cube> if only to check that the annotations are correct
<companion_cube> gasche: I've added a simple "make html_doc" target to the compiler repo ;)
<ousado> nicoo: google once took a few hashes/hashtable implementations and made a benchmark, maybe you can find and adopt it
<adrien_oww> gasche, nicoo : there is way to know if it is safe to use -O2 for a specific file for a given C compiler
nikki93 has joined #ocaml
<adrien_oww> gasche: so, html doc and manpages are not made from the same repository?
<adrien_oww> gasche, nicoo : s/there is way/there is _no_ way/
michel_mno is now known as michel_mno_afk
<mrvn> adrien_oww: -O2 is supposed to always be safe.
<mrvn> adrien_oww: and probably compiler specific
<whitequark> mrvn: I don't think you can make a sweeping statement like that about all C compilers.
<whitequark> there is no standardization whatsoever over -O meaning
<mrvn> true.
<adrien_oww> mrvn: yesterday I removed (locally) support for IRIX 5 in ./configure
<whitequark> and I don't believe "-O2 is supposed to always be safe" is actually a semantic provided by any compiler vendor.
<companion_cube> adrien_oww: I just added a target for this :>
<mrvn> But anything that aims to replace gcc will have it.
<adrien_oww> along with openbsd on m88k
<adrien_oww> mrvn: and gcc 2.9x
<whitequark> I mean, what *is* "safe"?
<adrien_oww> it's supposed to be safe but there are many compilers and compiler versions ;-)
<whitequark> I find it hard to believe anyone in their right mind would knowingly ship -O3 which miscompiles.
<mrvn> The far bigger problem is source that is invalid and assumes, knowingly or not, specific compiler behaviour.
michel_mno_afk is now known as michel_mno
<whitequark> otherwise, you're not guaranteed to not have bugs in either -O2 or -O3 (or -O1 or -O0.)
<nicoo> adrien_oww: There is a way: assuming the compiler to be correct, -O2 is safe :>
<mrvn> whitequark: -O3 in gcc is known to be wrong sometimes
nikki93 has quit [Ping timeout: 252 seconds]
<whitequark> mrvn: miscompilations not fixed in a maintained branch of gcc?
<Drup> just use compcert :S
<Drup> :D*
<companion_cube> that's my line
<mrvn> whitequark: it assumes certain input that might not be valid in reality
<whitequark> I mean, I have no doubt miscompilations exist. but saying "-O2 is supposed to be safe" is misleading. it's pure guesswork. compiler voodoo.
<whitequark> mrvn: hmm, link?
<mrvn> whitequark: /dev/brain
<companion_cube> I can testify on this
<companion_cube> (don't compile mysql with -O3)
* whitequark waves
<whitequark> it's probably pointless to use -O3 for ocamlrun anyway, as someone noted up in the log
<mrvn> What is true anyway is that source frequently doesn't work with -O3.
<mrvn> Also -O3 is frequently slower.
<whitequark> let's just replace gcc with clang
<ggole> Mmm, I've heard that -O2 and -Os are sometimes better.
<mrvn> Use it for specific files where you profiled that it helps and checked that it works.
<ggole> Frankly, if -Os is faster then gcc needs to adjust its expense heuristics.
<whitequark> ggole: cache effects, I guess
<mrvn> ggole: it's a heuristic. Not even tuned to your current cpu.
<ggole> And trace effects maybe
<whitequark> trace effects?
<mrvn> Also the effect of env and stack randomization is frequently bigger than the difference between -Os, -O2 and -O3 making it hard to measure.
nikki93 has joined #ocaml
<mrvn> Then you have scheduling and page coloring throwing off your measurements further.
<ggole> A trace is like a small internal cache of decoded+fused microops
<whitequark> oh, right
<ggole> When you execute out of the trace, which is only really possible for smallish loops, there's no memory traffic at all for those instructions
<ggole> Coloring can definitely have an effect, but it is hard to predict :(
<whitequark> I think it's mostly lack of decoding that matters, L1 is just 1 cycle away usually
<ggole> And caches of very small associativity have been slowly going away
<mrvn> ggole: I heard linux doesn't use page coloring. Other kernels do and you get consistent colors for your memory.
<gasche> Printf.printf "%10.5d" 123;;
<ggole> Oh, page coloring
<ggole> I thought you were talking about cache lines
<mrvn> ggole: yes.
<mrvn> ggole: The color of a page says which cache line it goes to.
<ggole> That is all transparent to the kernel.
<mrvn> ggole: if you get pages that all go to the same cache lines then your code will crawl. If you get pages all different colors then it will be fast.
<whitequark> apparently the kernel can lay them out in a way which affects cache placement
<whitequark> (also, thank you wikipedia, a picture with japanese captions is very helpful)
<ggole> It's not usually done per page afaik
<companion_cube> gasche: we need an ocaml bot :)
<ggole> But yeah, I can imagine there being an effect
<mrvn> ggole: no. but it is done so that physically consecutive pages don't end up in the same slot(s).
<gasche> companion_cube: is it actually a good idea to use the identity as a hash function on integers?
<ousado> many do
<ousado> use identity
<mrvn> ggole: e.g. the cache line is (addr / 64 mod 1024). So every 16 pages it repeats.
<NoNNaN> nicoo: take a look at Peter Sestoft: Microbenchmarks in Java and C#. Lecture notes, September 2013-> http://www.itu.dk/people/sestoft/papers/benchmarking.pdf and also https://github.com/jackfoxy/DS_Benchmark and http://jackfoxy.com/benchmarking-f-data-structures-introduction/
<ggole> mrvn: right, you get bits from both smaller and larger granularity than a page contributing to the color
<whitequark> gasche: if the integer comes from user input, it's an exploitable DoS
<gasche> indeed
<ggole> hash=id is a pretty common trick
<gasche> but then the standard Hash function also is an exploitable DoS
<mrvn> ggole: Now if you get only every 16th page mapped into your address space then you can only use 1/16th of the cache.
<nicoo> NoNNaN: Thanks :D
<Drup> do you know about the story behind the name of php functions ? :D
<gasche> I was wondering whether identity hashing specifically was considered good practice (barring security considerations, obviously)
<nicoo> Drup: Yes, unfortunately ;_;
<whitequark> gasche: not if it's randomized. create ~random:true fixes the vuln
<gasche> nope
<whitequark> how so?
<companion_cube> gasche: why not? what do you use?
<nicoo> gasche: They had hash = String.length
<companion_cube> it's good for instance if you use Hashtbls as sparse arrays
<whitequark> ooooooh, got it
<gasche> I don't know, I don't know much about hashing
<mrvn> gasche: makes it predictable and reversible. xoring with a random and using modulo prime makes it better.
<nicoo> So they picked function names (for the API that still exists today) with all different lengths
<companion_cube> nicoo: php is such a good source of laugh
<whitequark> it's only for default Hashtbl. if you make your own and supply hash = id, it's sill broken
<nicoo> companion_cube: Yup
<Drup> in order to be nicely distributed in the hashtabl of symbols
<ggole> Picking a strong hash functions for ints tends to make hash tables considerably slower on "friendly" input
<whitequark> I knew a guy once who was writing an OS in assembly
<Drup> (poor soul)
<ggole> It certainly did when I was playing around with hash table designs
<companion_cube> would hash=id be a DDos opportunitiy even with seeded hash?
<whitequark> he manually laid out the keyboard driver so that branches were sorted by the frequency of usage of the letters
<ggole> ~2x slower for an int specialised hash table
<Drup> <3
<ggole> iirc
<mrvn> ggole: imho the hash function is rather irrelevant.
<whitequark> companion_cube: hash=id is incompatible with seeded_hash because the latter requires hash=seeded_hash.
<ggole> That wasn't my experience
<companion_cube> hm, right
<nicoo> companion_cube: hash = id is by definition not seeded
<ggole> For larger objects I can believe it though
<companion_cube> anyway it's only when you instantiate the Hashtbl manually that you have to specify a hash
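(Editorial note: the hash=id vs. seeded-hash distinction being debated above can be sketched with the stdlib functors. This is a minimal sketch under assumed semantics; the module names `IntHash` and `IntSeededHash` are invented for illustration.)

```ocaml
(* A table with hash = id via Hashtbl.Make: fast on friendly input,
   but the bucket for a key is attacker-predictable. *)
module IntHash = Hashtbl.Make (struct
  type t = int
  let equal = (=)
  let hash x = x                  (* identity hash, as discussed *)
end)

(* The seed-aware variant: MakeSeeded hands the per-table seed to [hash],
   so a randomized table (create ~random:true) gets unpredictable buckets. *)
module IntSeededHash = Hashtbl.MakeSeeded (struct
  type t = int
  let equal = (=)
  let hash seed x = Hashtbl.seeded_hash seed x
end)

let () =
  let t = IntHash.create 16 in
  IntHash.replace t 42 "answer";
  assert (IntHash.find t 42 = "answer");
  let t2 = IntSeededHash.create ~random:true 16 in
  IntSeededHash.replace t2 42 "answer";
  assert (IntSeededHash.find t2 42 = "answer")
```

This also makes whitequark's point concrete: supplying `hash = id` to `Make` bypasses seeding entirely, which is why it stays vulnerable even when the default table is randomized.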
<mrvn> ggole: actually having a non-id function means you can craft your input so it all hashes to 0.
<companion_cube> nicoo: I'd think the result would be mixed with the seed afterwards
<companion_cube> but you're the expert on that
<mrvn> ggole: So I would say you either need identity or a randomized hash function.
<mrvn> ggole: anything else is just wasteful.
<ggole> Mmm, you can make things stack up a little bit with hash=id
<ggole> But there are only so many bits
<mrvn> ggole: depends how you select the slot and how you grow the table.
<gasche> note that the actual bucket is (hash modulo table-length)
<nicoo> companion_cube: IIRC, it's not
<nicoo> Let me check in hashtbl.ml
<gasche> you could try to defeat DoS by picking randomly the next-prime-number to use when resizing
<mrvn> gasche: say you have one bucket that is overflowing. Many people then double the table size. That means the full bucket is split in half.
<gasche> (hmm, someone could spread it)
<gasche> in any case
<ggole> Prime number table lengths mean you have to use more expensive ops to select the bucket though
<gasche> I was just wondering whether hash=id was sensible for a documentation example
<companion_cube> isn't the current hashtable using powers of 2 as size ?
<gasche> I don't care
<companion_cube> gasche: it's as simple an example I could find
<companion_cube> and one I use very often
<mrvn> Instead use the next prime, possibly with some randomness. That way the bucket gets spread out over all other buckets and it becomes very hard, if not impossible, to keep overflowing a bucket.
<mrvn> companion_cube: size 2^n is very bad.
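(Editorial note: mrvn's objection to power-of-two sizes can be illustrated numerically. A toy sketch, not Hashtbl's actual code: with a power-of-two bucket count, keys that agree in their low bits all collide, while a prime modulus spreads them out.)

```ocaml
(* toy bucket selection: bucket = hash mod table size *)
let bucket size h = h mod size

let () =
  (* keys that differ only above bit 3 *)
  let keys = [0x10; 0x20; 0x30; 0x40] in
  (* with size 16 (a power of two) they all land in bucket 0 ... *)
  assert (List.for_all (fun k -> bucket 16 k = 0) keys);
  (* ... with a prime size such as 17 they spread into 4 distinct buckets *)
  let spread = List.sort_uniq compare (List.map (bucket 17) keys) in
  assert (List.length spread = 4)
```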
<nicoo> companion_cube: Nope, it isn't mixed with the key. If you want something involving the seed, you should use the MakeSeeded functor and you receive the seed
<companion_cube> nicoo: ah, I see!
<companion_cube> never looked at the seeded functor
<nicoo> gasche: Randomizing the number of buckets won't help if you can make the hash function collide
<mrvn> nicoo: hash = id can't collide
<companion_cube> it will collide after the modulo, won't it?
<nicoo> mrvn: Yup, this won't happen for hash=id.
<nicoo> companion_cube: gasche's idea was to randomise the modulus
<companion_cube> ah, sure
<mrvn> nicoo: with a randomized hash function the hash should be re-randomized on resize. Or when collisions remain on resize.
<nicoo> (It's probably easier to make it secure by using a random seed and a PRF)
<companion_cube> but it's only useful for server applications, not in AST-manipulating programs such as coq
<ggole> I wonder if servers should just use a radix table of some sort
<companion_cube> or simply balanced trees
<ggole> Then DOS is flatly impossible (through that channel)
<nicoo> mrvn: Actually, if the hash is actually a secure PRF, it is secure not to re-randomise.
<whitequark> ggole: for servers it's simple enough to use a seeded hash.
<mrvn> ggole: but then access is O(log n) and not O(1)
<mrvn> nicoo: not realy.
<ggole> Radix tables are usually O(K) for fixed K
<ggole> And they can be very fast in practice, see Judy arrays
<whitequark> then it's O(1) ? :)
<mrvn> nicoo: you could just find hash collisions by accident or brute force.
<ggole> Complex though
<NoNNaN> nicoo: if you have lots of time, take a look at cuckoo hashing too
<Drup> whitequark: more like O(K), K being the length of the representation of the int .. which is bounded in practice
<mrvn> ggole: K is dependent on the maximum size of your input. In practice that will be a fixed length. But in theory that is your log(n)
<Drup> so yeah, O(1), but the distinction is useful
<whitequark> ah
<nicoo> mrvn: Yes, really: you won't be finding universal collisions; of course, the hashtbl will contain collisions because of the modulo length, but these will go away as soon as you resize
<NoNNaN> cuckoo has guaranteed O(1) for lookup
<ggole> mrvn: right, but we are discussing int keys
<mrvn> ggole: you don't have 512bit ints?
<nicoo> NoNNaN: Last time I looked, it sounded like cuckoo hashing had a not-so-nice constant factor
<mrvn> Lets use a list. Because the list access will take a maximum of 2^64 steps. That's O(1). can't beat that.
<companion_cube> :)
<companion_cube> ln(n) < 64, as my teacher said
<mrvn> As soon as you put an upper limit on your n pretty much everything is O(1).
<NoNNaN> nicoo: here is a recent paper for cuckoo + memcache combination: https://www.cs.cmu.edu/%7Edga/papers/memc3-nsdi2013.pdf
<mrvn> Has anyone built a hashtable that profiles its data and then builds an optimal hash function for the next run?
<whitequark> well, lex uses gperf
<whitequark> flex*
<mrvn> Like: Hey, I get strings where byte 0 + byte 3 uniquely identify the item. Lets use that.
<ggole> mrvn: that's usually called "perfect hashing"
<mrvn> ggole: yes. if you can make it perfect.
jo` has joined #ocaml
<mrvn> some collisions would be fine though.
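(Editorial note: mrvn's "byte 0 + byte 3" idea might look like this. A purely hypothetical sketch; `hash_bytes_0_3` is an invented name, and a real profiler would pick the byte positions from observed data.)

```ocaml
(* hypothetical data-derived hash: combine two byte positions that
   profiling (supposedly) showed to be discriminating for this data set;
   keys are assumed to be at least 4 bytes long *)
let hash_bytes_0_3 s =
  (Char.code s.[0] lsl 8) lor Char.code s.[3]

let () =
  (* distinct at both sampled positions, so distinct hashes *)
  assert (hash_bytes_0_3 "alpha" <> hash_bytes_0_3 "bravo")
```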
<nicoo> NoNNaN: Nice, I'll read over the weekend
<whitequark> whoa, my Cmm→LLVM translator has translated first nontrivial input
<Drup> well done :p
<whitequark> it's entirely fucked up
<whitequark> ;D
<mrvn> whitequark: fibonacci or factorial?
<whitequark> mrvn: integration with newton
<whitequark> (I took some random example from ocaml testsuite)
<mrvn> oh, that is a bit more complex.
<whitequark> well, yeah, it has a HOF
<mrvn> Someone check me on this: With threads in ocaml any time the GC (and therefore scheduling) is called any floats will be stuffed back in their boxes. So thread switching can totally ignore the fpu registers. right?
* Drup imagines little gremlins popping in and out of small boxes.
<whitequark> mrvn: seems like so. asmrun/amd64.S:298
<companion_cube> mrvn: it may be simpler to analyse the hashes so far, and use this information to choose the size when resizing
<whitequark> or arm.S:94
<companion_cube> Drup: don't use floats after midnight
<Drup> companion_cube: or at least, don't compute on them :p
<whitequark> don't float after midnight
<companion_cube> don't after
<nicoo> Gimme gimme gimme a float, after midnight; won't somebody help me chase the boxes away
<mrvn> whitequark: urgs. That's not what I thought. The GC explicitly saves all FP regs.
<whitequark> yep, and non-FP regs as well
* ggole waters his floats gently
<whitequark> ggole: is that an euphemism? :D
<ggole> More a reference to the after-midnight thing
<ggole> Although writing numeric code *is* a bit like gardening.
<ggole> Hmm, why does the GC save FP regs?
<companion_cube> symbolic code is the real gardening, you have trees everywhere!
<mrvn> ggole: vpush {d0-d7}
<ggole> Numeric code is the real gardening: you have tiny bugs everywhere.
<ggole> :)
<mrvn> whitequark: That just leaves the FP regs the GC itself uses. Are they caller saved or callee saved?
<whitequark> mrvn: no clue. ocaml uses a custom calling convention which is not documented
darkf has quit [Quit: Leaving]
<whitequark> you'll have to reverse-engineer it. when you do, please tell me how it works :D
<mrvn> whitequark: When the GC calls C code it follows the C calling conventions, right?
<whitequark> obviously.
<whitequark> it's UB to do otherwise.
<mrvn> Which say (ARM IHI 0042E: Procedure Call Standard for the ARM architecture): Registers d8-d15 must be preserved across subroutine calls, registers d0-d7 do not need to be preserved. Registers d16-d31, if present, do not need to be preserved.
yacks has quit [Ping timeout: 258 seconds]
<mrvn> whitequark: That seems to fit the arm.S. It saves d0-d7 and d16-d31 aren't present.
<mrvn> That leaves me with having to save d8-d15 then I think.
<whitequark> yep
<whitequark> hang on
<whitequark> why the hell are you reading the document on AAPCS?!
<mrvn> whitequark: second hit in google.
<whitequark> you see, ARM has three calling conventions, APCS, AAPCS and EABI, and the first two are not used since approximately VAX/VMS times
<whitequark> you need... EABIHF, I guess, since that's what rpi uses
<def-lkb> :D
<whitequark> semi-related: the doc on recent microcontrollers is called ARM (the company) ARM (the family) ARM (Architecture Reference Manual) Thumb-2 (instruction set)
<whitequark> ARM ARM ARM!
<adrien_oww> they're copying apple
<adrien_oww> they should probably fire a couple marketing people :P
<whitequark> I think it's awesome
<whitequark> oh, also, their latest series consists of Cortex-A, Cortex-R and Cortex-M.
<whitequark> A-pplication, M-icrocontroller and R-ealtime
<Drup> <3
<mrvn> whitequark: Looks like EABIHF uses: VFP hardware floating-point support using the VFP ABI, which is the VFP variant of the Procedure Call Standard for the ARM® Architecture (AAPCS). This ABI uses VFP registers to pass function arguments and return values, resulting in faster floating-point code. To use this variant, compile with -mfloat-abi=hard.
<whitequark> mrvn: sure, it may reference AAPCS. but note that the convention used is EABI, and that matters.
* Drup error 66: "sigle buffer overflow"
<whitequark> wat
<Drup> hum
<Drup> sorry, some frenchisation
* Drup error 66: "acronym buffer overflow"
<whitequark> VFP obviously means Visual FoxPro
<Drup> obviously :]
* ggole RMAs some ARM RAM
Symon has joined #ocaml
* mrvn throws an arm full of arm ARMs at ggole.
<whitequark> throwing that monstrous tome at someone, that's not nice :(
<whitequark> if it was slightly longer, it could be classified as a WMD!
<mrvn> whitequark: I'm a bit confused about the status register. I read the text so that nothing there is preserved and that 2 bits must be zero on function call and return. But the wording is strange.
<whitequark> 2 bits? which?
<mrvn> The length and stride bits must be zero on entry to and return from a public interface.
<mrvn> bits 16-18 and 20-21
studybot has joined #ocaml
<whitequark> The N, Z, C, V and Q bits (bits 27-31) and the GE[3:0] bits (bits 16-19) are undefined on entry to or return
<whitequark> from a public interface. The Q and GE[3:0] bits may only be modified when executing on a processor where
<whitequark> these features are present.
<whitequark> 16-18 is undefined then
studybot_ has joined #ocaml
<mrvn> The odd part is "The exception-control bits (8-12), rounding mode bits (22-23) and flush-to-zero bits (24) may be modified by calls to specific support functions that affect the global state of the application." That means they aren't preserved, right?
<whitequark> ohhh, I know what this refers to
<nicoo> whitequark: WMD = Words of Monstrous Dullness ?
<whitequark> nicoo: weapons of mass destruction, but that works as well
<mrvn> whitequark: next page. VFP registers.
<whitequark> this thing.
<whitequark> I don't think OCaml has an interface to modify them
<whitequark> therefore, you don't need to care about them
Symon has left #ocaml []
<mrvn> whitequark: changing them for one thread only seems insane.
<nicoo> NoNNaN: Regarding benchmarking.pdf, it seems that then bench library already implements that
studybot has quit [Ping timeout: 240 seconds]
<whitequark> that too
ygrek has joined #ocaml
<whitequark> well, you'd only want to change them for a lexical scope
<nicoo> s/then/the/
<mrvn> whitequark: sounds like I can ignore that register on thread switch.
<mrvn> any function messing with those bits has to reset them before calling the GC.
<whitequark> hmmm, I disagree
<whitequark> a sane pattern would be to remember them after entry and restore before exit
<mrvn> Note: The GC uses floating point.
<whitequark> with_fptrunc TruncToZero (fun () -> ...)
<mrvn> whitequark: If you change those bits you change the behaviour of the GC.
<whitequark> so you probably want to do that for GC as well.
<whitequark> sure. make the GC push the old state, and restore the proper behavior
<whitequark> hang on, what?
<whitequark> why does GC poke floats?!
<mrvn> whitequark: iirc just for the statistics. Because 31bit ints would overflow.
<whitequark> ahh, right
<whitequark> still, my suggestion remains
<mrvn> whitequark: what suggestion? The GC should save the bits and set sane defaults for itself?
<whitequark> yep
<ggole> Wouldn't an Int64.t be cleaner?
<ggole> (And it would save the necessity to save the float regs, although that may not matter.)
<mrvn> ggole: I would think so. But Gc.stat.minor_words is a float.
<ggole> Well, I guess the interface can't be changed now.
<ggole> Indeed.
<mrvn> Also gcc aparently can sometimes optimize code to floating point where you don't expect it.
<nicoo> ggole: AFAIK, the API can be changed between major versions of OCaml. I doubt it will get changed, though
<mrvn> haven't seen that myself though.
<mrvn> nicoo: someone was writing patches to that effect.
<nicoo> mrvn: Oh?
<ggole> Vectorisation can use float regs on some platforms, although it isn't hard to make code not do that.
<whitequark> ggole: I'm not sure. I think lower parts of SSE regs are aliased to x87 regs
<whitequark> can they be unaliased?
zarul has quit [Remote host closed the connection]
<mrvn> whitequark: no
<ggole> Those are the MM regs, not the XMM regs.
<ggole> The xmm regs aren't aliased to anything (except they are the bottom half of ymm regs, if those are around).
<ggole> Straightforward, no?
<whitequark> though, xmm are shared between integer vectorization and fp vectorization
<mrvn> ggole: MM being the ones for mmx, right?
<ggole> Yeah.
<whitequark> so you still have to save them
<mrvn> and xmm for sse?
<ggole> Old 2-wide vector stuff.
<ggole> You don't have to save mm regs if your float code uses xmm.
<ggole> mrvn: yep
<mrvn> ggole: as kernel you don't know what the user might use
<whitequark> yes. but you have to save xmm if your integer code uses that.
<whitequark> even if you never ever use floats
shinnya has quit [Ping timeout: 258 seconds]
<ggole> You have to save what you use, right.
<mrvn> whitequark: and gcc is smart enough to sometimes vectorize stuff into xmm registers when you didn't intend to.
<ggole> mrvn: the Linux kernel disallows use of vector regs for that reason
<ggole> They don't want to pay the cost of saving them on every context switch.
<mrvn> ggole: most kernels do
<mrvn> which is why I would want a GC that uses no floats. As is every thread always uses floats.
<ggole> It makes increasingly good sense as the number of bits in the register set increases.
<mrvn> ggole: what is it now? 32 512bit registers?
<ggole> + the mm regs + int regs + flags + the debug/control stuff
<ggole> There are machines with less RAM than that
<whitequark> mrvn: yeah, 2K of RAM.
yacks has joined #ocaml
<mrvn> ggole: MM are 8*128?
<ggole> 8*64 I think
<mrvn> compared to 2k for sse4 regs the rest is negligible.
<ggole> Sure.
* whitequark especially likes AVX instruction names. like PCLMULQDQ
<whitequark> or PCLMULLQLQDQ
<whitequark> er, it's not AVX. still
<mrvn> I like SEX - sign extend.
<ggole> Looks like it was generated with a Markov chain
<ggole> EIEIO is still the best mnemonic.
<whitequark> ggole: don't forget RVWINKLE
<ggole> A strong contender.
<whitequark> hah, the third google result for RVWINKLE is my tweet
<mrvn> And for best function I like: char *strfry(char *string); randomize a string
<nicoo> mrvn: What do you mean by *randomise* ?
<mrvn> whitequark: Nice too but strfry sounds better.
<nicoo> Ah, ok, it creates a permutation using rand()
<mrvn> The strfry() function randomizes the contents of string by using
<mrvn> rand(3) to randomly swap characters in the string. The result is an
<mrvn> anagram of string.
ygrek has quit [Ping timeout: 245 seconds]
<nicoo> Yeah, just checked the manpage
<mrvn> Damn, why didn't irssi eat the newlines there?
<nicoo> The GNU libc contains some scary stuff ...
<whitequark> ggole: I think Intel just really likes Doom cheat codes. cf.: PCLMULLQLQDQ, idspispopd
<mrvn> nicoo: indeed
zarul has joined #ocaml
zarul has quit [Changing host]
zarul has joined #ocaml
<ggole> Oh man, that string is forever burned into my memory.
<whitequark> ggole: the instruction, right? :D
<ggole> No, the DOOM thing
* ggole spent way too long playing that as a kid
ccasin has joined #ocaml
Snark has joined #ocaml
ygrek has joined #ocaml
nikki93 has quit [Ping timeout: 245 seconds]
<whitequark> gah! it appears I need to write a type inferencer for Cmm.
thomasga has joined #ocaml
nikki93 has joined #ocaml
<whitequark> it's not *actually* untyped
<whitequark> because unboxed floats :/
ddosia has quit [Remote host closed the connection]
iorivur has quit [Ping timeout: 252 seconds]
iorivur has joined #ocaml
divyanshu has quit [Ping timeout: 245 seconds]
divyanshu has joined #ocaml
pminten has quit [Remote host closed the connection]
ygrek_ has joined #ocaml
iorivur has quit [Ping timeout: 252 seconds]
tautologico has joined #ocaml
ygrek has quit [Ping timeout: 250 seconds]
<whitequark> why can Cmm accept aggregates in parameters and return them, but have no operations whatsoever to use or pack/unpack them ;_;
<whitequark> ohhh, it's not actually... aggregates. someone just used ty list instead of ty option. it just has either [ty] or [], representing unit ;_;
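The encoding whitequark describes can be shown in a few lines. The type names here are ours, not the actual Cmm definitions; the point is that a list that only ever holds zero or one element says "option" while admitting impossible states:

```ocaml
(* What the code apparently does vs. what it seems to mean. *)
type ty = Int | Float

type result_as_list = ty list      (* [] = unit, [t] = one value, rest unused *)
type result_as_option = ty option  (* None = unit, Some t = one value *)

(* Converting makes the invariant explicit; the wildcard case is the
   "aggregate" shape that is never actually produced. *)
let to_option : result_as_list -> result_as_option = function
  | [] -> None
  | [t] -> Some t
  | _ -> invalid_arg "multiple result types are never produced"
```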
<Drup> ahah =')
<Drup> (I think you just found the first commit to give to upstream ocaml)
<ggole> Heh
<ggole> Wonder if that was intended for some kind of packing at one point
<whitequark> ggole: it's pointless at the level at which Cmm resides
<whitequark> in LLVM, aggregate as argument and the content of an aggregate as several arguments are identical semantically
<whitequark> (this is how C ABI works)
<whitequark> that's unrelated to ABI though
ygrek_ has quit [Ping timeout: 265 seconds]
<whitequark> ABI clearly specifies that passing f(struct { int x, y; }) in arguments is exactly same as passing f(x, y)
<tautologico> isn't there an option to get better backtraces?
<ggole> whitequark: that can actually be a loss in some situations
<whitequark> better ?
<mrvn> whitequark: not on every arch. Only those with structs in registers.
<whitequark> mrvn: sure
<whitequark> ggole: how so?
<ggole> If you split a struct between regs and stack, and you then have to pass an address to it, you end up having to construct a useless copy just to get everything in memory.
<whitequark> oh, right
<mrvn> whitequark: args on stack might also be passed with different size than int. But I think that is only theoretical.
<ggole> (Not a huge deal, really.)
<mrvn> ggole: is that allowed? aren't only small structs passed in regs?
<whitequark> ggole: it's actually subtly broken in LLVM anyway
<whitequark> and clang does some weird crap with memcpy in caller, I think
<whitequark> I recall rust guys telling me.
<ggole> I get the impression that if it works for clang, it's considered fine.
<whitequark> unfortunately, that's how it is at the moment.
iorivur has joined #ocaml
<whitequark> there is some push to make it more generic, but it's a bit of a chicken-and-egg problem.
jave has quit [Ping timeout: 265 seconds]
jave has joined #ocaml
Submarine has quit [Quit: Leaving]
<Drup> tautologico: OCAMLPARAM=b, iirc
<whitequark> OCAMLRUNPARAM
<tautologico> Drup: yes, something like it, but I think there's another way to turn it on
<whitequark> OCAMLPARAM is to make ocaml{c,opt} soak options from command line
<whitequark> tautologico: Printexc.record_backtraces true; ?
<whitequark> backtrace*
<tautologico> yeah maybe
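Both options mentioned above can be combined in a small sketch: setting `OCAMLRUNPARAM=b` in the environment enables backtraces at runtime, and `Printexc.record_backtrace` does the same from code (compile with `-g` for useful line information):

```ocaml
(* Enable and print an exception backtrace programmatically,
   equivalent to running the program with OCAMLRUNPARAM=b. *)
let () =
  Printexc.record_backtrace true;
  try failwith "boom" with
  | Failure _ ->
    (* get_backtrace returns the trace of the most recently
       raised exception; empty without -g or with no frames. *)
    prerr_string (Printexc.get_backtrace ())
```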
tlockney_away is now known as tlockney
oriba has joined #ocaml
rgrinberg has joined #ocaml
iorivur has quit [Ping timeout: 245 seconds]
<whitequark> \o/ inferencer works
dsheets has joined #ocaml
ocp has joined #ocaml
dsheets has quit [Client Quit]
dsheets has joined #ocaml
rgrinberg1 has joined #ocaml
rgrinberg has quit [Ping timeout: 245 seconds]
rgrinberg1 has quit [Quit: Leaving.]
rgrinberg has joined #ocaml
rgrinberg has quit [Ping timeout: 265 seconds]
<ggole> Dammit.
<ggole> Mouse traps should have a thing you can adjust to make them more sensitive.
<Drup> ggole: murderer
<ggole> I'd be happy for them to just peacefully leave. :/
Don_Pellegrino has quit [Quit: Konversation terminated!]
Don_Pellegrino has joined #ocaml
<tautologico> send your diplomats to negotiate with them
<mrvn> ggole: Did a mouse steal your cheese?
<Drup> as with french people, equip the diplomats with cheese
<ggole> Well, it was peanut butter.
<ggole> Lil guy probably loved it.
rgrinberg has joined #ocaml
jwatzman|work has joined #ocaml
Simn has joined #ocaml
michel_mno is now known as michel_mno_afk
jo` has quit [Ping timeout: 265 seconds]
oriba has quit [Remote host closed the connection]
ocp has quit [Quit: Leaving.]
tobiasBora has quit [Quit: Konversation terminated!]
WraithM has quit [Ping timeout: 258 seconds]
rz has quit [Quit: Ex-Chat]
WraithM has joined #ocaml
nikki93 has quit [Ping timeout: 276 seconds]
zpe has quit [Remote host closed the connection]
zpe has joined #ocaml
zpe_ has joined #ocaml
zpe has quit [Read error: Connection reset by peer]
malo has joined #ocaml
avsm has joined #ocaml
rgrinberg has quit [Quit: Leaving.]
nikki93 has joined #ocaml
Submarine has joined #ocaml
Submarine has quit [Changing host]
Submarine has joined #ocaml
rgrinberg has joined #ocaml
Kakadu has quit [Quit: Page closed]
nikki93 has quit [Ping timeout: 258 seconds]
avsm has quit [Quit: Leaving.]
ygrek has joined #ocaml
<Drup> AltGr: it's not possible to give a local file or a git path for a compiler, is it intended ?
<AltGr> with my patch ?
<Drup> in general
<Drup> or I didn't figure out how to do it
<AltGr> it should be allowed I think
<Drup> by opam never accepted me to put a git path or a local file there, only an archive on the web
<Drup> but*
<AltGr> git: "https://github.com/ocaml/ocaml#safe-string" a l'air de marcher [fr: "seems to work"]
<Drup> huuum
<AltGr> et 'git://...' aussi avec ma PR de tout à l'heure [fr: "and 'git://...' too, with my PR from earlier"]
<Drup> (french :p)
<AltGr> err sorry
<Drup> maybe I tried "src:"
<Drup> and assumed it was going to figure it out with the .git
<AltGr> that's what happens on a friday evening when you speak another language at the same time :)
<AltGr> didn't even notice
<AltGr> 'git: "url#branch"' should work
<AltGr> and git://, etc also with one of today's PRs
<Drup> nice
<Drup> and for local file, only with file:// ?
<Drup> (and the PR)
<Drup> (corollary : when can we expect pinned compilers ? :D)
<AltGr> file:// should be the default
<Drup> nice
<Drup> ping nicoo ^
<AltGr> when I get time to rebuild my compilers-as-packages branch
<AltGr> Bye !
<Drup> thanks :)
AltGr has left #ocaml []
claudiuc has quit [Remote host closed the connection]
claudiuc has joined #ocaml
claudiuc has quit [Ping timeout: 240 seconds]
dapz has joined #ocaml
jo` has joined #ocaml
iorivur has joined #ocaml
dapz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
keen_____ has joined #ocaml
clan has joined #ocaml
keen____ has quit [Ping timeout: 265 seconds]
WraithM has quit [Ping timeout: 252 seconds]
thomasga has quit [Quit: Leaving.]
Kakadu has joined #ocaml
zpe_ has quit [Remote host closed the connection]
zpe has joined #ocaml
zpe has quit [Ping timeout: 265 seconds]
iorivur has quit [Ping timeout: 240 seconds]
elfring has joined #ocaml
dapz has joined #ocaml
so has joined #ocaml
rgrinberg has quit [Quit: Leaving.]
jo` has quit [Ping timeout: 240 seconds]
Arsenik has quit [Remote host closed the connection]
WraithM has joined #ocaml
rgrinberg has joined #ocaml
ygrek has quit [Ping timeout: 265 seconds]
dapz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
dapz has joined #ocaml
olasd is now known as debian|olasd
dapz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
dapz has joined #ocaml
Zerker has joined #ocaml
zpe has joined #ocaml
zpe has quit [Ping timeout: 250 seconds]
debian|olasd is now known as olasd
lostcuaz has joined #ocaml
Kakadu has quit [Remote host closed the connection]
Kakadu has joined #ocaml
divyanshu has quit [Quit: Computer has gone to sleep.]
dapz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
clan has quit [Quit: clan]
clan has joined #ocaml
nikki93 has joined #ocaml
kakadu_ has joined #ocaml
Zerker has quit [Quit: Colloquy for iPad - Timeout (10 minutes)]
Kakadu has quit [Ping timeout: 240 seconds]
Submarine has quit [Remote host closed the connection]
jao has joined #ocaml
jao has quit [Changing host]
jao has joined #ocaml
wolfnn has quit [Quit: Leaving.]
claudiuc has joined #ocaml
axiles has quit [Remote host closed the connection]
claudiuc_ has joined #ocaml
claudiuc has quit [Ping timeout: 258 seconds]
rgrinberg has quit [Quit: Leaving.]
saml has quit [Remote host closed the connection]
saml has joined #ocaml
hyperboreean has quit [Ping timeout: 240 seconds]
rgrinberg has joined #ocaml
ggole has quit []
elfring has quit [Quit: Konversation terminated!]
ollehar has joined #ocaml
hyperboreean has joined #ocaml
shinnya has joined #ocaml
<bernardofpc> gasche: "Its type" (line 5) on your format6 post
<bernardofpc> "ten-parameter GADT functions" (no s in plural adjective)
dapz has joined #ocaml
yacks has quit [Ping timeout: 258 seconds]
dapz has quit [Client Quit]
thomasga has joined #ocaml
thomasga has quit [Client Quit]
kakadu_ has quit [Quit: Konversation terminated!]
marr has joined #ocaml
WraithM has quit [Ping timeout: 250 seconds]
johnf has joined #ocaml
saml has quit [Quit: Leaving]
<johnf> I'm trying to sort out what this does "let tyi = `int;;" I can't seem to find any documentation and the type is [< 'int] which is equally mysterious to me.
<johnf> any hints would be great.
<Drup> be aware it's something a bit advanced in ocaml's type system
<johnf> great thanks!
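For the record, the backquote in `` `int `` marks a polymorphic variant constructor (distinct from `'int`, which would be a type variable), and the inferred type is open: `[> `int ]`, meaning "at least the tag `` `int ``". A small illustration, with names of our own choosing:

```ocaml
(* A polymorphic variant value: no prior type declaration needed. *)
let tyi = `int                    (* tyi : [> `int ] *)

(* Tags can carry arguments and mix freely in pattern matches;
   the function's argument type is inferred as a closed row [< ... ]. *)
let describe = function
  | `int -> "an int"
  | `pair (a, b) -> Printf.sprintf "a pair (%d, %d)" a b

let () =
  print_endline (describe tyi);
  print_endline (describe (`pair (1, 2)))
```

The `[> ...]` / `[< ...]` markers are the lower/upper bounds on which tags a value may have — the "advanced" part Drup warns about.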
<bernardofpc> It does necessarily correspond to the actual chronological evolution of format types -> does or does not ??
dapz has joined #ocaml
Zerker has joined #ocaml
dapz has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
rz has joined #ocaml
<tautologico> bernardofpc: I think it's "does not"... this confused me too
nikki93 has quit [Remote host closed the connection]
Simn has quit [Quit: Leaving]
Zerker has quit [Quit: Colloquy for iPad - Timeout (10 minutes)]
<bernardofpc> the 'c parameter looks suspiciously like "the result type of printf-lie functions" -> printf-like I think (hl gasche)
lostcuaz has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
ollehar has quit [Ping timeout: 276 seconds]
WraithM has joined #ocaml
marr has quit [Ping timeout: 240 seconds]
darkf has joined #ocaml
madroach has quit [Ping timeout: 252 seconds]
madroach has joined #ocaml
maattdd has quit [Ping timeout: 240 seconds]
NoNNaN has quit [Remote host closed the connection]
NoNNaN has joined #ocaml
tlockney is now known as tlockney_away
zzing has joined #ocaml
<tautologico> anyone using the multiple versions of quickcheck for ocaml?
<tautologico> one of the