systems changed the topic of #ocaml to: OCaml 3.07 ! -- Archive of Caml Weekly News: http://pauillac.inria.fr/~aschmitt/cwn , A tutorial: http://merjis.com/richj/computers/ocaml/tutorial/ , A free book: http://cristal.inria.fr/~remy/cours/appsem, Mailing List (best ml ever for any computer language): http://caml.inria.fr/bin/wilma/caml-list
phj has joined #ocaml
Vekza has joined #ocaml
noss has quit ["hej då"]
phj has quit [Read error: 104 (Connection reset by peer)]
slashvar1proof] has joined #ocaml
slashvar[proof] has quit [Read error: 60 (Operation timed out)]
mrsolo has joined #ocaml
LordBrain has quit ["sleep"]
JPL-Justin has joined #ocaml
<JPL-Justin> hey guys
bk_ has joined #ocaml
<JPL-Justin> hey bk_
<JPL-Justin> what up?
<bk_> hey
<bk_> not much
<JPL-Justin> what do you do with ocaml?
<bk_> learning it :]
<JPL-Justin> ahh...
<JPL-Justin> hmm...
<JPL-Justin> a good thing to try is
<JPL-Justin> make a hashtable out of closures!
<JPL-Justin> :-D
<bk_> heh
<JPL-Justin> that's a great introductory exercise
<JPL-Justin> or a functional FIFO queue out of two stacks
<JPL-Justin> *functional stacks
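A sketch of the two-stack functional FIFO queue mentioned above (a standard construction; the code and names are illustrative, not from the channel):

  (* Push onto `back`; pop from `front`; when `front` runs dry,
     reverse `back` into it. Each element is reversed at most once,
     so push and pop are amortized O(1). *)
  type 'a queue = { front : 'a list; back : 'a list }

  let empty = { front = []; back = [] }

  let push x q = { q with back = x :: q.back }

  let pop q =
    match q.front with
    | x :: rest -> Some (x, { q with front = rest })
    | [] ->
        (match List.rev q.back with
         | [] -> None
         | x :: rest -> Some (x, { front = rest; back = [] }))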
<JPL-Justin> no seriously the closure hashtable is AWESOME
<JPL-Justin> it's really really easy to do too!
<JPL-Justin> :-D
<JPL-Justin> now of course, it runs in linear time so it sucks but it's still cool
<bk_> i don't think i've grasped enough of the functional paradigm yet to be able to do that
<bk_> i come from the imperative world of programming and everything is different now
<JPL-Justin> bk_: well it's actually pretty simple... do you know about closures and currying?
<JPL-Justin> yeah 2 years ago that was me
<JPL-Justin> I was like... WTF?!?!?
<JPL-Justin> :)
<JPL-Justin> now I"m like... :) :) :)
<bk_> yes, but i'm having trouble adapting to all of that when i'm trying to write code
<bk_> making slow but steady progress tho
<JPL-Justin> well
<JPL-Justin> it turns out a functional hashtable is really really simple :)
<JPL-Justin> using closures
<JPL-Justin> you can do it with no actual data structures, just one function :)
<bk_> oh hm
<JPL-Justin> yeah it turns out the answer is really simple if you have done currying and closures
<JPL-Justin> :)
<Smerdyakov> JPL-Justin, do you really mean "hashtable" or just "mapping"?
<shrimpx> JPL-Justin: that's not really a _hash_ table :)
<shrimpx> hah
<JPL-Justin> shrimpx: well it can be a hashtable
<JPL-Justin> shrimpx: or well it can use hashing if you like to make it faster
<JPL-Justin> I think i should have said mapping
<JPL-Justin> I believe that a foolish TA in a class I had 2 years ago said hashtable imprecisely and I misremembered
<JPL-Justin> I yield the point
<Smerdyakov> JPL-Justin, you can't implement a hash table without using arrays, essentially.
<Smerdyakov> JPL-Justin, you won't be able to get expected constant time for both insertions and lookups.
<JPL-Justin> well I did say it wasn't constant time
<JPL-Justin> however even hashtables are only KINDA constant time
<JPL-Justin> you have to keep rehashing them to make them constant time, and rehashing is linear time
<Smerdyakov> Like I said, _expected_ constant time.
<JPL-Justin> ko
<JPL-Justin> ok
<JPL-Justin> I don't usually use that particular term
<Smerdyakov> Rehashing keeps the amortized expected time constant.
<JPL-Justin> is it really amortized constant time... let me work out the maths
<JPL-Justin> yes of course
<JPL-Justin> wait
<JPL-Justin> I don't think it's constant time man
<Smerdyakov> Yes, I agree with you and disagree with my last assertion.
<JPL-Justin> hmmm lets see what the time actually is
<JPL-Justin> basically if, when you rehash, you double the size
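(The doubling scheme JPL-Justin starts to sketch here does come out constant; this is the standard table-doubling analysis, not worked out in the log. Rehashes at sizes \(1, 2, 4, \ldots, n\) cost at most \(1 + 2 + 4 + \cdots + n < 2n\) in total, so \(n\) insertions do \(O(n)\) work overall: amortized expected \(O(1)\) per insertion.)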
<Smerdyakov> We generally consider hash tables of fixed size, though.
<JPL-Justin> I thought it was constant load factor
<JPL-Justin> access time is linear in n if the size is constant
<Smerdyakov> Analysis of expected time for hash table operations is usually done with a fixed size table.
<JPL-Justin> because it's linear in load factor
Meta has quit [Read error: 60 (Operation timed out)]
<JPL-Justin> if you mean fixed size in the sense of buckets
<JPL-Justin> then that means load factor is proportional to n, and since access/insertion is proportional to load factor, it's also linear time
<Smerdyakov> Fixed number of buckets. Maximum number of insertions allowed fixed based on number of buckets.
<JPL-Justin> dude
<JPL-Justin> even if it's a "maximum"
<JPL-Justin> the fact that it's changing
<JPL-Justin> makes it proportional to the number that are in it
<JPL-Justin> if you start with 0 in it
<Smerdyakov> Would you mind rephrasing that?
<JPL-Justin> okay sure
<JPL-Justin> AFAIK most hashtables are set up as
<JPL-Justin> buckets that can hold multiple items
<JPL-Justin> assume uniform hashing
<JPL-Justin> Lets say you allow a max load factor of 5
<JPL-Justin> and the size is, say, 1000
<JPL-Justin> that means total 5000 things in it
<JPL-Justin> now, when there are 1000 things in it, on average each bucket contains one thing
<JPL-Justin> so you only have to search through one thing in the bucket to retrieve
<JPL-Justin> however if there's 2000 things in there, that's 2 in each bucket you search through on average
<JPL-Justin> and 3000 = 3 and so on
<JPL-Justin> so it's proportional to n/1000 which is proportional to n, which is linear time
<JPL-Justin> IIRC hashtables are analysed for constant load factor, and not counting the hash costs because they are small
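(In symbols, the claim JPL-Justin is making, assuming uniform hashing and ignoring the hash cost as he does: with \(m\) buckets fixed and \(n\) items inserted, a lookup expects about \(1 + \alpha = 1 + n/m\) probes, which is \(\Theta(n)\) when \(m\) is a constant such as 1000, and \(O(1)\) only while the load factor \(\alpha\) is held constant.)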
<bk_> hm i seem to be using closures implicitly all the time
<JPL-Justin> bk_: great
<JPL-Justin> bk_: the way to do a "mapping" with closures is really cool. give it a try yourself
<JPL-Justin> and ask the chan if it's a bugger
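A sketch of the closure-based "mapping" being suggested (illustrative names; assumes structural equality on keys):

  (* The whole table is one function. Insertion wraps it in a new
     closure; lookup walks the chain of closures, so it is linear in
     the number of insertions, as conceded above. *)
  let empty = fun _ -> None

  let insert table key value =
    fun k -> if k = key then Some value else table k

  (* let t = insert (insert empty "x" 1) "y" 2 in t "x"  gives  Some 1 *)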
<mrsolo> hi
<mrsolo> traffic here!
Meta has joined #ocaml
<shrimpx> that way to implement maps is definitely cool :)
<JPL-Justin> hey mrsolo
<Smerdyakov> JPL-Justin, the classic result goes something like this, with some details elided because I don't remember them: If you have a randomly chosen member of a universal hash family and you do some number insertions (based on a reasonable function of hash table size) into a hash table, the expected time per insertion is constant.
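(For reference, the textbook definition behind that statement: a family \(H\) of functions \(h : U \to \{0, \ldots, m-1\}\) is universal if for every pair of distinct keys \(x \neq y\), \(\Pr_{h \in H}[h(x) = h(y)] \le 1/m\).)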
<mrsolo> what's learning curve for ocaml?
<shrimpx> hudak has stuff like that in his haskell book
<mrsolo> and where is the language going?
<JPL-Justin> Smerdyakov: that's only true if load factor is constant
<JPL-Justin> it's actually linear in load factor
<JPL-Justin> as you can see
<Smerdyakov> JPL-Justin, I told you that we set a limit on the number of elements to insert based on the table size...
<JPL-Justin> but...
<JPL-Justin> *THE LOAD FACTOR CHANGES OVER TIME*
<JPL-Justin> start with 0 in there
<JPL-Justin> that's 0 load factor
<JPL-Justin> put 1000 in there based on the size
<JPL-Justin> and that's 1000/size for load factor
<JPL-Justin> etc
<JPL-Justin> because the chance of a collision goes up over time
<Smerdyakov> So you limit the number of insertions such that the expected load factor stays low.
<JPL-Justin> uh...
<JPL-Justin> would you not contend that
<JPL-Justin> p = probability of collision as a function of n and size
<JPL-Justin> p is approximately equal to n / size
reltuk has left #ocaml []
<Smerdyakov> No
<JPL-Justin> would you not also agree that the expected time that an insertion takes is AT LEAST linearly proportional to p?
<JPL-Justin> ?
<JPL-Justin> how so?
<Smerdyakov> n / size will be greater than one. :P
<JPL-Justin> okay
<JPL-Justin> will you agree that
<JPL-Justin> the time taken to insert is proportional to the load factor?
<Smerdyakov> Yes
<JPL-Justin> and would you agree that load factor is linearly proportional to n if size is constant?
<Smerdyakov> What is n?
<JPL-Justin> n is the number of items already in there
<JPL-Justin> how many you've previously inserted
<JPL-Justin> size is the number of buckets
<JPL-Justin> load factor = n/size
<Smerdyakov> OK
<JPL-Justin> so that means that insert is therefore linearly proportional to n, plus the hashing constant
<JPL-Justin> therefore hashtables are only constant time whenever load factor is constant, hence the idea of rehashing
<Smerdyakov> You don't need to rehash if you choose a large enough hash table.
<JPL-Justin> Smerdyakov: if you choose a large enough table, you don't even have to hash
<JPL-Justin> you can just use an array!
<JPL-Justin> dude
<Smerdyakov> And the nice result here is that you can choose a table much smaller than the whole space of possible keys and _still_ get expected constant time for all operations.
<JPL-Justin> but
<JPL-Justin> load factor is not constant without rehashing
<JPL-Justin> so...
<JPL-Justin> you're linear time man
<Smerdyakov> No, it works like this:
<JPL-Justin> I don't understand how you can have constant time access when load factor is not constant
<Smerdyakov> We fix how many items will ever be inserted.
<JPL-Justin> load factor changes, so does access time
<JPL-Justin> that doesn't matter
<Smerdyakov> _Then_ we choose the table size.
<JPL-Justin> okay look
<JPL-Justin> that doesn't change the fact that
<JPL-Justin> the amount of time to do an insert
<JPL-Justin> changes as you add more things
<Smerdyakov> The expected load factor will be constant, independent of the number of items we insert.
<JPL-Justin> even if the table is as big as it ever needs to be
<Smerdyakov> It changes, but it remains bounded by a constant that does not depend on the input size.
<JPL-Justin> clearly it takes longer to insert in a "full" table than an empty table, or to at least retrieve it
<JPL-Justin> yes it does change by the amount that's in there
<Smerdyakov> Right, but the chance of getting a full table is miniscule.
<JPL-Justin> okay let me reformulate
<Smerdyakov> Do you know what I mean by "expected time"?
<JPL-Justin> at any given time "n" is the number of items ALREADY INSERTED
<JPL-Justin> now, I know for SURE that
<JPL-Justin> if you request something from the table
<JPL-Justin> after "n" has been inserted, and you never rehash
<JPL-Justin> it's linear in n
<Smerdyakov> Luckily, no.
<JPL-Justin> plus the hashing constant
<JPL-Justin> ???
<JPL-Justin> how not?
<JPL-Justin> you have to look through a linked list
<JPL-Justin> that's linear
<JPL-Justin> you have a bucket with a linked list
<Smerdyakov> With random distribution on inputs plus a universal hash family, hardly any collisions occur with a reasonably sized table.
<JPL-Justin> you said before
<JPL-Justin> "table much smaller than the whole spac eof possible keys"
<JPL-Justin> so what you're really saying is that this is only true when
<JPL-Justin> the space of possible keys is
<JPL-Justin> much much much larger than
<JPL-Justin> the number of keys you actually use
<JPL-Justin> which is true sometimes
<JPL-Justin> you said before about load factors being greater than one
<Smerdyakov> It's almost always true.
<JPL-Justin> I still contend that
<Smerdyakov> Take hashing strings, for instance.
<JPL-Justin> yeah
<Smerdyakov> You use nowhere near the total number of strings.
<JPL-Justin> yeah yeah
<JPL-Justin> anyways
<JPL-Justin> I would use "expected" to mean "average"
<JPL-Justin> are you using it to mean "best possible case"
<JPL-Justin> ?
<shrimpx> Assume that you can predict the future. Pick a table size equal to the number of items that will be inserted and devise a perfect hash function.
<Smerdyakov> No, I mean "average" as well.
<JPL-Justin> just because the amount of change is small doesn't mean it's linear
<Smerdyakov> Look, JPL-Justin, you are arguing with CLRS at this point. :P
<JPL-Justin> Smerdyakov: I don't mind :)
<JPL-Justin> Smerdyakov: Are you contending there will be no collissions?
<Smerdyakov> The expected number of collisions will be negligible.
<JPL-Justin> so?
<JPL-Justin> just because it's small doesn't mean it's not there
<JPL-Justin> average case has to take those things into account
<JPL-Justin> because it changes things
<Smerdyakov> If it does not vary with increasing number of items to be inserted, then we can bound the expected time for an insertion with a constant.
<JPL-Justin> slightly but the change is quantifiable
<JPL-Justin> it does vary!
<JPL-Justin> I contend that when there's less stuff in it
<Smerdyakov> No, the nice thing is that you can make it not vary.
<JPL-Justin> uh... how?
<Smerdyakov> You have to make a larger hash table.
<JPL-Justin> you just said the table is fixed size!
<JPL-Justin> that is, its size is totally fixed
<JPL-Justin> like never changable
<JPL-Justin> as in constant
<Smerdyakov> It doesn't change during execution.
<Smerdyakov> It's fixed as a function of the number of input items.
<JPL-Justin> right
<JPL-Justin> but... what I'm saying is not that this changes in the number of input items
<JPL-Justin> I"m saying this changes in how far you've gotten in your input!
<JPL-Justin> lets call number of total input items nmax
<Smerdyakov> It does change, but we can BOUND how high we expect it to get.
<JPL-Justin> I'm not saying it's linear in nmax, obviously not if you can control the size of the table
<JPL-Justin> but it still increases linearly with n
<JPL-Justin> yes it's bounded, as is everything in computer science because computers contain finite information
<JPL-Justin> everything 'cept maybe halting problem type stuff
<JPL-Justin> it doesn't change the fact that it is not constant time
<JPL-Justin> for instance I can tell you that
<JPL-Justin> solving a travelling salesman problem of n cities will take
<JPL-Justin> some bounded amount of time
<JPL-Justin> that doesn't make it constant!
<Smerdyakov> OK, I poked around on the web. I will now state the result more formally:
<JPL-Justin> I bet I'll agree with it
<Smerdyakov> Let n items to insert be given. Create a hash table of size n, using a hash function from a universal hash family. The expected time per insertion is O(1).
<Smerdyakov> Also, expected O(1) time for any lookup.
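(A one-line version of why that bound holds, assuming simple uniform hashing: after \(x \le n\) insertions into \(m = n\) buckets the expected chain length is \(x/m\), so any single operation expects at most \(1 + x/m \le 1 + n/n = 2 = O(1)\) probes.)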
<JPL-Justin> okay allow me to ask a question
<JPL-Justin> this may be the misunderstanding I am having
<JPL-Justin> let us say that to calculate "expected time" you average m different lookups
<JPL-Justin> is m >> n, about n, or << n
<Smerdyakov> We average _all_ possible lookups.
<JPL-Justin> but you can't do that
<JPL-Justin> because...
<JPL-Justin> the distribution of when the lookups are going to be made
<JPL-Justin> in regards to insertions
<JPL-Justin> is unknown!
<JPL-Justin> if you assume that you start with everything inserted before you do all lookups, then I agree
<Smerdyakov> We consider that they'll be made uniformly at random.
<JPL-Justin> okay
<JPL-Justin> clearly then the difference here is that
<Smerdyakov> Or maybe we don't need to... /me thinks
<JPL-Justin> this definition is not what I would ever use to describe it
<JPL-Justin> here's why
<JPL-Justin> I am usually interested in the amount of time it takes to put data in or get it out of a data structure
<JPL-Justin> depending on HOW MUCH IS IN THERE
<JPL-Justin> in this case, that's linear
<JPL-Justin> now HOW MUCH IT CAN HOLD
<JPL-Justin> *not
<Smerdyakov> The expected amount that will be in any given bucket is constant.
<JPL-Justin> no...
<JPL-Justin> it changes as you add more!
<JPL-Justin> the more you put in
<JPL-Justin> the more there are there in each bucket!
<Smerdyakov> Nope.
<Smerdyakov> The expected amount in any bucket is 1.
<JPL-Justin> ??????????
<JPL-Justin> no
<Smerdyakov> Yes.
<JPL-Justin> only AFTER insertion
<JPL-Justin> look man
<JPL-Justin> at some point in the program
<JPL-Justin> there are 0 things in each bucket
<JPL-Justin> nothing
<JPL-Justin> nada
<JPL-Justin> zilch
<Smerdyakov> Before insertion, the expected amount is less than 1.
<JPL-Justin> the expected amount is 0
<Smerdyakov> Lookup can only be _easier_ before insertion.
<JPL-Justin> because you've never inserted anything
<JPL-Justin> right
<JPL-Justin> and I"m saying it gets harder as the program goes on
<JPL-Justin> and does more insertion
<Smerdyakov> No, it won't be 0.
<JPL-Justin> linearly scaling!
<JPL-Justin> ?
<JPL-Justin> dude
<Smerdyakov> After x insertions, the expected amount per bucket is x/n.
<JPL-Justin> if there's nothing in the table
<JPL-Justin> yes
<JPL-Justin> I said before insertion
<JPL-Justin> the program starts with nothing
<JPL-Justin> in it
<Smerdyakov> Right
<JPL-Justin> so we start at load factor 0
<JPL-Justin> and end up with load factor 1
<Smerdyakov> Yup
<JPL-Justin> and if we do a lookup at any given time, that lookup time
<JPL-Justin> is proportional to the load factor
<JPL-Justin> so... if it's half full, it will take on average half the time
<Smerdyakov> Right, but no lookup will be _worse_ than it would be after all insertions are done.
<JPL-Justin> yes I understand that
<JPL-Justin> I'm not saying it ever gets worse
<JPL-Justin> than at the end
<JPL-Justin> I"m just saying HOW BAD IT IS DEPENDS ON HOW MANY HAVE BEEN INSERTED
<JPL-Justin> which, IMHO, is what's important about a data structure
<JPL-Justin> how much time do you expect on a lookup, given how many are inside
<JPL-Justin> in this case, that's clearly linear
<Smerdyakov> And this is O(1).
<JPL-Justin> no
Meta_ has joined #ocaml
<JPL-Justin> look
<Smerdyakov> Do you know what O() means?
<JPL-Justin> Yes
<Smerdyakov> It's an upper bound.
<Smerdyakov> Not a tight bound.
<JPL-Justin> that's best case
<JPL-Justin> *worst case
<Smerdyakov> No, I'm only talking about expected case here.
<JPL-Justin> hmm
<JPL-Justin> O() is upper bound
<JPL-Justin> which is worst case
<Smerdyakov> No.
<JPL-Justin> unless you mean worst case in terms of hashing
<mrsolo> hmm
<Smerdyakov> O() is a notion independent of any specific property like that.
<Smerdyakov> It applies to functions.
<JPL-Justin> uh not in my discrete math book
<JPL-Justin> but
<JPL-Justin> sure
<JPL-Justin> you can use it in that sense
<JPL-Justin> basically you are saying
<JPL-Justin> IIRC...
<JPL-Justin> g(x) is O(f(x)) if
<JPL-Justin> as x approaches infinity
<JPL-Justin> then g(x) is approximately equal to k*f(x) for some constant k
<JPL-Justin> that is, the difference between them is nonincreasing
<JPL-Justin> after some value x
<Smerdyakov> No. Past a certain point, g(x) is less than k*f(x) for some fixed k.
<JPL-Justin> however... to be honest
<JPL-Justin> how is that different?
<Smerdyakov> The difference can increase continually.
<JPL-Justin> can you give me a function that meets your criteria but not mine, or vice versa?
smklsmkl has quit [Connection timed out]
<Smerdyakov> g can be constant-0 and f can be the identity function.
<Smerdyakov> Meets mine but not yours.
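(In symbols, the definition Smerdyakov is using: \(g(x) = O(f(x))\) iff there exist constants \(k > 0\) and \(x_0\) such that \(g(x) \le k \, f(x)\) for all \(x > x_0\). His example \(g(x) = 0\), \(f(x) = x\) satisfies this even though the gap \(k f(x) - g(x)\) grows without bound, so "the difference is nonincreasing" is not part of the definition.)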
smklsmkl has joined #ocaml
<mrsolo> jpl quoted the definition of O()
<JPL-Justin> okay so doesn't that mean that
<JPL-Justin> you're saying g(x) is O(f(x)) under those conditions
<JPL-Justin> meaning that
<JPL-Justin> oh hmmm
<JPL-Justin> okay
<JPL-Justin> so doesn't that mean you can say
<JPL-Justin> that access time on an array is O(n^n)?
<Smerdyakov> Yes
<JPL-Justin> Alrighty then
<JPL-Justin> I'll say that access time on hashtable is O(n)
<JPL-Justin> :-P
<mrsolo> O(1)
<Smerdyakov> And you'd be right, and it's also expected O(1).
<JPL-Justin> wait
<JPL-Justin> a minute
<JPL-Justin> when you say expected O()
<JPL-Justin> you are modifying the meaning of O()
<Smerdyakov> Clearly it's O(n). You don't even need expectation for that, since no bucket has more than n items.
<JPL-Justin> maybe you should use the theta or omicron or whatever it is
<JPL-Justin> it's been a while
<Smerdyakov> I'm not modifying anything about O().
<mrsolo> oh hash bucket
<JPL-Justin> okay so I can say
<mrsolo> not just hash
<Smerdyakov> You normally talk about worst case, where O() applies to the function from input sizes to worst-case running time.
<JPL-Justin> it's expected O(n) and you say expected O(1) and you say both are right
<Smerdyakov> In expected time, we just apply O() to the function from input sizes to expected running time.
<Smerdyakov> JPL-Justin, yes.
<Smerdyakov> JPL-Justin, I can also say it's Theta(n), for a tight bound.
<Smerdyakov> So we have this result:
<Smerdyakov> Let n items to insert be given. Create a hash table of size n, using a hash function from a universal hash family. The expected time per insertion or lookup is Theta(1).
<Smerdyakov> Actually, the Theta only applies for lookups done after all insertions.
<Smerdyakov> We always have O(1), though.
<JPL-Justin> now
<JPL-Justin> allow me to make a bold proposition
<JPL-Justin> given a linked list
<JPL-Justin> maximum size nmax
<JPL-Justin> the time per insertion is k*n
<JPL-Justin> the maximum time of any insertion is k*nmax
<JPL-Justin> therefore if nmax is fixed
<JPL-Justin> expected insertion or lookup is O(1)
<Smerdyakov> Sure. We can always get O(1) when we limit input size.
<JPL-Justin> because for any nmax, there is an expected time bound that is finite
<JPL-Justin> yes
<JPL-Justin> which is what you did man
<JPL-Justin> you limited the input size
<Smerdyakov> Nope.
<JPL-Justin> you said
<JPL-Justin> you know the input size
<JPL-Justin> at the beginning of the program
<Smerdyakov> What I said amounts to requiring the program to declare how many items it will ever insert, before it starts inserting any.
<Smerdyakov> Any size may be declared.
<JPL-Justin> I guess what I don't like about this is that it does not capture how the data structure responds over time during the program
<JPL-Justin> okay
<JPL-Justin> well
<JPL-Justin> the thing is
<JPL-Justin> as the program goes on
<JPL-Justin> lookups take longer and longer on average
<Smerdyakov> That's OK.
<JPL-Justin> that number starts small and is nondecreasing
<JPL-Justin> so
Meta has quit [Read error: 110 (Connection timed out)]
<Smerdyakov> They're always bounded by a constant.
<JPL-Justin> and what you're saying is
<Smerdyakov> (In the expected case)
<JPL-Justin> the constant doesn't depend on the size
<Smerdyakov> Right
<JPL-Justin> I guess what I'm saying is
<JPL-Justin> that I don't see why this is the most important metric
<JPL-Justin> for a data structure
<JPL-Justin> isn't it more important, the time dependence as the program goes along?
<Smerdyakov> Why?
<Smerdyakov> If you can show things never get worse than a fast time, isn't that enough?
<JPL-Justin> it explains the dynamics of the data structure
<JPL-Justin> well
<JPL-Justin> hmm
* JPL-Justin thinks long and hard about this
<JPL-Justin> okay here's why
<JPL-Justin> I believe often times that what is important in looking at the cost of various algorithms is
<JPL-Justin> the effects of various optimizations, etc
<JPL-Justin> and... this metric does not in any way address the fact that a system that needs to be sped up could do so by getting rid of keys that are no longer used
<JPL-Justin> whereas the metric of how the access time changes with the number of keys does
<JPL-Justin> a lot of systems are such that there is no "fast enough", faster is better until you reach hardware limitations
<Smerdyakov> The formal analysis of algorithms doesn't deal with hardware limitations.
<JPL-Justin> the systems I work with at work are often like this; even a magic algorithm could not move data around too fast to be good, the hardware is limited so even the best possible speed could be improved noticeably
<JPL-Justin> well formal analysis should deal with dynamic systems!
<JPL-Justin> and afaik it does
<Smerdyakov> I don't think you're pointing out a flaw.
<Smerdyakov> The theory does what it's meant to do.
<Smerdyakov> If your goals aren't compatible with it, you use a different theory.
<JPL-Justin> hmmm
* mrsolo agrees with smerdyakov
<JPL-Justin> well I'll state this:
<JPL-Justin> I believe everything you've said here was correct
<JPL-Justin> and I also feel like it does not quite jive with how I intuitively describe systems... which means I need to expand my perspective a bit
<JPL-Justin> however I will also say that I've always approached CS from a bit of a practical standpoint... although I'm trying to learn category theory :)
<JPL-Justin> you make very persuasive arguments, and it has been most informative for me, and I'm definitely going to read CLRS about this so that I get this completely right
<mrsolo> CLRS?
<JPL-Justin> THE algorithm book :)
<JPL-Justin> look it up :)
<mrsolo> oh that
<mrsolo> the leaf book
<JPL-Justin> ?
<JPL-Justin> no leaf on my book maybe on yours?
m[kel[ has joined #ocaml
<mrsolo> mine is very good
<mrsolo> old
<mrsolo> like hmm 10+ years
<JPL-Justin> oh wow
<JPL-Justin> mine is new :)
<JPL-Justin> I got it for my birthday last year, I was very excited
<mrsolo> it is comp sci bible
<JPL-Justin> thanks Smerdyakov
<JPL-Justin> well, for theorists
<mrsolo> no
<JPL-Justin> for practical guys, Numerical Recipes
<JPL-Justin> can be more helpful
<JPL-Justin> in my line of work, NR is much more useful
<mrsolo> for everybody.. numerical recipes is mainly computation
<JPL-Justin> but that's because I do computational physics
<JPL-Justin> and we don't need no stinkin' trees
<JPL-Justin> I will tell you this
<JPL-Justin> the physicists I work with
<JPL-Justin> wouldn't touch CLRS
<mrsolo> generally software design uses algorithms a lot
<JPL-Justin> to be honest
<JPL-Justin> all my design stuff used very few algorithms
<JPL-Justin> it's mostly like... how to make things reusable
smklsmkl has quit [Connection timed out]
<JPL-Justin> or get them to fit
<JPL-Justin> etc
<mrsolo> well but surely you must have come to a decision point of
<JPL-Justin> even my supervisor here says that difficult algorithmic stuff is rare, usually you use someone else's algorithm that works
<mrsolo> do i use 'hash, list, jumplist, btree or what?'
<JPL-Justin> it's designing how your thousands of classes fit together
<JPL-Justin> eh... only for the most basic things
<JPL-Justin> and in those cases usually you don't have to have any clue how it works
<JPL-Justin> just what the time bounds are on it
<mrsolo> that's sufficient
<JPL-Justin> like... "I can use linked list because I'm storing maybe 10 things"
<JPL-Justin> yeah
<JPL-Justin> so you don't need CLRS for that
<JPL-Justin> usually API docs will tell you
<JPL-Justin> at least for Java they do
<mrsolo> well you need deeper understanding
<mrsolo> to make small modifications
<JPL-Justin> yeah
<JPL-Justin> I don't do that often though
<JPL-Justin> honestly I think it's great, and want to get my masters degree in CS
<JPL-Justin> however... I don't generally find it that useful
<JPL-Justin> it's much more exciting and interesting than useful for a career in computational physics
<mrsolo> it's something in the toolbox
<Smerdyakov> Do you have a bachelors in CS?
<JPL-Justin> no, it's applied physics
<mrsolo> sure comes handy when you needs it
<JPL-Justin> minor in CS
<JPL-Justin> but my job is software engineering
<JPL-Justin> I'm always disappointed at the lack of algorithms...
<Smerdyakov> I'm not! I'm no good at algorithms. :)
<JPL-Justin> most of the complex stuff I do is designing things, like interacting between object heirchies and databases
<mrsolo> numerical computation is difficult
<JPL-Justin> or say image manipulation stuff in multiple dimensions
<JPL-Justin> Smerdyakov: you seem to have your math down pretty well
<JPL-Justin> mrsolo: damn straight :)
<JPL-Justin> mrsolo: I parallelized a three dimensional magnetohydrodynamical astrophysics simulation in F77
<JPL-Justin> to work on the velocity cluster at Cornell University
* Smerdyakov gags at the mention of Fortran....
* JPL-Justin smiles the tired, bitter smile of someone who did a year of programming in fortran on a parallel machine
<JPL-Justin> damn is all I can say
<mrsolo> oh Fortran still dominates in numerical computation?
<mrsolo> there was a talk that c/c++ was supposed to take over when i was in school
<JPL-Justin> mrsolo: in my field
<JPL-Justin> hahahah
<JPL-Justin> dude these people are like 60
<JPL-Justin> and they are from russia
<mrsolo> that was the time when people started to implement Numerical Recipes in C/C++
<JPL-Justin> so
<JPL-Justin> :-P
<JPL-Justin> I don't like NR that much because the code is crap
<JPL-Justin> the algorithms are great
<JPL-Justin> and one of my professors is one of the authors
<JPL-Justin> so
<JPL-Justin> I can't complain to him :)
<JPL-Justin> but I know who to blame
<JPL-Justin> the C version is a direct port of the fortran version
<JPL-Justin> the design is terrible
<mrsolo> it was written by mathematicians not computer engineers.. so :-)
<JPL-Justin> they use globals
<JPL-Justin> it's not threadsafe
<JPL-Justin> I know
<JPL-Justin> it's just... people cut and paste and make crappy programs because it's so unsafe
<JPL-Justin> even just taking a CS undergrad to help out would have done it
<mrsolo> do you use ocaml?
<mrsolo> for your job/work
<JPL-Justin> me?
<JPL-Justin> no :(
<JPL-Justin> I wish
<JPL-Justin> I use Java.
<JPL-Justin> Java isn't bad, and it's getting better every day.
<bk_> hehe
<mrsolo> why are you in here then? :-)
<JPL-Justin> But... I prefer ocaml for some things, just because it's so elegant.
<mrsolo> ha
<JPL-Justin> I'd *LOVE* to see something like jython, but with ocaml instead of python
<JPL-Justin> prolly won't happen though
<JPL-Justin> because I use ocaml for my own hobby projects
<JPL-Justin> like my dataflow language
<JPL-Justin> :)
<mrsolo> i am wondering where the language is going
<Maddas> Python? O'Caml? Java?
<mrsolo> it seems to be a good replacement for writing various C extensions for perl/python/ruby
<mrsolo> ocaml
<Maddas> Why only use it as an extension language?
<mrsolo> lack of support modules
<Maddas> Somebody has to write them.
<mrsolo> sure :-)
<JPL-Justin> yeah #1 problem with ocaml is... you can't do anything in it
<JPL-Justin> #2 problem is... no one understands it
<JPL-Justin> #3 problem is... wtf it won't compile!
<JPL-Justin> :)
<Maddas> "You can't do anything in it"?
<JPL-Justin> I mean I can't do anything in it :)
<JPL-Justin> hehehe
<JPL-Justin> Maddas: I should rephrase
<mrsolo> general rule of software development 'rapid development, optimize subsection when needed'
<mrsolo> #3 it won't compile?
<JPL-Justin> I'm being a little silly
<JPL-Justin> what I really mean is
<JPL-Justin> #1. The library support is minimal compared to other languages. For instance, the Java API does almost anything you'd ever want. OCaml's libraries are OK but even simple things are hard to do if they are out of the ordinary.
<JPL-Justin> #2. No one understands it, in the sense that most programmers don't understand that type of programming, and are too lazy/busy to learn
<JPL-Justin> #3. It's hard to make a program that compiles because there are some really weird rules in the type system
<Maddas> What weird rules?
<mrsolo> #3 is just an excuse
<JPL-Justin> no
<JPL-Justin> I'm serious
<mrsolo> people have worked with perl
<JPL-Justin> here's an example
<JPL-Justin> okay perl sucks
<mrsolo> and got that to work
<JPL-Justin> let me just say that
<JPL-Justin> here's what I'm saying
<Maddas> It has lots of libraries though, and many people understand it, and the type system is very weak, so your three points don't apply there
<JPL-Justin> I had a piece of code that seemed like it should work
<JPL-Justin> but it doesn't
<JPL-Justin> Maddas... wtf are you talking about
<JPL-Justin> Ocaml has hardly any libraries
<JPL-Justin> and most people don't
<JPL-Justin> understand it
<Maddas> I meant Perl
<JPL-Justin> I'm not talking about perl at all
<JPL-Justin> someone else brought that up
<mrsolo> i did
<Maddas> Yes, and I talked about it.
<JPL-Justin> you said my three points don't apply? I never said they did
<mrsolo> i said that #3 is not really a point because people put up with perl
<JPL-Justin> I'm talking about ocaml only
<JPL-Justin> well let me finish my damn sentence here
<JPL-Justin> #3 example:
<JPL-Justin> given a function that takes in say
<JPL-Justin> type num = INT of int | FLOAT of float
<JPL-Justin> let myfun f x = match x with INT(i) -> f(i) | FLOAT(f) -> f(f)
<JPL-Justin> that doesn't work
<JPL-Justin> even if
<JPL-Justin> oops
<Maddas> Yes.
<JPL-Justin> rename FLOAT(f1) -> f(f1)
<JPL-Justin> even if
<JPL-Justin> f can take in an int or a float
<JPL-Justin> and return the same type
<JPL-Justin> I don't like that :(
<JPL-Justin> and sometimes it's nonobvious to people where that type rule comes from... I'm sure there's a good reason to do with type unification
<JPL-Justin> but I certainly missed that somewhere
<JPL-Justin> because the type WOULD check just fine, it's just that the type checker cannot do it
<Smerdyakov> What's f?
<JPL-Justin> f is a polymorphic function
<JPL-Justin> lets say
<JPL-Justin> 'a -> bool
smklsmkl has joined #ocaml
<Smerdyakov> And you're saying the code you gave doesn't typecheck?
<JPL-Justin> well
<JPL-Justin> usually I use it more like
<JPL-Justin> this
<JPL-Justin> let myfun f x y = match (x,y) with (INT i1, INT i2) -> f i1 i2 | (FLOAT f1, FLOAT f2) -> f f1 f2 | _ -> raise (Failure "bad type")
<JPL-Justin> that doesn't typecheck
wazze has quit ["--- reality is that which, when you stop believing in it, doesn't go away ---"]
<JPL-Justin> even though there's no way it can possibly call f with bad inputs etc
m[kel[ has quit [Connection timed out]
<JPL-Justin> f could be something like (<) or (>=)
<Smerdyakov> How could it typecheck?
<Smerdyakov> f takes num's.
<JPL-Justin> no it doesn't
<Smerdyakov> You're passing it int's and float's.
<JPL-Justin> I never said it does
<JPL-Justin> f takes 'a -> bool
<Smerdyakov> No
<JPL-Justin> f is of type 'a -> bool
<JPL-Justin> I mean
<JPL-Justin> wait
<JPL-Justin> 'a -> 'a -> bool
<Smerdyakov> So that example has two different functions named f?!
<JPL-Justin> no
<JPL-Justin> just one
<JPL-Justin> where do you see two?
<Smerdyakov> You use a match against (INT i1, INT i2), which means it only takes nums as input.
<Smerdyakov> Not polymorphic at all.
<JPL-Justin> no
<JPL-Justin> I"m matching (x,y)
<JPL-Justin> with it
<JPL-Justin> not
<JPL-Justin> f
<JPL-Justin> so x,y are nums
<JPL-Justin> i1 = int
<JPL-Justin> i2 = int
<JPL-Justin> f1 = float
<JPL-Justin> f2 = float
<JPL-Justin> so f is always given either two ints, or two floats
<Smerdyakov> Yes, x,y are nums. And x and y are the function's two parameters.
<JPL-Justin> no they aren't!
<Smerdyakov> Therefore, f takes only nums.
<JPL-Justin> look at the code
<JPL-Justin> never ever ever is f given a num
<Smerdyakov> Oops. Misread it.
<JPL-Justin> It's easy to misread :)
<JPL-Justin> hehehe
<JPL-Justin> yeah that'd be weird man
<JPL-Justin> anyways...
<JPL-Justin> I know this doesn't compile because I spent several hours on it
<JPL-Justin> what I think is.... in theory it could typecheck, but the typechecker would have to be more awesome than it already is to handle these cases
<JPL-Justin> and I wish it was but I also wish I had a pony too
<JPL-Justin> so
<Smerdyakov> Yeah, this is because ML only allows prenex (predicative) polymorphism.
* JPL-Justin looks it up
<Smerdyakov> Or "let-polymorphism."
* JPL-Justin is still reading about it
<JPL-Justin> so basically this says that
<JPL-Justin> a function whose type is completely determined
<JPL-Justin> must have inputs whose types are completely determined?
<JPL-Justin> hmmmmm
<Smerdyakov> It says that all quantifiers on types must "come first," before regular types.
<Smerdyakov> forall 'a ('a -> 'a) is allowed.
<Smerdyakov> (forall 'a ('a)) -> (forall 'a ('a)) is not.
<JPL-Justin> what is the definition of forall?
<Smerdyakov> Same as in predicate logic, ranging over types
<JPL-Justin> hmmm
<Smerdyakov> That is not OCaml syntax. It's just meant to indicate the idea.
<JPL-Justin> ohhhhhhhh
<JPL-Justin> okay I was like...
<JPL-Justin> I didn't learn about that ocaml operator
<JPL-Justin> :)
<mrsolo> laziness i say
<JPL-Justin> okay I think I see what you're saying
<JPL-Justin> mrsolo: dude
<mrsolo> i don't think it is that hard to learn
<mrsolo> about the type system.. not about you
<JPL-Justin> anyways so what I wrote could be done, but they just don't do it right
<JPL-Justin> eh
<JPL-Justin> I think it's more likely an efficiency thing
<JPL-Justin> it could possibly be really hard to detect this
<JPL-Justin> maybe they should make a special case... hmmm
<Smerdyakov> JPL-Justin, I think it makes type inference undecidable
<JPL-Justin> well
<Smerdyakov> (To allow general polymorphism)
<JPL-Justin> I think in general yess
<JPL-Justin> right
<JPL-Justin> but...
<JPL-Justin> I do not think they allow
<JPL-Justin> all possible cases that are decidable
<JPL-Justin> thus it can be improved :)
<JPL-Justin> but yes I agree with you
<Smerdyakov> I wish you would avoid breaking sentences into multiple lines like that.
<Smerdyakov> You just waste vertical space in the channel log.
<JPL-Justin> Smerdyakov: Okay I will avoid it. Sure. I guess it's a habit from IM. I apologize.
<Maddas> And it makes it a bit harder to follow without interruptions
<Smerdyakov> And there is no sensible definition of what "cases that are decidable" means.
<JPL-Justin> Smerdyakov: a quick script can wrap my messages :)
<JPL-Justin> Smerdyakov: okay, however I think they could still decide the case I showed above, for reasons which are obvious
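For the record, OCaml can typecheck this particular example if the polymorphism is made explicit by wrapping f in a record with a polymorphic field; a sketch of that standard workaround follows, with the record type and names invented here:

  type num = INT of int | FLOAT of float

  (* A record field may be annotated as polymorphic ('a. ...) even
     though an ordinary function argument may not be. *)
  type cmp = { f : 'a. 'a -> 'a -> bool }

  let myfun c x y =
    match (x, y) with
    | (INT i1, INT i2) -> c.f i1 i2
    | (FLOAT f1, FLOAT f2) -> c.f f1 f2
    | _ -> raise (Failure "bad type")

  (* myfun { f = (<) } (INT 1) (INT 2)  evaluates to  true *)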
hf_ has quit [Remote closed the connection]
hf has joined #ocaml
<mrsolo> anyway, quirkiness aside.. does ocaml actually let people develop software 'faster'?
<mrsolo> let say no external modules are needed
<Smerdyakov> mrsolo, yes
<mrsolo> how much faster? some of those characteristics are beginning to creep into other languages (ruby..)
<mrsolo> and of coures python
<Maddas> Many languages have many things that O'Caml has, it's not about one thing in particular
<Smerdyakov> Depends how much faster than what.
<Smerdyakov> Ruby and Python are both dynamically typed, so they are nowhere near the main development benefits of ML.
<JPL-Justin> Personally I can see how ocaml would let people that were very good at it develop certain kinds of software faster
<Maddas> Why only certain kinds and why only people that are very good at it?
<JPL-Justin> The main reason I don't like ocaml is that... it doesn't seem very good for highly numerical stuff :(
<Maddas> (And faster than in what?)
<JPL-Justin> well I should say it differently
<JPL-Justin> the main thing I don't like about ocaml. I'm a fanatic but
<JPL-Justin> that is unforgivable to a numerical guy like me... ahh if only I could have the power of ocaml with the numerical stuff of fortran etc... tightly integrated
<mrsolo> benefits aside, software does get developed under python fast compared to say java/c# and perl
<JPL-Justin> Only certain kinds of software because the libraries aren't there.
<JPL-Justin> Try writing photoshop in Ocaml and tell me how far you get :)
<Maddas> mrsolo: Yes, but Java and C# are not exactly known to let you write software very efficiently
<JPL-Justin> You can do something like that decently (though not as fast as C++) in Java or heck even Delphi
<Maddas> JPL-Justin: Uh, only certain kinds of software *need* the speed
<JPL-Justin> I know that
<JPL-Justin> I'm not a speed fanatic
<JPL-Justin> I prefer elegance to speed for everything
<mrsolo> certainly.. so how much faster is ocaml in development compares to python then?
<JPL-Justin> except tight loops
<Smerdyakov> mrsolo, try and see
<Maddas> mrsolo: Certainly one cannot make general statements like that and expect them to be precise.
<JPL-Justin> however anyone doing heavy numerics is probably doing it VERY heavily, in that no amount of speed increase is too much
<shrimpx> mrsolo: ocaml development is about 3.8 times faster
<JPL-Justin> you can just make your results more accurate by bogging it down more, it's rather arbitrary perfectionism
<JPL-Justin> shrimpx: is that metric time or english?
<shrimpx> haha
<JPL-Justin> :)
<mrsolo> ballpark is good.. i can say 'oh i can develop the same feature under python 2-3x faster than under java'
<JPL-Justin> mrsolo: you ever use jython?
<Maddas> That certainly depends on what you are developing.
<mrsolo> JPL-Justin: nope
<JPL-Justin> Maddas: couldn't agree more
<JPL-Justin> I want to find out what jython is like. I will just have to try it
<mrsolo> discounting all library use.. just on my own application logic
<Maddas> I wouldn't know, I rarely do anything without using any libraries.
<Maddas> (Anything of significant size)
<JPL-Justin> yes well
<JPL-Justin> almost all of what I do involves libraries heavily
<JPL-Justin> because it's not so algorithmically intensive as
<JPL-Justin> specification intensive
<JPL-Justin> it's a little annoying but when you are writing programs to be used by hundreds of thousands of end users
<JPL-Justin> that's what ends up taking up the bulk of it :(
<JPL-Justin> I wish I didn't have to do any GUI stuff etc
<mrsolo> ah i write customized software.. so i don't have that boredom
<JPL-Justin> great!
<JPL-Justin> well... I'm not exactly bored :)
<mrsolo> i have huge pressure on rapid development though
<JPL-Justin> it's exciting to see how my software is used
<JPL-Justin> and I'm going to get a few papers out of some of it
<JPL-Justin> for instance the mosaic warping algorithm we used
<JPL-Justin> will be good for a paper
<JPL-Justin> well for us it's basically... we'll be on mars in 3 years
<JPL-Justin> it has to be done then
<JPL-Justin> good luck
<JPL-Justin> and then they cut the funding :(
<mrsolo> oh i didn't know what JPL stand for..hah
<JPL-Justin> yeah
<JPL-Justin> I'm just an intern though
<JPL-Justin> so don't judge JPL by me :)
<JPL-Justin> well... I'm a senior now this is my 4th year
<JPL-Justin> and I"m full time at th emoment, on MER
<JPL-Justin> you may have seen some of my work, I'm one of the Maestro guys
<mrsolo> hmm don't know what maestro is...
<JPL-Justin> google it
<JPL-Justin> maestro nasa
<JPL-Justin> it's pretty cool
<JPL-Justin> you should check it out
<JPL-Justin> you can use the same program the scientists use to analyse data from mars
<JPL-Justin> and plan each day
<JPL-Justin> I guess the reason I really like OCAML is that it seems to me that it represents what computer languages will be like in some sense in 10 years
<mrsolo> what kinda of data?
<JPL-Justin> I mean, F# is bringing .NET on it
<mrsolo> how is it analysed?
<JPL-Justin> mostly images, but also spectral data
<JPL-Justin> well you have to analyse it yourself, however it aids you by allowing you to filter images
<JPL-Justin> and change how it does color composites
<JPL-Justin> and it stitches images together into mosaics
<mrsolo> ah
<JPL-Justin> and it coregisters data from more than one observation
<JPL-Justin> blah blah blah it gives you navigational context too etc...
<mrsolo> open source?
<JPL-Justin> it also comes with The Conductor, which walks you through each data set, it's pretty cool
<JPL-Justin> not yet
<JPL-Justin> soooooon :)
<JPL-Justin> just be happy NASA let us release it at all
<JPL-Justin> it has some really awesome stuff under the hood, like the dependency based simulation engine
<mrsolo> i don't see major push in F# though
<JPL-Justin> allows simulation to be computed on the fly much faster
<mrsolo> JPL-Justin: simullation engine?
<JPL-Justin> eh
<mrsolo> nice
<JPL-Justin> simulating resource requirements etc
<mrsolo> nice
<JPL-Justin> nothing like amazing in the sense like "this is realistic"
<JPL-Justin> more like "this is fantastically responsive to real time planning changes OMG"
<mrsolo> so this software deal manipulates how much data at once?
<JPL-Justin> uh
<JPL-Justin> here...
<JPL-Justin> well
<JPL-Justin> it stores several gigs in ram
<JPL-Justin> on your computer a little less :)
<JPL-Justin> the total amount of data that it's used on is something like 500 gigs
<JPL-Justin> but not in ram simultaneously
reltuk has joined #ocaml
<JPL-Justin> goodnight everyone
<mrsolo> nite
<JPL-Justin> it's very nice to talk to all of you, even if I argue a lot :)
<JPL-Justin> and sorry for all of the vertical spaces. I will work on that.
JPL-Justin is now known as JPL-Justin-away
<reltuk> vertical spaces?
m[kel[ has joined #ocaml
smklsmkl has quit [Read error: 110 (Connection timed out)]
<JPL-Justin-away> reltuk: apparently I use too many lines when I type
<JPL-Justin-away> reltuk: you missed the worst of it :)
<reltuk> haha, good stuff
<JPL-Justin-away> yeah
<JPL-Justin-away> so what do you do with ocaml? I'm about to leave but not yet :)
smklsmkl has joined #ocaml
m[kel[ has quit [Read error: 110 (Connection timed out)]
rox has quit [Operation timed out]
rox has joined #ocaml
JPL-Justin-away has quit [Read error: 60 (Operation timed out)]
JPL-Justin-away has joined #ocaml
Vekza has quit ["Leaving"]
shawn_ has joined #ocaml
bk_ has quit ["I'll be back"]
reltuk has quit [Operation timed out]
phantasma has joined #ocaml
<phantasma> hello
SpookRijder has joined #ocaml
phantasma has quit [Client Quit]
unwarrented has joined #ocaml
<unwarrented> hey everyone
como has joined #ocaml
<unwarrented> hey como
<unwarrented> what up dog
<unwarrented> how do people choose nicks is it random
<unwarrented> or maybe they write a program to pick the silliest word
<unwarrented> let nick = List.maximum (silly_funct) nameList;;
<unwarrented> hmm...
unwarrented is now known as notslinted
<notslinted> where is everyone
notslinted has left #ocaml []
Alain has joined #ocaml
Alain is now known as Alain_
__DL__ has joined #ocaml
__DL__ has quit [Client Quit]
mrsolo has quit ["Leaving"]
Alain_ has left #ocaml []
Shammah has joined #ocaml
mattam has quit [Read error: 60 (Operation timed out)]
mattam has joined #ocaml
m[kel[ has joined #ocaml
smklsmkl has quit [Read error: 110 (Connection timed out)]
bk_ has joined #ocaml
noss has joined #ocaml
pattern has quit [Read error: 60 (Operation timed out)]
pattern has joined #ocaml
pattern has quit [Excess Flood]
pattern has joined #ocaml
bk_ has quit ["Terminated with extreme prejudice - dircproxy 1.0.5"]
bk_ has joined #ocaml
pattern has quit [Read error: 110 (Connection timed out)]
The-Fixer has quit ["Goodbye"]
__DL__ has joined #ocaml
Lemmih has joined #ocaml
__DL__ has quit [Client Quit]
The-Fixer has joined #ocaml
Shammah has quit ["ChatZilla 0.9.35 [Mozilla rv:1.5/20030925]"]
_nn_ has joined #ocaml
_nn_ has quit ["Trillian (http://www.ceruleanstudios.com)"]
SpookRijder has quit ["Client Exiting"]
pattern has joined #ocaml
housetier has joined #ocaml
skylan has quit [Connection reset by peer]
cjohnson has quit ["Drawn beyond the lines of reason"]
smklsmkl has joined #ocaml
m[kel[ has quit [Read error: 110 (Connection timed out)]
mads- has joined #ocaml
reltuk has joined #ocaml
m[kel[ has joined #ocaml
The-Fixer has quit ["Goodbye"]
smklsmkl has quit [Read error: 110 (Connection timed out)]
The-Fixer has joined #ocaml
vegai has joined #ocaml
reltuk has left #ocaml []
housetier has left #ocaml []
bk_ has quit ["I'll be back"]
bk_ has joined #ocaml
wazze has joined #ocaml
como has quit ["Leaving... mIRC sucks - switch to X-Chat"]
Lemmih has quit ["Download Gaim: http://gaim.sourceforge.net/"]
maihem has joined #ocaml
bk_ has quit ["I'll be back"]
maihem has quit [Read error: 104 (Connection reset by peer)]
maihem has joined #ocaml
maihem has quit [Read error: 104 (Connection reset by peer)]
cjohnson has joined #ocaml
malte has quit [Remote closed the connection]
mads- has quit ["( www.nnscript.de :: NoNameScript 3.81 :: www.XLhost.de )"]
bk_ has joined #ocaml
gl has quit ["changing servers"]
gl has joined #ocaml
JPL-Justin has joined #ocaml
<JPL-Justin> Hey guys
reltuk has joined #ocaml
<reltuk> why does: let reverse = List.fold_left (fun x y -> y :: x) [];; end up with '_a instead of 'a?
<JPL-Justin> hmm
<JPL-Justin> do you know what '_a stands for?
<reltuk> yeah...the type gets constrained after first use
<reltuk> it's not really polymorphic...it normally shows up when you're using references and crap
<JPL-Justin> :(
<JPL-Justin> that sucks
<JPL-Justin> so you're wondering why this isn't fully polymorphic?
<JPL-Justin> hmmm
<reltuk> so reverse [1;2;3] ;; reverse ['a';'b';'c'] <--- this is a typing error
<JPL-Justin> this to me is one of the problems with ocaml, the type system is not always obvious
<reltuk> it's not non-obvious...it's obviously wrong
<JPL-Justin> I had a similar problem with polymorphic arguments...
<JPL-Justin> like passing (>) or (<=) to a function losing its polymorphism
<JPL-Justin> which sucked
<JPL-Justin> hmmm
<Banana> JPL-Justin: the problem here comes from partial application.
<Banana> if you want polymorphism, just make the last argument explicit.
<Banana> let reverse l = List.fold_left ..... [] l
<Banana> and there you go.
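Side by side, the two definitions Banana is contrasting, with roughly the types the toplevel reports:

  let reverse = List.fold_left (fun x y -> y :: x) []
  (* val reverse : '_a list -> '_a list -- weakly polymorphic;
     frozen at its first use *)

  let reverse' l = List.fold_left (fun x y -> y :: x) [] l
  (* val reverse' : 'a list -> 'a list -- fully polymorphic *)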
hf has quit [Nick collision from services.]
hf_ has joined #ocaml
<reltuk> interesting...
<JPL-Justin> oh wow
<JPL-Justin> that's pretty cool
<reltuk> and why does partial application constrain types sometimes?
<JPL-Justin> wow Mr. Banana, you sure know your stuff
<JPL-Justin> Because you have to build closures?
<JPL-Justin> I was just about to ask where to find it
<JPL-Justin> thanks
<Banana> I cannot say tha better :)
<Banana> that.
<reltuk> hmm...doesn't seem like that should be necessary...curried polymorphic functions seem useful
<Banana> the function is curried.
<Banana> there is semantic equivalence between the first reverse and the second.
<JPL-Justin> right
<JPL-Justin> which is why
<JPL-Justin> it's weird that the type is different
<JPL-Justin> :(
<reltuk> but there's not, because the types are different
<Banana> it's a problem of typing algorithm
<Banana> there is a simple way to explain that.
* JPL-Justin listens intently
<Banana> when you type an expression, you apply a typing rule according to the "shape" (i don't wanna say type) of the expression.
<Banana> ex : if you want to type a let x = ... in ...
<Banana> then you apply the let typing rule.
<reltuk> yeah, I know how hinley-miller typing works...
<Riastradh> Hindley-Milner.
<Banana> so when you write let reverse = List.fold_left (f...) []
<reltuk> yes, thank you for the n's
<reltuk> ns even
<Banana> when you use the application rule.
<Banana> since it's an application.
<Banana> you apply List.fold_left to 2 arguments = partial application.
<Banana> but if you write let f l = List.fold_left (...) [] l
<Banana> then you apply the lambda typing rule.
<Banana> (you type an abstraction or function definition).
<Banana> so even if it's the same term "in fine" (the generated code is the same :) the typechecker uses 2 different rules.
<Banana> and the app rule is more restrictive.
<Banana> (it doesn't allow the argument to be generalised).
<Banana> voilà.
<reltuk> but a true curried function would return another function after those two applications, and it would have a fully polymorphic type
* reltuk wonders how haskell does it
* JPL-Justin wants closures in fortran
<Banana> JPL-Justin: don't you have pointers to function in fortran ?
<Banana> (i don't know fortran).
<JPL-Justin> uh
<JPL-Justin> not F77
<JPL-Justin> in fact
<JPL-Justin> there are no pointers AT ALL in F77
<JPL-Justin> that's why it's so damn fast
<JPL-Justin> yes I still use F77 :(
<reltuk> F77 doesn't have recursion, does it?
<Banana> well, you have while loops. that's equivalent.
<Banana> (less elegant but...)
<Banana> you have while loops no ????
<Banana> :D
<reltuk> jne is equivalent :-p
<JPL-Justin> you have gotos
<Banana> yeah...
<JPL-Justin> F77 *HAS* recursion
<JPL-Justin> but it's... funky
<JPL-Justin> you can't do direct recursion
<JPL-Justin> like you can do
<JPL-Justin> a calls b
<JPL-Justin> and b calls a
<JPL-Justin> but not a calls a
<JPL-Justin> that's how I implemented a program that drew a koch snowflake approximation in raw postscript
<Banana> yeah!!!!
<JPL-Justin> :-D
<JPL-Justin> it was crazy
<JPL-Justin> anyways...
<JPL-Justin> F77 doesn't really have anything
<JPL-Justin> except really really really fast math
<JPL-Justin> and nice arrays
<JPL-Justin> some loops
<JPL-Justin> gotos :(
<JPL-Justin> and almost no abstraction whatsoever
<JPL-Justin> oh and get this
<JPL-Justin> you can put whitespace INSIDE VARIABLE NAMES
<JPL-Justin> like you can write "dogma" as "dog ma"
<JPL-Justin> :(
<JPL-Justin> that's just silly! It's like... wtf
<Banana> i knew about white spaces.
<Riastradh> Numerical constants are mutable.
<Riastradh> _That's_ like...WTF!!?!
<JPL-Justin> :(
<JPL-Justin> wait
<JPL-Justin> are you sure, if you use the parameter keyword?
<Riastradh> Or rather, numbers are mutable.
<JPL-Justin> that's what we use, I'm pretty sure it's not mutable because array lengths are based on it
<Riastradh> I don't remember the specifics.
<JPL-Justin> yeah that sucks
<JPL-Justin> Oh I remember that, it's because constants are passed by reference
<Banana> there is a story about a guy who wrote for i = 1; and instead of a syntax error gets 1 into the variable "for i"
<JPL-Justin> :)
<Banana> (or something similar which fits with F77 syntax).
<JPL-Justin> hehehehhe
<JPL-Justin> oh it does
<JPL-Justin> :)
<JPL-Justin> I want to escape... to a happier place
<JPL-Justin> if OCaml was kickass for numerical computing, I'd use it
<JPL-Justin> however it's not :(
<JPL-Justin> that's one of the only things I really dislike about OCaml is
<JPL-Justin> that it's hard to do anything heavily numerical in a sane/efficient way
<Banana> hu hu.
<JPL-Justin> what is that sound?
<JPL-Justin> although I suppose you could just use external C functions for everything...
<Banana> do you read the caml list ?
<JPL-Justin> Hmm... I'd like to see a preprocessor that lets you put inline C code in OCAML
<JPL-Justin> no why
<Banana> some guy posted exactly the same question as yours.
<JPL-Justin> what question?
<Banana> about '_a and partial application.
<JPL-Justin> oooooh NML
<JPL-Justin> I never asked that question man
<Banana> who did ?
<JPL-Justin> that was the other guy :)
<JPL-Justin> reltuk
<Banana> oki.
<reltuk> that was me
<JPL-Justin> :)
<JPL-Justin> but I didn't know the answer so now I do :)
<reltuk> it was someone more knowledgeable than me
<reltuk> that I asked about it...
<Banana> in fine you got the answer faster ;)
<Banana> but he shouldn't ask like this.
bk_ has quit ["I'll be back"]
<reltuk> *shrug*, I didn't ask him to or anything
<Banana> i'm not blaming you.
<Banana> i see that he was really bothered by the question.
<Banana> but saying "This, to me, is very wrong behavior."
<Banana> (for something that is in the faq...)
<Banana> oh well, people on the ML are nice anyway.
<reltuk> it seems wrong...over in #haskell he says it's a restriction because ocaml isn't referentially transparent, so the type system is constrained so only values can be truly polymorphic...
<reltuk> and application isn't a value, even if it returns a function value
<Banana> yes that's true.
<JPL-Justin> i'm confused Banana were you talking about me, saying I shoudn't ask something?
<JPL-Justin> or are you talking about someone in another channel?
<Banana> JPL-Justin: no.
<JPL-Justin> ok
<Banana> speaking about someone who posted on the Ocaml mailing list.
<Banana> nothing to do with you sorry.
<Banana> reltuk: function application cannot be referentially transparent in Ocaml.
<Banana> because you have side effects.
<Banana> referentially.
<reltuk> right...not even type-referentially transparent, since something like let l = ref [] will change its type when you use l
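A toplevel session makes that concrete (output approximate):

  # let l = ref [];;
  val l : '_a list ref = {contents = []}
  # l := [1];;
  - : unit = ()
  # l;;
  - : int list ref = {contents = [1]}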
<Banana> I wouldn't say change...
<Banana> but that's right.
<reltuk> well...become more precise
<reltuk> additional constraints
<Banana> yes specialise, or instantiate...
<Banana> but i'm just picking over details.
<JPL-Justin> math/computer science is all about pedantic detail picking
<JPL-Justin> :)
<Banana> :)
<gl> cs is bullshit
<gl> that was my day-thought
<gl> (nvm, i have to code in C, these days)
<JPL-Justin> ha
<JPL-Justin> at least you get to do C
<Banana> :)
<JPL-Justin> much of what I do is F77
<JPL-Justin> :(
<JPL-Justin> oh well I'm getting away from that... mostly Java now, which isn't so bad
<gl> F77 !!
<gl> :)
<JPL-Justin> yeah
<gl> let me guess, you're physicist ?
<JPL-Justin> yeah
<JPL-Justin> well I'm a PIT
<JPL-Justin> Physicist In Training
<JPL-Justin> in a semester I'll have my BS
<JPL-Justin> in Applied Physics from Cornell
<JPL-Justin> and then I'll get a masters in CS, and then probably a PhD in physics
<gl> what does BS mean ? (I'm really not aware of all these acronyms)
<Riastradh> Bachelor of Science[s?].
<Banana> way to go.
<gl> ok, I don't know the french equivalent to these diploma
<JPL-Justin> yeah
<JPL-Justin> well
<JPL-Justin> it's a 4 year degree
<gl> maybe you're 12 years old. :/
<JPL-Justin> hey now
<Banana> hum...
<JPL-Justin> I'm 22 thank you very much
<JPL-Justin> well don't feel bad I don't know anything about french degrees
<gl> _I_ feel bad :)
<Banana> gl: i know you had a hard day with all those #define but...
<mattam> gl: we all have a hard time with these different degrees
<gl> right
<mattam> it's just confusing
<JPL-Justin> yeah honestly
<JPL-Justin> shouldn't there be like a cheat sheet somewhere that shows how different countries do it?
<JPL-Justin> 4 years of university, and not a very easy one, whatever that would be in france
<Riastradh> Why don't you make one?
<JPL-Justin> Riastradh: Cause I don't know anything other than the US part
<JPL-Justin> I'll make the US part
<JPL-Justin> :)
<Riastradh> For Russia, graduating from college gives a diplom; after grad school, there's the candidate of science, which is harder than a masters but not as hard as a PhD; after that is a doctor of science, which tends to be even more rigorous than a PhD.
<gl> well, in france it's like this : 18 years old = baccalaureat (to access higher levels as university, engineer schools, etc) named "bac"; bac+3 : grade of licence, bac+5 : master, bac+8 : PhD
<gl> (for us, "college" is 10-14 years old)
<JPL-Justin> hmmm
<JPL-Justin> cool gl :) thanks for filling us in
<JPL-Justin> hehehe
<mattam> there are so nice things on the net: http://www.nsf.gov/sbe/srs/seind02/c7/fig07-22.htm
<mattam> especially the 'Houses can be haunted' rise :)
reltuk has left #ocaml []
housetier has joined #ocaml
avlondono has joined #ocaml
Godeke has joined #ocaml
The-Fixer has quit []
The-Fixer has joined #ocaml
Godeke has quit ["Leaving"]
Godeke has joined #ocaml
smklsmkl has joined #ocaml
reltuk has joined #ocaml
m[kel[ has quit [Read error: 110 (Connection timed out)]
otsuka has joined #ocaml
skylan has joined #ocaml
m[kel[ has joined #ocaml
smklsmkl has quit [Read error: 110 (Connection timed out)]
The-Fixer has quit []
The-Fixer has joined #ocaml
mfurr has joined #ocaml
The-Fixer has quit []
The-Fixer has joined #ocaml
The-Fixer has quit [Client Quit]
magnus-- has joined #ocaml
JPL-Justin has quit [Read error: 104 (Connection reset by peer)]
housetier has quit [Read error: 104 (Connection reset by peer)]
housetier has joined #ocaml
housetier has quit [Read error: 104 (Connection reset by peer)]