boru` has joined #forth
boru has quit [Disconnected by services]
boru` is now known as boru
<siraben> alexshpilkin: Right, you can consider EXECUTE like that. Its type would be unclear to me due to variable arity.
<siraben> Yes, I'm also a big fan of Haskell. I also like Coq, although it does take some getting used to.
tabemann has joined #forth
jsoft has joined #forth
* tabemann hates how dead this channel's become
proteus-guy has joined #forth
proteus-guy has quit [Ping timeout: 260 seconds]
proteus-guy has joined #forth
proteus-guy has quit [Ping timeout: 260 seconds]
proteus-guy has joined #forth
WickedShell has quit [Remote host closed the connection]
<`presiden> siraben, "I also like Coq" sounds lewd when spoken out loud
proteus-guy has quit [Ping timeout: 260 seconds]
<siraben> `presiden: *sigh*, if I had a dollar for every time someone made a Coq joke
<siraben> There are actually good reasons for the name: it's named after Thierry Coquand, plus the Calculus of Constructions, and "coq" means rooster in French; INRIA is based in France.
<`presiden> siraben, :P
jsoft has quit [Ping timeout: 256 seconds]
proteus-guy has joined #forth
proteus-guy has quit [Ping timeout: 244 seconds]
gravicappa has joined #forth
jsoft has joined #forth
_whitelogger has joined #forth
shynoob has joined #forth
<remexre> siraben: tried idris or agda? I like coq for proofs, but can't stand it for program development; idris is a lot more practical than those
shynoob has quit [Remote host closed the connection]
shynoob has joined #forth
<shynoob> is the yahoo group for win32forth still active?
pareidolia has quit [Ping timeout: 244 seconds]
elioat has quit [Quit: The Lounge - https://thelounge.chat]
elioat has joined #forth
<siraben> remexre: I tried Agda (plfa.github.io); it seems great, but proving isn't as ergonomic as in Coq, and it uses Unicode way too much.
<siraben> I've heard good things about Idris, might try it since it's possibly the most developed in terms of bringing dependent types to practical programming.
<remexre> yeah, the unicode irks me too
<remexre> I haven't done PLFA, but I've done a course on type theory in it, and I'm using it for my own research; I actually prefer typed holes to coq's prover, mostly 'cause after about the same amount of experience in coq, I still can't figure out how to phrase certain transformations in the tactic language
<remexre> (plus, Prop feels "dirty" to me, though I don't like proof irrelevance much as a whole)
<siraben> remexre: I see. I like the holes feature too, yes. What sort of research are you doing?
<remexre> siraben: kinda same stuff as idris; building a PL with dependent types, trying to make it both human-usable for general programs, while also making it easy to do machine-aided proofs
<siraben> remexre: What do you make of proof irrelevance? IIRC it's just that two proofs can be considered "equivalent" (in a homotopy type theory sense) if they witness the same proposition, right?
<remexre> I'm basing it on cubical type theory, unlike idris, though
<remexre> yeah
<siraben> Ah, as I understand it cubical type theory lets you do homotopy type theory with computational content (without the nonconstructive equivalence axiom)?
<remexre> univalence axiom, yeah
<siraben> Yes.
<siraben> I saw this talk a few months ago; https://www.youtube.com/watch?v=AZ8wMIar-_c
<siraben> Interesting that intensional type theory doesn't allow us to prove that ReaderT Maybe is a functor, or that streams are functors.
<siraben> When I started trying to do things with streams in Coq, it was very clunky and I never properly learned the more advanced techniques like setoids.
<remexre> that's just function extensionality, right? cubical type theory gives you function extensionality, as a provable theorem
<siraben> Oh, that's amazing. How?
<remexre> \(f g : A -> B)(p : forall (x : A), f x = g x) -> \(i : Dim)(x : A) -> p x i
<siraben> You'll have to explain what happens after the second arrow, hm.
<remexre> er, did the talk cover paths and intervals/dims
<siraben> What does it mean to produce \(i : Dim)(x : A) -> p x i
<siraben> Yeah, I think it did, I haven't studied cubical type theory, though.
<siraben> Do you need to know homotopy theory for HoTT?
<siraben> I have yet to take an undergraduate course on topology.
<remexre> not really; I don't know the "actual math" very well, heh
<remexre> okay, so the values of (A = B), where A and B are of type T, are functions from a Dim (0 to 1 interval) to T
<siraben> [0,1] as in a real interval?
<remexre> such that (for p : A = B), (p 0) evaluates to A, (p 1) evaluates to B
<remexre> yeah
<siraben> Ok, so that's the "homotopy" part.
<siraben> But T is discrete, no?
<siraben> I guess, homotopy equivalent to [0,1] for all that matters.
<remexre> yeah, in the implementation you normally just carry around little trees of expressions for the [0, 1] interval; and yeah it's up to homotopy equivalence
<siraben> So what happens when I do something like p 0.5?
<siraben> Is it defined there?
<remexre> yes in a math sense; actual on-a-computer implementations won't let you write 0.5 though
<siraben> Heh, ok.
<siraben> What's (i : Dim) then?
<siraben> The type Dim, I mean (darn trackpad)
<remexre> normally you have 0, 1, max(_, _), min(_, _), and (1 - _)
<remexre> that's the [0, 1] type
<siraben> I have those for what?
<remexre> I think Agda writes that as I
<remexre> er you have those for the Dim type
jsoft has quit [Ping timeout: 240 seconds]
<siraben> Is it alright to think of the [0, 1] interval as a restriction of \mathbb{R}?
<siraben> It's isomorphic to R
<remexre> yes, if you keep in mind that because of homotopy equivalence, points other than the endpoints can move around
<remexre> like there's nothing guaranteeing what's at 0.5
<siraben> Of course.
<siraben> Ok, so I have this proposition (p : forall (x : A), f x = g x), then I pass x to it, then I pass the witness i for the interval Dim to complete the proof for extensionality?
<remexre> yeah
<siraben> Ah, and it typechecks because (f x = g x) : Dim -> T, and we pass Dim to it, nice.
<remexre> and since (\(i : Dim)(x : A) -> p x i) 0 = (\x -> (_ : f x = g x) 0) = (\x -> f x) = f
<remexre> and ditto for g
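For reference, here is the funext term from above together with its endpoint computations, written out in the chat's own notation (this only restates the two messages above; it is not any particular proof assistant's syntax):

    funext f g p     = \(i : Dim)(x : A) -> p x i
    (funext f g p) 0 = \(x : A) -> p x 0 = \(x : A) -> f x = f
    (funext f g p) 1 = \(x : A) -> p x 1 = \(x : A) -> g x = g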
<siraben> Wow.
<remexre> yeah, it's super nice
<siraben> Ok, why does the talk also mention infinite objects such as streams, and how do cubical types avoid the problem of custom equivalence relations (and nasty setoid hacks)?
<siraben> Wait, extensionality would solve equality of streams easily, since that becomes equality of functions (in the case that a stream is a function from the natural numbers to values of a type)
<remexre> yeah
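A minimal Haskell sketch of the view just described, streams as functions from the naturals, to make the "stream equality is just function equality" point concrete (the names here are invented for illustration):

    import Numeric.Natural (Natural)

    -- A stream is just a function from positions to elements.
    newtype Stream a = Stream { index :: Natural -> a }

    instance Functor Stream where
      fmap f (Stream s) = Stream (f . s)

    -- Two streams are equal when they agree at every position, i.e.
    -- forall n, index s n = index t n -- a pointwise function equality,
    -- which is exactly where extensionality comes in.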
<remexre> if stream is defined as a datatype, that's probably higher inductive types; basically, you can add extra equalities to the structure of the type
<siraben> Then I can use a proof of equality to substitute one stream for another, nice.
<remexre> and those are in hott without cubical type theory
<siraben> yeah, what are higher inductive types?
<remexre> I can specify extra equalities, e.g.
<remexre> data WeirdList a where
<remexre> Nil : WeirdList a
<remexre> ConsLeft : a -> WeirdList a -> WeirdList a
<remexre> ConsRight : WeirdList a -> a -> WeirdList a
<remexre> WeirdEq : (ConsLeft x Nil) = (ConsRight Nil x)
<siraben> How does WeirdEq work in this GADT notation?
<remexre> in a cubical system, it's like you added a constructor WeirdEq : (i : Dim) -> WeirdList a
<remexre> with the extra rules, WeirdEq 0 = (ConsLeft x Nil), WeirdEq 1 = (ConsRight Nil x)
<siraben> Oh, I see.
<siraben> I need to play around with these systems to really understand them. What did you use to learn cubical type theory?
<remexre> I took a class on it, taught in cubical agda
<siraben> Nice, that certainly helps.
<remexre> it was last semester so the second half was online (covid); lemme find the lectures, they're on youtube
<siraben> I don't think I have a class at my university on cubical type theory, heh.
<remexre> yeah, this was a special one; we've got a class that's just "a prof wanted to teach a thing"
<siraben> Right, like seminars.
<remexre> yeah
<siraben> There's a category theory one in the fall, but alas I don't have the prerequisites for it.
<siraben> Nice, I like these visualizations already!
<remexre> you might be able to convince the prof to let you take it anyway; we de facto don't enforce prereqs 'cause students will just ask the prof anyway
<siraben> If/when cubical type theory goes more mainstream, imagine all the bad analogies à la what we see with monads right now.
<remexre> lol, I've just been waiting for little cartoon animals walking along paths
<siraben> Yeah, or I could sit in, heh. I did that for a logic class last year.
<siraben> It was classical logic. Very relevant for PL people because logicians also care about syntax, semantics, normalization, equivalence, and so on, just as we do with programs.
<siraben> remexre: Are you a graduate student?
<remexre> yeah, I've been meaning to take ours, but the philosophy department sends a lot of their students to it too, so you have to be super-quick to register for it
<remexre> I will be in the fall; it's at the same institution I did my undergrad at, I took a gap year where I still did classes
<remexre> so I didn't really stop being a student, I just picked up a job too, heh
<siraben> Ah, nice.
<siraben> I think I saw you on #forth some time ago when you mentioned that functions are exponentials in a cartesian closed category, heh.
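The "functions are exponentials in a cartesian closed category" remark, in its familiar Haskell guise: the isomorphism Hom(a × b, c) ≅ Hom(a, c^b) is just currying.

    -- curry and uncurry witness the exponential-object adjunction.
    curry'   :: ((a, b) -> c) -> (a -> (b -> c))
    curry' f a b = f (a, b)

    uncurry' :: (a -> (b -> c)) -> ((a, b) -> c)
    uncurry' g (a, b) = g a b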
<remexre> oh, did you recommend the squiggol book?
<remexre> I think I remember this too
<siraben> Yes. I've made some more progress since, actually. It's great.
<remexre> algebra of programming, that is
<remexre> nice
<siraben> I'm up to the last two chapters, took a little break from it and hopefully my professor and I meet regularly again to discuss it.
<siraben> There's also another book that I came across; a professor in Portugal wrote it initially as notes for people to understand AoP, but it covers more, which is nice.
<siraben> The explanation of F-algebras and catamorphisms is so natural, probably my favorite so far.
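The F-algebra/catamorphism idea mentioned above, in its textbook Haskell form (standard definitions, not code taken from AoP itself):

    newtype Fix f = In { out :: f (Fix f) }

    type Algebra f a = f a -> a

    -- A catamorphism folds a structure by replacing each layer with the algebra.
    cata :: Functor f => Algebra f a -> Fix f -> a
    cata alg = alg . fmap (cata alg) . out

    -- Example: natural numbers as the fixed point of Maybe.
    type Nat = Fix Maybe

    toInt :: Nat -> Int
    toInt = cata (maybe 0 (+ 1))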
<remexre> I'll add that to my reading group's "books we should read list" too; we're still doing software foundations, just started the second book (will probably switch books at the end of it)
<siraben> Oh, that's great.
<siraben> I found the second book quite difficult, might need to revisit it again.
<siraben> In particular the custom syntax made it unclear how to proceed with a proof: do I unfold the definition, or perform inversion, etc.?
<remexre> yeah, that's something I really hate about coq
<remexre> at least in lisp, forth, etc.'s custom syntax, I can tell "what's the thing I need to search for to find the definition of this"
<siraben> Does Agda make this problem any better? They still have mixfix to obscure things.
<siraben> ^searching Agda is a nightmare sometimes
<remexre> eh, it's not much better, but I see much less overuse of weird syntax
<siraben> Right, you know exactly what to search for in lisp and forth.
<siraben> Haven't used Idris, but it doesn't do any of that unicode stuff right?
<remexre> I think it might support it, but the standard library is all ascii
<siraben> That's a relief.
<siraben> I recently wanted to follow along with a linear algebra course I took using Coq, but it rapidly became evident that pen-and-paper reasoning is faster and easier to check.
<siraben> Maybe Lean is designed better for the working mathematician?
<remexre> maybe, I haven't tried it; I kinda suspect formalization is gonna be slower than the "normal" sorts of handwaving one does no matter what
<remexre> though that's something I want to look at in my language; I'm hoping that the user can "sketch out a proof," and the machine can fill in the gaps
<siraben> Right. Relatedly, I've seen some layman's explanations of HoTT, and they seem to imply that it will allow mechanized reasoning in a way similar to how mathematicians draw analogies between fields; how accurate is that claim?
<siraben> That sounds useful.
<remexre> ideally I'd like to be able to take notes in formal notation while following along with a lecture in realtime, and in the background, the proving engine tries to work through it; then after, I help it with whatever it got stuck on
<remexre> it does make reasoning with isomorphic things a lot easier; I'm not sure how much that helps automated proving, though
<remexre> well, I guess it makes some things only as hard as automated program construction
<siraben> I have a feeling that Isabelle does this thing quite well. If you look in a book like Concrete Semantics, the proofs are more declarative than their equivalent Coq or Agda proofs.
<siraben> But also, Isabelle uses higher-order logic, so that may be one reason why they can use such powerful automation.
gravicappa has quit [Ping timeout: 256 seconds]
<siraben> See the proof for irrationality of square root of 2 on https://en.wikipedia.org/wiki/Isabelle_(proof_assistant) , for instance.
<remexre> huh, that is more readable, though I feel like the "hence"s help a lot
<remexre> (and agda has those, as the ≡-Reasoning, if I understand what they're doing correctly)
<siraben> There's the infamous sledgehammer tactic whereby Isabelle calls external theorem provers too.
<siraben> Right.
<siraben> Did the Agda parser seem fussy to you, by the way?
<siraben> It's very sensitive to whitespace.
<remexre> yeah, like tuples have to be (1 , 2) instead of (1, 2)
<siraben> < 1 , 2 > even, right?
<remexre> er yeah for real tuples, I usually use sigmas for everything, heh
<siraben> Are there a lot of people doing PL research or related topics at your university?
<siraben> You mentioned the reading group, so that's quite a few.
<remexre> a dozenish including faculty, I think
<remexre> er, maybe a few more, definitely less than 20 though
<remexre> the reading group is usually 5-10 people, depending on the book
<siraben> Ah, sounds good.
<siraben> remexre: On a more Forth-related note, you might be interested in slides from a talk I gave recently on an assembler for the EVM. https://github.com/ActorForth/evm-assembler/blob/master/docs/evm-assembler-talk.pdf
<remexre> that's ethereum vm? neat, will give it a look
<siraben> Yeah, I've been working with proteusguy on smart contracts on Ethereum and more recently, Bitcoin.
<siraben> There's been some academic activity on Ethereum smart contracts, their construction, verification, etc., which is nice given the infamous attacks caused by the inadequate languages people currently use to write them.
<siraben> I was interested in how Facebook's Libra would use linear types in their blockchain, but looks like it's stagnant for now because many companies pulled out.
<remexre> yeah, libra seemed kinda... evil
<siraben> Come to think of it, a lot of prominent Haskellers are working in the cryptocurrency space. Philip Wadler is working at IOHK on Plutus, which is based on System F omega.
<remexre> I haven't looked too much into any of the blockchain things other than noting "huh, program validation stuff from $dayjob could be useful"
<siraben> Yeah. I like the PL aspect of blockchain; I'm not enough of an expert on economics to comment on the viability of it replacing the current financial systems.
<remexre> yeah, I'm somewhat suspicious of cryptoanarchist stuff
<siraben> Have you heard that GHC merged linear types?
<siraben> Coming in 8.12.
<remexre> like yes governments do bad things, but much of existing financial regulation is not evil?
<remexre> yeah, saw that; haven't had a chance to play with them though
<siraben> Yeah, so there's that whole debate.
<siraben> GHC Haskell is a dialect of Haskell at this point, IMO. Almost every useful program I see uses language extensions in one way or another.
<remexre> I'm not really sure what I'd use them for in haskell, though, since I don't think the optimizer knows about them at all
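For reference, the surface syntax of the merged linear-types work (it ultimately shipped in GHC 9.0 rather than under the 8.12 version number): a linear arrow promises that the function consumes its argument exactly once.

    {-# LANGUAGE LinearTypes #-}

    -- The identity function with a linear arrow: the argument is used exactly once.
    linearId :: a %1 -> a
    linearId x = x

    -- Rejected by the typechecker, because x would be used twice:
    -- notLinear :: a %1 -> (a, a)
    -- notLinear x = (x, x)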
<remexre> heh, yeah, I saw a quote recently
<remexre> "In this talk I'll be using Haskell 98,
<remexre> i.e. Haskell with 98 language extensions enabled"
<siraben> Haha
<siraben> I think the main purpose was for resource-aware computation. Currently there are all sorts of solutions for handling resources, for instance ContT, or resource libraries, etc.
<remexre> oh, sure
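A small sketch of the ContT-for-resources idiom being referred to, assuming the transformers package (file names are placeholders): wrapping bracket in ContT lets acquisitions be sequenced in do-notation instead of nesting.

    import Control.Exception (bracket)
    import Control.Monad.IO.Class (liftIO)
    import Control.Monad.Trans.Cont (ContT (..), evalContT)
    import System.IO

    -- Each acquisition is a bracket, packaged as a continuation.
    withFileCont :: FilePath -> IOMode -> ContT r IO Handle
    withFileCont path mode = ContT (bracket (openFile path mode) hClose)

    copyFirstLine :: IO ()
    copyFirstLine = evalContT $ do
      inH  <- withFileCont "in.txt" ReadMode
      outH <- withFileCont "out.txt" WriteMode
      liftIO (hGetLine inH >>= hPutStrLn outH)
      -- both handles are released when the block exits, even on exceptions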
<remexre> reading the evm slides now; 24kb is a weird number...
<siraben> Haha, yeah. The exact number is 24576.
<siraben> Which is 6000 in base 16
<remexre> weird they wouldn't go to 32 or 64; do they use the length field for something else when it starts with 0b1110 or something (though that's terrible)?
shynoob has quit [Ping timeout: 245 seconds]
<siraben> Yeah, it's weird.
<siraben> It's certainly not final. Not sure how I feel about the call-depth limit of 16. But smart contracts should be fairly constrained.
<remexre> just wait until someone ports twitch plays pokemon to a smart contract
<siraben> I wanted to achieve a uniform and relatively cheap calling convention that separates the regular stack from the call stack, so that super efficient code can be written.
<remexre> yeah, makes sense
<siraben> Vyper and Solidity are truly messes of languages.
<siraben> Here's a great example: I wrote a small program that, when compiled, blew up to over 55kb! Then, after two compiler fixes, it went down to 1644 bytes.
<siraben> Any reasonable compiler would roll its loops, but Vyper seemingly unrolls them to the max.
<remexre> huh, is that to avoid having to analyze recursion?
<remexre> have you seen https://github.com/zetzit/zz ?
<remexre> C-like lang built for automatic reasoning
<siraben> Oh, I should say that it's not due to program loops. Rather, they decided to run some sort of check for every element in an array. That program has a 1000-element array, so the check is done for every element.
<siraben> And the check is unrolled
<siraben> Wow, haven't seen that before.
<siraben> Yeah, how do languages like that work? They don't implement full-blown deptypes and instead employ solvers to check properties statically.
<siraben> Symbolic execution?
<remexre> basically you make a VM, but instead of e.g. u8 memory[65536], you do Expr memory[65536]
<remexre> where Expr is an SMT expression for a u8 value
<remexre> then as you execute, you're building up bigger and bigger constraints
pareidolia has joined #forth
<remexre> then you can toss them in an SMT solver, and, if all goes well, it can tell you whether the program can complete or not
<siraben> The SMT expression constrains the values of the array, yes?
<remexre> yeah, like memory[0] might equal (if memory[1] > memory[2] then 5 else memory[1])
<siraben> Ah, then can the SMT also tell you if the program may not finish in that state?
<remexre> yeah
<siraben> That's very cool. Recently, hevm (the haskell implementation of the EVM) added symbolic execution. https://fv.ethereum.org/2020/07/28/symbolic-hevm-release/
<remexre> and you can do queries while you're building up constraints (e.g. before you index an array, check "is the index out of bounds?")
<siraben> Which is a big step up for formal methods on the blockchain
<remexre> ah, nice!
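A rough sketch of the "Expr memory[...]" idea described above, with invented constructor names; the point is just that the interpreter carries expression trees and path constraints instead of concrete bytes.

    -- Symbolic values: expression trees instead of concrete machine words.
    data Expr
      = Lit Integer              -- a known concrete value
      | Var String               -- an unknown input, e.g. "mem1"
      | Add Expr Expr
      | Gt  Expr Expr
      | Ite Expr Expr Expr       -- if-then-else on a symbolic condition
      deriving Show

    -- "memory[0] might equal (if memory[1] > memory[2] then 5 else memory[1])"
    memory0 :: Expr
    memory0 = Ite (Gt (Var "mem1") (Var "mem2")) (Lit 5) (Var "mem1")

    -- As execution branches, conditions accumulate as path constraints; each
    -- query ("can this index go out of bounds?") is the conjunction of the
    -- constraints plus the property, handed off to an SMT solver.
    type PathConstraints = [Expr]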
<siraben> Oh yes, I was reading a paper on bounds checking that used refinement types.
litvinof has joined #forth
<siraben> Similar idea, I suppose.
<siraben> Looks like the way hevm did it is relatively lightweight for novice programmers; a simple assertion is all that's needed for symbolic execution to go ahead.
<siraben> Considering the number sizes are like 2^256, it's impossible to enumerate them in a reasonable amount of time.
<remexre> yeah, smt solvers are really magical
* siraben hasn't written one yet
<remexre> me neither, and I don't really wanna; my understanding is that they're (usually) a huge pile of often-gross heuristics, often on top of a sat solver
<remexre> which then itself is another huge pile of heuristics
<siraben> There's a little toy implementation I wrote of how HM works in a concatenative language https://github.com/ActorForth/ActorForth/blob/haskell/forth.hs
<siraben> Yeah, because SAT solving is NP-complete.
<siraben> DPLL and constraint propagation are popular techniques IIRC
proteus-guy has joined #forth
<siraben> Even Algorithm J in the Hindley-Milner system can take exponential time for pathological expressions; at least it's not undecidable.
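One classic illustration of how HM types can blow up, sketched in Haskell: each application of a pairing function doubles the size of the inferred type, so the principal type grows exponentially with the nesting depth.

    d :: a -> (a, a)
    d x = (x, x)

    -- The inferred type doubles in size with each extra 'd'; five applications
    -- already give a type containing 2^5 copies of (a -> a).
    blowup = d (d (d (d (d id))))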
<remexre> yeah, I'm not looking forward to when I have to do higher-order unification for my language; I've mostly been working on the middle-back-end (after type inference+unification, before codegen)
<remexre> in your HM, are (:-) argument types?
<siraben> Basically cons at the type level
<siraben> Yeah, I seriously want to implement dependent types, checking first then maybe bidirectional type inference
<remexre> hm, so would a bool-dup have type (T "Bool" :- T "Bool" :- TVar "a") :-> TVar "a" ?
<siraben> Have you seen Ben Lynn's blog posts? They're a gem. Here's one where he implements type operators; https://crypto.stanford.edu/~blynn/lambda/typo.html
<siraben> Bool dup would be T "Bool" :- TVar "a" :-> T "Bool" :- T "Bool" :- TVar "a"
<remexre> er wait yeah I got that really wrong
<siraben> I should say, x :- y is like (x, y)
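A minimal sketch of the stack-type encoding being discussed (hypothetical constructor names, loosely following the forth.hs linked above):

    data Type
      = T String            -- a base type, e.g. T "Bool"
      | TVar String         -- a type variable
      | Type :- Type        -- type-level cons: top of stack :- rest of stack
      | Type :-> Type       -- a word's stack effect: input stack :-> output stack
      deriving Show
    infixr 5 :-

    -- dup specialised to Bool: consume a Bool on top of some stack "a",
    -- leave two Bools on top of that same stack.
    boolDup :: Type
    boolDup = (T "Bool" :- TVar "a") :-> (T "Bool" :- T "Bool" :- TVar "a")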
<remexre> sorry, trying to have like 3 convos at once lol
<siraben> Haha
<siraben> You have a repo where you implement a dependently typed Lisp with algebraic effects?
<remexre> I do bidirectional typing in the typechecker I'm doing now; the fuller unification inference I want would be so I could write e.g. id False or id _ False instead of needing to do id Bool False
<siraben> Right.
<siraben> What's the guide to doing bidirectional type inference?
<remexre> but the bidi inference is not much harder than checking on fully annotated terms; https://davidchristiansen.dk/tutorials/nbe/ is a great reference
<siraben> I guess I need to look into what bidirectional means in this context in the first place.
<remexre> bidirectional is just: you have both
<remexre> checkType :: Term -> Type -> m ()
<remexre> and
<siraben> Thanks, looks great.
<remexre> inferType :: Term -> m Type
<remexre> and they're mutually recursive
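A minimal sketch of that mutually recursive pair, for a tiny simply typed lambda calculus (invented names, not the tutorial's code): checking handles lambdas against arrow types, and inference handles variables, applications, and annotations.

    import Control.Monad (unless)

    data Ty = TBool | TArr Ty Ty deriving (Eq, Show)

    data Term
      = Var String
      | Lam String Term        -- unannotated: can only be checked
      | App Term Term
      | BoolLit Bool
      | Ann Term Ty            -- an annotation switches from checking to inference

    type Ctx = [(String, Ty)]
    type TC a = Either String a

    checkType :: Ctx -> Term -> Ty -> TC ()
    checkType ctx (Lam x body) (TArr a b) = checkType ((x, a) : ctx) body b
    checkType _   (Lam _ _)    ty         = Left ("lambda checked against " ++ show ty)
    checkType ctx tm           ty         = do
      ty' <- inferType ctx tm
      unless (ty == ty') (Left "type mismatch")

    inferType :: Ctx -> Term -> TC Ty
    inferType ctx (Var x)     = maybe (Left ("unbound " ++ x)) Right (lookup x ctx)
    inferType _   (BoolLit _) = Right TBool
    inferType ctx (Ann tm ty) = ty <$ checkType ctx tm ty
    inferType ctx (App f x)   = do
      fTy <- inferType ctx f
      case fTy of
        TArr a b -> b <$ checkType ctx x a
        _        -> Left "applying a non-function"
    inferType _   (Lam _ _)   = Left "cannot infer an unannotated lambda"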
<siraben> Oh, David Christiansen also wrote the Little Typer IIRC
<remexre> yep, the last chapter here is a case study w/ a subset of the little typer's lang
<remexre> for my lang, the algebraic effects are shelved for the moment, I haven't been able to figure out how to do them such that I can do backtracking, but can't do an infinite loop, without adding linear types
<siraben> Great, the PDF looks very comprehensive.
<remexre> repo is https://git.sr.ht/~remexre/stahlos though; the exp and src/compiler-bootstrap dirs are the relevant ones for the lang
<remexre> I've been working on the ocaml branch for the last couple of days; right now, I'm trying to do datatype declarations in such a way that they're not really "wired in"
<siraben> What do algebraic effects have to do with backtracking and linear types?
<remexre> but defining the eliminators (like the ind-* things from the little typer) is weird and annoying then
<remexre> backtracking is an effect I might want to have
<remexre> and linearly typing the continuation that the effect handler is passed makes infinite loops harder (impossible, I think, but not sure) to write
<siraben> Ah I see.
<siraben> Logic errors become type errors!
<siraben> I never thought that linear types could solve the problem of detecting loops in a backtracking effect, wow.
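A toy illustration of the "handler gets the continuation" point, in plain CPS over lists rather than a real algebraic-effects library (all names invented): the coin-flip handler resumes its continuation twice, which is exactly the backtracking, and exactly the kind of reuse a linear typing of the continuation would restrict.

    {-# LANGUAGE RankNTypes #-}

    -- A nondeterminism "effect" in continuation-passing style.
    newtype Eff a = Eff { runEff :: forall r. (a -> [r]) -> [r] }

    instance Functor Eff where
      fmap f (Eff m) = Eff (\k -> m (k . f))

    instance Applicative Eff where
      pure x = Eff (\k -> k x)
      Eff mf <*> Eff mx = Eff (\k -> mf (\f -> mx (k . f)))

    instance Monad Eff where
      Eff m >>= f = Eff (\k -> m (\a -> runEff (f a) k))

    -- The handler for a coin flip resumes the continuation twice.
    flipCoin :: Eff Bool
    flipCoin = Eff (\k -> k True ++ k False)

    allResults :: Eff a -> [a]
    allResults (Eff m) = m (\a -> [a])

    -- allResults (do a <- flipCoin; b <- flipCoin; pure (a, b))
    --   == [(True,True),(True,False),(False,True),(False,False)]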
<siraben> Is the plan to write an OS in this language?
<remexre> yeah, with everything that can't be written in it written in Forth instead
<remexre> which is what the src/kernel-aarch64 directory is
<remexre> right now that's at the point where it can boot on real hardware, allocate memory, run a repl
<siraben> Fascinating.
<remexre> but the repl spinloops on the serial port instead of using interrupts
<siraben> There's also a proof directory in the repo; have you done much proving in Lean?
<remexre> basically none
<remexre> though now that I'm looking through that, there's a lot more there than I remember writing
<siraben> What would stahl be used for in the OS?
<siraben> Userspace programs?
<remexre> so I'm not really having a strong userspace/kernelspace distinction, other than that stahl programs can't do pointery things, the compiler automatically adds preemption points, etc
<remexre> the answer is "as much as possible," I guess
<remexre> like even for device drivers, I'm going to try to have Forth run the interrupt handlers, but just set a flag for Stahl code to respond to
<remexre> (and schedule the relevant driver)
<siraben> I see. Why make it a dependently typed language with effects?
<remexre> dependent types for increased expressivity, and to allow proving the "really important parts"
<remexre> the effects aren't really baked in at all in the current design
<remexre> and I need some way to do them
<remexre> and algebraic effects seem like the least bad way
<remexre> I'm probably gonna reimplement https://hackage.haskell.org/package/fused-effects once I have something typeclass-lookin'
<siraben> Yeah, effects are great, and it seems like using them to express what a program can and cannot do at the OS level is a novel idea.
Zarutian_HTC has quit [Ping timeout: 256 seconds]
X-Scale` has joined #forth
X-Scale has quit [Ping timeout: 260 seconds]
X-Scale` is now known as X-Scale
proteus-guy has quit [Ping timeout: 260 seconds]
proteus-guy has joined #forth
WickedShell has joined #forth
tolja has quit [Ping timeout: 264 seconds]
tolja has joined #forth
gravicappa has joined #forth
xek has joined #forth
proteus-guy has quit [Ping timeout: 260 seconds]
proteus-guy has joined #forth
proteus-guy has quit [Ping timeout: 256 seconds]
cox has joined #forth
cox has quit [Client Quit]
xek has quit [Quit: Leaving]
gravicappa has quit [Ping timeout: 246 seconds]
Zarutian_HTC has joined #forth
mark4 has joined #forth
jsoft has joined #forth