19 votes

Koka - A Functional Language with Effect Types and Handlers

5 comments

  1.
    nrktkt
    This looks cool. I'm wrapping my head around effect handlers.
    A couple things jump out that I need to think through for use in practice.
    One is that effects don't seem to indicate whether they resume. I have to think it through, but is it really OK that only the person defining the handler knows whether it will resume or short-circuit like an exception?
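    For example, going by the ask example in the docs, a resuming handler and a short-circuiting one look identical from the callee's side (a rough, untested sketch adapting the docs' ask effect):

      effect ask
        ctl ask() : int

      fun add-twice() : ask int
        ask() + ask()

      fun resuming() : int
        with ctl ask() resume(21)   // resumes: add-twice gets 21 twice, returns 42
        add-twice()

      fun aborting() : int
        with ctl ask() 0            // never calls resume: short-circuits to 0
        add-twice()

    As far as I can tell, add-twice has the same effect type either way, so nothing at the call site tells you which behavior you'll get.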
    Another thing is that handlers are defined at the top of methods rather than in the body, near where methods are invoked. Maybe this is addressed in "Named and Scoped Handlers", but that's still a todo and the link is broken.
    I'm thinking about this mostly in the frame of error handling, but the idea that the same effect system also works for side-effect isolation and async/await is obviously appealing.

    Last thought: it's nice to see a natively compiled functional language with reference counting. I'd been thinking about that concept for a long time, but I'm nowhere near skilled enough to even try to play with it.

    8 votes
    1. treed
      I think the handlers being at the top of the method is just convention in the examples. If I read the docs right, handlers just accept a closure, and the with keyword makes the rest of the body a closure. You could put the with closer to the use site, or even fully enclose a use midway through a function body by not using with. (I think.)
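      Something like this, if I've got the desugaring right (untested sketch; ask and add-twice as in the docs' examples):

        fun top-handled()
          with ctl ask() resume(21)   // scopes over the whole rest of the body
          println(add-twice())

        fun mid-handled()
          println("ask is not handled here")
          // the same handler without with, wrapping just one expression:
          (handler { ctl ask() resume(21) })( fn() println(add-twice()) )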

      1 vote
  2.
    adutchman
    Thank you for sharing! I have been thinking about FP, about why FP language implementations are generally slow, and about why I haven't seen an FP language take a Rust-like approach to compilation. I think this is exactly such a language. I had also come up with the idea behind FBIP on my own, and I am surprised that languages like F# and OCaml don't do it, because it seems so obvious (or am I wrong about this?). I am still wrapping my head around the effects. One thing I do not understand is why exceptions are used instead of an error type.

    Edit: I think I get it now: there are both exceptions and error types. It seems to me that exceptions are like Rust's panics and error types are like Rust's errors.
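    What made it click was that an exception-like effect is one handler away from an error type, as in the raise-to-maybe example from the docs (from memory, untested):

      effect raise
        ctl raise( msg : string ) : a

      fun raise-maybe( action : () -> <raise|e> a ) : e maybe<a>
        with ctl raise(msg) Nothing   // drops the continuation: no resume
        Just(action())

      // e.g. raise-maybe( fn() safe-divide(1,0) ) evaluates to Nothing,
      // with safe-divide being the docs' raising division function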

    4 votes
    1.
      wirelyre
      I won't pretend to be an expert, but I'll note that, to summarize the paper, FBIP and the prior work are pretty tricky. Once you decide to generate a modify-in-place control path, you've got a whole tangle of linear constraints in your compiler IR that simply don't exist in the source language. (Unless they do, in which case you get ATS.)

      Deciding to reuse an allocation at runtime means you have to know it's not going to be accessed later — in Koka that's done through reference counting, but OCaml and F♯ need garbage collectors because they can have self-references. So it's natural to track values starting from their constructors and try to inline functions / beta-reduce to put non-escaping allocations on the stack. Well, OCaml does that and I'm positive F♯ and its JIT do too.
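      To make that concrete, the map example from the Perceus paper (paraphrased from memory) shows the payoff:

        fun map( xs : list<a>, f : a -> e b ) : e list<b>
          match xs
            Cons(x, xx) -> Cons( f(x), map(xx, f) )
            Nil         -> Nil

        // when xs is unique at runtime (refcount 1), the compiled code reuses
        // each Cons cell in place instead of freeing and reallocating it;
        // a shared list transparently falls back to fresh allocations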

      You can do different stuff with different language semantics. Koka is a squeaky-clean slate with simple values and a precise effect system. It's also a research language. By contrast, the Haskell compiler can get away with murder because of laziness and the fact that it basically has no effects at all.

      5 votes
      1. Moonchild
        You don't need types nearly as fancy as ATS's to model uniqueness; see, for example, Futhark, Clean, and Rust. (I am not giving an opinion on whether this is a good idea.)

        I don't really understand the comment about 'a whole tangle of linear constraints in your compiler IR that simply don't exist in the source language'. For one, you need a 'whole tangle' of interactions and constraints anyway in order to implement many other optimisations, including escape analysis, which you mention. For two, whether an operation is done in-place or not is not something that really matters to other optimisations, so it is not problematic to decide it late, as part of instruction selection and representation selection; there are not really phase-ordering problems (standard caveats and rebuttals apply).

        Contrariwise, deciding early adds unnecessary redundancy to the IR. Consider 'f (Cons x y) = g (Cons x+1 y) (Cons x y+1)'. One of those conses can be 'in place' while the other cannot, but if I decide which one too early, then that's now something that I have to care about.
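        Spelled out (Koka-ish pseudocode; Cons here is just a two-field constructor):

          fun f( c )
            match c
              Cons(x, y) -> g( Cons(x + 1, y), Cons(x, y + 1) )

          // if c is unique, at most one of the two fresh Cons cells can reuse
          // its allocation; fixing which one early is a choice every later
          // pass must then respect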

        Reference counting is a form of garbage collection, and non-naive forms (i.e., not what Swift/C++/Rust do) can work very well even in languages that admit concurrency and cycles, if combined with mechanisms such as backup tracing to deal with those cycles (see e.g. LXR). In these cases, the reference count provides an upper bound on the number of references to an object, so it can be soundly used to decide to mutate in place. Further, something like backup tracing is probably something you want with any allocation strategy, to deal with fragmentation; see e.g. Shipilev on the subject.

        If you never mutate and use bump allocation, then pointers can only point backwards in the heap, which is a useful property that a tracing collector can take advantage of. You lose this if the implementation mutates opportunistically. Whether this is a worthwhile trade is not obvious (there is a reason I own https://performanceisworkloaddependent.com/...).

        I don't really buy the 'GHC = magic' or 'lazy = fast' memes; I opined on the matter somewhat here.

        2 votes