I see two downsides. Looking at this snippet:

    my_function (): Unit can AllErrors =
      x = LibraryA.foo ()
      y = LibraryB.bar ()
The first thing to note is that there is no indication that foo or bar can fail. You have to look up their type signature (or at least hover over them in your IDE) to discover that these calls might invoke an error handler.

The second thing to note is that, once you ascertain that foo and bar can fail, how do you find the code that will run when they do fail? You would have to traverse the callstack upwards until you find a 'with' expression, then descend into the handler. And this cannot be done statically (i.e. your IDE can't jump to the definition), because my_function might be called from any number of places, each with a different handler.

I do think this is a really neat concept, but I have major reservations about the readability/debuggability of the resulting code.




> how do you find the code that will run when they do fail?

That's part of the point: it's dynamic code injection. You can use shallow- or deep-binding strategies for implementing this, just as with any dynamic feature. Dynamic means precisely that: bindings are introduced by the call frames of callers, or callers of callers, etc., so yes, notionally you have to traverse the stack.

> And this cannot be done statically (i.e. your IDE can't jump to the definition),

Correct, because this is a _dynamic_ feature.

However, you are expected not to care. Why? Because you're writing code that is pure except for the effects it invokes, and those effects could be pure or impure depending on context. Thus your code can be used in prod and hooked up to a mock for testing, where the mock simply interposes effects other than real IO effects.

It's just dependency injection.

You can do this with plain old monads too, you know, and that's a much more static feature, but you still need to look way up the call stack to find where _the_ monad you're using might actually be instantiated.
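
To make that concrete, here's a minimal mtl-style Haskell sketch (made-up names, assuming the mtl library): the code below only commits to "some monad m that can throw String errors"; which concrete monad that is, and therefore what actually happens on failure, is decided by whoever eventually runs it, possibly far up the call stack.

    {-# LANGUAGE FlexibleContexts #-}
    import Control.Monad.Except (MonadError, throwError, runExcept)

    -- Written against an abstract monad: no clue here how errors are handled.
    parsePositive :: MonadError String m => Int -> m Int
    parsePositive n
      | n > 0     = pure n
      | otherwise = throwError "not positive"

    -- One particular caller instantiates m as Except String, so the error
    -- simply becomes a Left value.
    demo :: Either String Int
    demo = runExcept (parsePositive 5)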

In other words, you get some benefits from these techniques, but you also pay a price. And the price and the benefit are two sides of the same coin: you get to do code injection that lets you do testing and sandboxing, but it becomes less obvious what might be going on.


> However, you are expected not to care. Why? Because you're writing pure code but for the effects it invokes, but those effects could be pure or impure depending on context.

Does that work out in practice? Genuinely curious if anyone has experience with such systems at scale or in legacy applications where the original authors left long ago. I'm skeptical because, in my experience, not everything is or can be designed perfectly pure without abstraction leakage. At some point you need to understand all the behavior of a certain sub-system, and the less clever the design is, the easier that becomes.


It's not that different from passing a callback handler, or any of the litany of other dynamic features. Like, this is routinely done everywhere from C to Java to JS, simply because you need dynamism in the vast majority of problems.


I wrote my bachelor's thesis about IDE support for lexical effects and handlers: https://se.cs.uni-tuebingen.de/teaching/thesis/2021/11/01/ID...

All of what you state is very doable.


> [...] how do you find the code that will run when they do fail? You would have to traverse [...]

I work in a .NET world and there many developers have this bad habit of "interface everything", even if it has just 1 concrete implementation; some even do it for DTOs. "Go to implementation" of a method, and you end up in the interface's declaration so you have to jump through additional hoops to get to it. And you're out of luck when the implementation is in another assembly. The IDE _could_ decompile it if it were a direct reference, but it can't find it for you. When you're out of luck, you have to debug and step into it.

But this brings me to dependency injection containers. More powerful ones (e.g., Autofac) can establish hierarchical scopes, where new scopes can (re)define registrations; similar to LISP's dynamically scoped variables. What a service resolves to at run-time depends on the current DI scope hierarchy.

Which brings me to the point: I've realized that effects can be simulated to some degree by injecting an instance of `ISomeEffectHandler` into a class/method and invoking methods on it to cause the effect. How the effect is handled is determined by the current DI registration of `ISomeEffectHandler`, which can be varied dynamically throughout the program.

So instead of writing

    void DoSomething(...) {
        throw new SomeException(...);
    }
you establish an error protocol through interface `IErrorConditions` and write

    void DoSomething(IErrorConditions ec, ...) {
        ec.Report(...);
    }
(Alternately, inject it as a class member.) Now, the currently installed implementation of `IErrorConditions` can throw, log, or whatever. I haven't fully pursued this line of thought with stuff like `yield`.
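
Since Haskell comes up elsewhere in this thread, here is roughly the same handler-injection idea sketched there (hypothetical names, no particular library): the "effect" is a record of functions, and callers decide which implementation to pass in.

    import Control.Exception (ErrorCall (..), throwIO)

    -- Hypothetical stand-in for the IErrorConditions interface above.
    newtype ErrorConditions = ErrorConditions
      { report :: String -> IO () }

    throwingHandler :: ErrorConditions
    throwingHandler = ErrorConditions { report = throwIO . ErrorCall }

    loggingHandler :: ErrorConditions
    loggingHandler = ErrorConditions { report = putStrLn . ("error: " ++) }

    doSomething :: ErrorConditions -> Int -> IO ()
    doSomething ec n
      | n < 0     = report ec "negative input"  -- behaviour depends on the injected handler
      | otherwise = print n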


The annoyance is that the .NET standard library already does this precise thing, but haphazardly and in far fewer places than ideal.

ILogger and IProgress<T> come to mind immediately, but IMemoryCache too if you squint at it. It literally just "sets" and "gets" a dictionary of values, which makes it a "state" effect. TimeProvider might be considered an algebraic effect as well.


> I work in a .NET world and there many developers have this bad habit of "interface everything", even if it has just 1 concrete implementation

I work on a Java backend that is similar to what you're describing, but IntelliJ IDEA is smart enough to notice there is exactly one non-test implementation and bring me to its source code.


not that familiar with java, but in .net when you do this, it is very common for the implementation to be in a separate assembly, part of a different project


Doesn’t that imply an interface is necessary though, so you can compile (and potentially release) the components separately? I don’t use .net but this sounds quite similar to pulling things into separate crates in Rust or different compilation units in C, which is frequently good practice.


Definitely, that could imply the necessity of an interface, but often it's done simply because everyone working on the project blindly follows an already-established poor convention.


by "it's common in the .Net world" I mean that it seems to be an antipattern that's blindly followed. If there's only ever one implementation, it is the interface, imo


> you end up in the interface's declaration so you have to jump through additional hoops to get to it

Bit of a tangent, but this really annoys me when I work on TypeScript (which isn't all that often, so maybe there's some trick I'm missing)—clicking through to check out the definition of a library function very often just takes me to a .d.ts file full of type definitions, even if the library is written in TypeScript to begin with.

In an ideal world I probably shouldn't really need to care how a library function is implemented, but the world is far from ideal.


> The first thing to note is that there is no indication that foo or bar can fail

I think this is part of the point: we are able to simply write in direct style, and not worry at all about the effectual context.

> how do you find the code that will run when they do fail

AFAIU, this is also the point: you are able to abstract away from any particular implementation of how the effects are handled. The code that will run when they fail is determined later, whenever you decide how you want to run it. Just as, in `f : g:(A -> B) -> t(A) -> B`, there is no way to find "the" code that will run when `g` is executed, because we are abstracting over any particular implementation of `g`.
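
A tiny Haskell sketch of that same point (my own example, not from the article): inside the higher-order function there is no single implementation of `g` to jump to; each call site supplies its own.

    -- applyTo abstracts over g the way an effectful function abstracts over
    -- its handler: "the" code for g doesn't exist until a caller picks one.
    applyTo :: (a -> b) -> [a] -> [b]
    applyTo g = map g

    doubled = applyTo (* 2)  [1, 2, 3]   -- here g is (* 2)
    negated = applyTo negate [1, 2, 3]   -- here g is negate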


In languages like JavaScript, function calls that can throw are completely indistinguishable. In Go, calling a function that can fail is explicit and takes three lines of boilerplate, if you just want to propagate the error. That seems like too much. Rust has the ‘?’ operator, which is one character of boilerplate.

Though it does add noise, one character of boilerplate to indicate a function call that uses effects seems like the right amount? Functions that use lots of effects will likely have this character on every function call, but that seems like a good indicator that it’s tricky code.


Rust and Go errors are not really effectful in the "side effect" sense, they're just an ordinary return value of a function. There's really no difference between returning a result and any other enum. Otoh, panics are effectful, and that's the kind of thing you'd want to capture in an effect system.


Sure, but in a language with an effect system, it seems like effects would be used for errors, so it seems worth comparing error-handling techniques.

Go uses a single base type (interface) for “expected” errors and panics for errors that aren’t normally caught. I suppose those would be two different effects? For the “expected” errors, some kind of annotation on the function call seems useful.


The way I see it, effects would be implemented/assigned to the function where the error gets logged, for instance. But as long as the error value remains local, a function can still be pure, all else being equal.

This is not the case with exception-like code (aka panics), since it escapes normal control flow and, I guess, makes an error "punch" through stacks as they are unwound.

A bit like being thrown toward global state. So if we consider panics as side-effectful operations, we would have to change Go to explicitly declare panics as side effects with a known function signature.

I guess one would want the list of side effects to be part of a function signature for full visibility. I wonder how that would influence backward compatibility.


It looks like exceptions (write the happy path in direct style, etc), but with exceptions, there is a `catch`. You can look for it and see the alternate path.

What might be a good way to find / navigate to the effectual context quickly? Should we just expect an IDE / LSP to color it differently, or something?


There's a `catch` with effects as well, though: the effect handler. And it works very similarly to `catch` in that it's not local to the function, but happens somewhere in the calling code. So if you're looking at a function and you want to know how that function's exceptions get handled, you need to look at the calling code.


This is the case anytime errors are propagated to callers instead of handled locally, which is probably most cases.


Just to clarify because the grammar is a bit ambiguous in your comment: which case do you see as the common case? I suspect in most cases, people don't handle errors where they are thrown, but rather a couple of layers up from that point.


> I suspect in most cases, people don't handle errors where they are thrown, but rather a couple of layers up from that

Agreed, that's the point I was trying to make.


> And this cannot be done statically (i.e. your IDE can't jump to the definition), because my_function might be called from any number of places, each with a different handler.

I believe this can be done statically (that's one of the key points of algebraic effects). It would work essentially the same as "jump to caller", where your IDE would give you a selection of options, and you can find which caller/handler is the one you're interested in.


This suggests a plausible novel IDE feature: list callers of a function, but only those with a handler for a certain effect (maybe when the context-sensitive command is invoked on the name of one of the effects rather than on the name of the function).


What do you mean by "only those"? Each caller of an effectful function will have to handle the effect somehow, possibly by propagating it further up the chain. Is that what you meant, an IDE feature to jump to the ((grand-)grand-)parent callers defining the effect handler?


> there is no indication that foo or bar can fail

Sounds like you're just criticizing try-catch style error handling, rather than criticizing algebraic effects specifically.

Which, I mean, is perfectly fair: it's fine not to like this sort of error handling (the lack of call-site indication that an exception can be raised). But it's not really a step backward from the vast majority of programming languages. And there are some definite upsides to it as well.


Since effects are powerful enough to implement generators and cooperative multitasking, it seems like it’s more than just exceptions? Calling some functions could task-switch and do arbitrary computation before returning at some arbitrary time later. It might be nice to know which function calls could do that.

I’m not a fan of how ‘await’ works in JavaScript because accidentally leaving it out causes subtle bugs. But the basic idea that some function calls are simple and return immediately and others are not makes sense.


Even regular sync functions in JavaScript can do lots of computations that take a long time. And others perform effects like starting a timer or manipulating the DOM. Should these be indicated with keywords too?

I agree that await is a nice hint in the code that something more substantial is happening, but ultimately it’s limited and quite opaque.

Great IDE support for algebraic effects would probably include inline hints to show effects.


In a language with an effect system, presumably modifying the DOM would be an effect.

Maybe some effects should have call sites annotated, like ‘async’, and others should not, like DOM manipulation? It seems like an API design issue. What do you want to call out for extra attention?


I think the readability problem can be solved by having your LSP tell your editor to display some virtual text, indicating that the foo and bar calls might error.

I have to admit I don't understand the second point. If you could statically determine from the definition of foo and bar what code handles their errors, then there would be no reason for foo or bar to error: they could just call the error-handling code directly. If foo and bar return Result sum types and my_function just passes those errors up, it would be no different. You don't know what callers of my_function would do with those errors.
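
For instance, in an Either-based Haskell sketch (made-up names), the propagation looks like this, and the `Left` value still says nothing about which caller eventually handles it:

    foo :: Either String Int
    foo = Right 1

    bar :: Either String Int
    bar = Left "bar failed"

    -- myFunction just passes the error upward as a value; whether a caller
    -- logs it, retries, or turns it into a response is decided elsewhere.
    myFunction :: Either String Int
    myFunction = do
      x <- foo
      y <- bar
      pure (x + y)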


> no indication that foo or bar can fail ... how do you find the code that will run when they do fail

If that's what you're looking for, you might want to try my Haskell effects library Bluefin (it's not an "algebraic" effects library, though). The equivalent code would be

    myFunction :: (e :> es) => Exception String e -> Eff es r
    myFunction ex = do
      x <- LibraryA.foo ex
      y <- LibraryB.foo ex
      z <- LibraryC.foo
      ...
This answers the first part of your question: the presence of the `ex` argument (an `Exception String` handle) shows that a String-valued exception can be thrown wherever it is used. For example, we know that `LibraryC.foo` does not throw that exception.

It also answers the second part of your question: the code that runs on failure is exactly the code that created the `Exception String` handle. Any exception arising from that handle is always caught where the handle was created, and nowhere else. For example, it could be here:

    try $ \ex -> do
        v <- myFunction ex
        ...
`try` catches the exception and turns it into the `Left` branch of a Haskell `Either` type. Or it could be here:

    myFunction :: (e :> es) => Exception String e -> Eff es r
    myFunction ex = do
      catch
        (\ex2 -> do
          x <- LibraryA.foo ex
          y <- LibraryB.foo ex2
          z <- LibraryC.foo
          ...)
        (\errMsg -> logErr errMsg)
So the exception thrown by `LibraryB.foo` is always handled by `logErr` (and nowhere else), and the exception thrown by `LibraryA.foo` is always handled by the exception handler higher up which created `ex` (and nowhere else).

Let me know what you think!


Looks slick!

I'm a bit confused at the distinction of "algebraic" vs "non-algebraic" effects. Think you can give a brief example of what you mean?

Also, I know you've taken a slightly different direction with Bluefin than other Haskell effects libraries (effects as values vs types). Is this related to the distinction above?


> I'm a bit confused at the distinction of "algebraic" vs "non-algebraic" effects. Think you can give a brief example of what you mean?

I don't fully understand what exactly "algebraic" effects are, but I think they're something like the entirety of effects that can safely be implemented with delimited continuations. Since Bluefin doesn't use delimited continuations (just the traditional standard GHC RTS effects: exceptions, state, IO, threads), it's not "algebraic".

> Also, I know you've taken a slightly different direction with Bluefin than other Haskell effects libraries (effects as values vs types). Is this related to the distinction above?

No, it's an orthogonal axis. Here's an experimental Bluefin-style API over delimited continuations (i.e. supports all algebraic effects): https://hackage.haskell.org/package/bluefin-algae


> The first thing to note is that there is no indication that foo or bar can fail.

I don't see how this is different from traditional programming.

> I do think this is a really neat concept, but I have major reservations about the readability/debuggability of the resulting code.

I don't think that readability will be harmed, but I can understand your concerns about debugging. I feel that cluttering the code with consistency/error checks everywhere actually harms readability much more.


> The first thing to note is that there is no indication that foo or bar can fail. You have to lookup their type signature (or at least hover over them in your IDE) to discover that these calls might invoke an error handler.

This is not a meaningful drawback. It's 2025. IDEs have existed for almost half of a century. Crippling programming languages because you want to shove everything into the source code (in a text-based format no less) is insane. Let's actually try to move programming into the 21st century, please.

> And this cannot be done statically (i.e. your IDE can't jump to the definition), because my_function might be called from any number of places, each with a different handler.

This is wrong. Your IDE can find call sites, build a call graph, and show you the places (parents in the call graph) where an error might be handled for a particular function.



