This post summarises some recent experiments and learnings around concurrency & Koka. There’s no immediate application yet, just a bunch of thoughts which might be interesting if you’re into concurrency, parallelism, or Koka. If you’ve never heard of Koka before, that’s OK; you don’t really need any prior knowledge (but I wrote about it here).

Koka and concurrency:

There are a few active avenues of interest when it comes to Koka and concurrency.

First, there’s the async effect, currently implemented in the community stdlib. That effect lets a function suspend execution and await the result of a callback, and lets multiple async operations run concurrently. It’s basically an implementation of the async/await semantics found in JS and many other languages, but as a library (rather than built into the compiler). Currently this only supports Koka code compiled with the JS backend. Under the hood, an async operation is represented as a continuation function: when the operation completes, that function is invoked and, from the perspective of the Koka code you write, the async code resumes from where it left off.
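To make the continuation idea a bit more concrete, here’s a rough sketch in plain Koka effect-handler terms. The `await-result` operation and its handler are made up for illustration; this is not the community stdlib’s actual async API, and a real handler would stash `resume` and invoke it from a JS callback rather than resuming immediately.

```koka
// Hypothetical sketch, not the community async library's API.
effect await-result
  ctl await-result(query : string) : int

fun fetch-twice() : await-result int
  val a = await-result("first")
  val b = await-result("second")
  a + b

fun main()
  with ctl await-result(query)
    // `resume` is the captured continuation: the rest of fetch-twice,
    // suspended at the point of the await-result call. A real async
    // backend would store it and call it when the callback fires;
    // here we complete the operation immediately.
    println("requested: " ++ query)
    resume(42)
  println(fetch-twice().show)   // prints 84
```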

There’s also work going on to create libuv bindings for Koka. There’s a fully featured attempt in this community repo, as well as a more minimal version in Koka itself with just the core scheduling primitives. This work makes it possible to run async Koka code in compiled binaries (the C target).

Both of these are currently limited to single-threaded concurrency, i.e. concurrency without parallelism. This is nothing to sneeze at: it has served Node.js well for more than a decade, and it’s a step up from the many scripting languages which only support synchronous IO.

Koka and parallelism:

But at the same time, Koka has some great fundamentals when it comes to true parallelism (with multi-threading). The way to share mutable state is via a ref, which is already thread-safe. And Koka’s reference counting algorithm was designed to perform well in both single- and multi-threaded environments.
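For what it’s worth, here’s a tiny sketch of the ref API itself. Nothing runs in parallel here (Koka doesn’t yet expose threads to user code); it just shows the shared-mutable-state primitive that the thread-safety claim is about.

```koka
// Minimal ref usage; effects (heap read/write, console) are inferred.
fun ref-demo()
  val counter = ref(0)        // allocate a mutable reference cell
  counter := !counter + 1     // `!` reads the cell, `:=` writes it
  println((!counter).show)    // prints "1"
```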

All this is to say: single-threaded concurrency is cool and all, but there’s no technical reason Koka couldn’t support true parallel concurrency like Go, Rust, OCaml and Guile Scheme.

What’s Guile Scheme? Don’t ask me, I’ve never used it, but I think of it often when it comes to concurrency. Years ago, I read Andy Wingo’s excellent series on implementing Concurrent ML primitives in Guile Scheme. It’s stayed in the back of my mind as an interesting point in the concurrency design space, and seems to keep coming up as a lesser-known approach which ought to be more widely known and adopted. See also Concurrent ML has a branding problem, where the tl;dr is “Concurrent ML’s primitives are great, but the terms used to describe it are confusing, so people ignore it”.