I try not to mention other blogs, or posts, or things like that. Unfortunately, this time it is needed. So, here we are.

Some time ago, I found this what color is your function post. I thought, originally, that the “color” there was supposed to be a metaphor for other stuff, kind of like how the post about Railroad Oriented Programming was just the Either monad in an easier to digest format.

Turns out… people started to use this argument to explain why some language/library is better than another. And that is weird, and wrong, for a simple reason: there’s no such thing as a “color-based function”.

Ok, I’ll say it again: there is NO SUCH THING as a color-based function. And I’ll go further: the whole post shows a complete lack of understanding of types, and I’m speaking as someone that likes, and prefers, dynamically-typed languages, so you can imagine how freaking wrong the original author is. So let’s dive into the types, shall we?

So, I’ll start with this innocent function: nothing “async” about it:

registerCallback(f: (p: number) => number): void {
  this.callback = f
}

For those that are not familiar with Typescript notation, this is basically a function that accepts another function, whose first argument is a number and whose return is a number. This registerCallback function (method, really) returns nothing, or void. So far, so good. The type of this function is ((p: number) => number) => void, or, in Java-ish notation, Function<Function<Integer, Integer>, Void>.

Now, how can I convert ((p: number) => number) => void to ((p: number) => number) => number? Well, if you know anything about static type systems, you know that… well… you can’t. You actually can’t. It’s impossible. Nothing is compatible here. And, guess what? The same is true for Promise types.

I actually kind of want to go to the original blog post about “colors”, take every point, and debunk it, but then this post would be a million words long because… well, almost everything the author says is wrong. So I’ll stay in my lane and just explain some non-basic typing. So, let’s start with the question: can you convert a Cellphone to a Kitchen?

Well, obviously not. It’s even weird to ask this question. Which means… some types can’t be converted to others. In my previous example, I have a function that accepts a callback and returns nothing. Can I convert a function that accepts a callback into one that just returns whatever the callback returns? Well, obviously NO – the callback can be called multiple times, can be called only once, or never be called at all, so how would a “transformation” like that even work?
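To make the impossibility concrete, here’s a small Typescript sketch (all the function names here are made up for illustration):

```typescript
// Three perfectly valid implementations of "a function that accepts a
// callback". There is no general rule that turns any of them into
// "a function that returns what the callback returns".
function callsNever(f: (p: number) => number): void {
  // maybe stores f for later; never invokes it
}

function callsOnce(f: (p: number) => number): void {
  f(1)
}

function callsManyTimes(f: (p: number) => number): void {
  for (let i = 0; i < 3; i++) f(i)
}

// Counting invocations shows why no single "unwrapping" can exist:
let count = 0
const probe = (p: number) => { count++; return p }
callsNever(probe)     // count is still 0
callsOnce(probe)      // count is now 1
callsManyTimes(probe) // count is now 4
```

Each of these has the exact same type, ((p: number) => number) => void, yet any automatic “conversion” would need to behave completely differently for each one.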

Here’s another example:

function somethingThatCrashes() {
  throw new RandomException("Wow, I really crashed!")
}

What is the return type of this function? Now, this is a tricky question! Javascript (and Java, and Ruby, and Python, and Haskell…) doesn’t have a way to “type” that function correctly. So, let’s “invent” a new type – let’s call it “Crash”. This type could be Crash<RandomException>, meaning that every time I call this function, it’ll crash. Java has something like this, called “checked exceptions”, but they are (in my opinion) poorly implemented (because they’re not officially part of the type). In most functional languages, we prefer to use the Either type (I’ll avoid using the M-word here). So, an “Either” is basically a success or a failure. More on that later.

Infinite loop… types?

So here’s an even crazier example:

function treatAllInputs() {
  while(true) {
    const value = getSomeValue()
    if(value == 'quit') {
      process.exit(0)
    } else {
      callSomething(value)
    }
  }
}

This function never returns. So what’s the “type” of this function? In type theory, there’s what we call the “bottom type”. We all know about Object in Java, or BasicObject in Ruby – a type that all other types inherit from, one that sits at the “top” of all others. The “bottom type” is the opposite: a type that inherits from every other type and can never be instantiated, because it represents things like “this function will never return”. Let’s call it… well, Nothing, because why not?

So, a function like that will never return – so can I actually call that function like const r = treatAllInputs()? No – because it’ll never return. It’s a way for the language to say “this is final, we can’t do anything else after this”. Again, there’s no way to convert a function that returns Nothing into a function that returns anything else.
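Typescript, by the way, actually ships this bottom type, spelled never. A small sketch (function names made up for illustration):

```typescript
// TypeScript's bottom type is spelled `never`.
// A function that always throws can be typed as `never`:
function crash(msg: string): never {
  throw new Error(msg)
}

// function loopForever(): never {
//   while (true) {} // also `never`: it never returns
// }

// Because `never` is the bottom type, it's assignable to every
// other type, so a `never`-returning call can sit where a
// `number` is expected:
function getOrCrash(n: number | undefined): number {
  if (n === undefined) {
    return crash("no value") // `never` is assignable to `number`
  }
  return n
}
```

The compiler also knows that any code after a call to a never-returning function is unreachable, which is exactly the “this is final” idea above.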

So… coming back to the exception example: suppose I don’t want to treat “exceptions”. Suppose I want a better way – a way to make my function understand that I’m doing something that might crash, but I’m not sure if it will; if it does crash, I want to try a different route, and if that also crashes, I’ll inform the user that I can’t continue working. With exceptions, we know how to do this:

function createUser(user: User): void {
  let userId
  try {
    userId = createUserInDB(user)
  } catch(e) {
    try {
      userId = createUserInAPI(user)
    } catch(e) {
      logToUser(`Can't create - error was ${e.message}`)
      return
    }
  }
  logToUser(`User created!`)
  redirectTo(`/users/${userId}`)
}

Now, let’s do something different: let’s use the Either type to signal that each function can either return a number or an error. With this, the “type” of createUserInDB is not Crash<Exception> | number anymore, but Either<Exception, number> (usually, in an “either”, the “right” type is the “right” answer… see what I did there? Yes, whoever thought of this type did too!)

function createUser(user: User): void {
  createUserInDB(user)
    .catch(_ => createUserInAPI(user))
    .then(userId => {
      logToUser(`User created!`)
      redirectTo(`/users/${userId}`)
    })
    .catch(e => logToUser(`Can't create - error was ${e.message}`))
}

So… because, instead of returning the id or crashing everything, we’re returning an object, we can make that object respond to then, which will chain the next function in case of success, while catch will chain the next function passing the last error, making the “failure” a “success” for the next step of the chain. This is a way to encode, in the data we’re returning, whether something was successful or not.
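Here’s a minimal sketch of what such an Either-like type could look like in Typescript, with then/catch chaining like the example above. This is not any specific library’s API, just an illustration:

```typescript
// A minimal Either-like type with `then`/`catch` chaining.
// Hypothetical sketch, not a real library's implementation.
class Either<E, A> {
  private constructor(
    private readonly error: E | null,
    private readonly value: A | null,
    private readonly ok: boolean,
  ) {}

  static right<E, A>(value: A): Either<E, A> {
    return new Either<E, A>(null, value, true)
  }

  static left<E, A>(error: E): Either<E, A> {
    return new Either<E, A>(error, null, false)
  }

  // On success, chain the next step; on failure, keep the error
  then<B>(f: (a: A) => Either<E, B>): Either<E, B> {
    return this.ok ? f(this.value as A) : Either.left<E, B>(this.error as E)
  }

  // On failure, try a recovery step; on success, pass through
  catch(f: (e: E) => Either<E, A>): Either<E, A> {
    return this.ok ? this : f(this.error as E)
  }

  getOrElse(fallback: A): A {
    return this.ok ? (this.value as A) : fallback
  }
}

// Mirroring the createUser flow: DB fails, "API" fallback succeeds
const result = Either.left<string, number>("db down")
  .catch(() => Either.right<string, number>(42))
  .then(id => Either.right<string, string>(`user ${id}`))
```

Notice that the success and the failure paths are both just ordinary return values; no exception machinery is involved at any point.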

But here’s the catch: while I can convert an “exception-based” flow into Either… I can’t do the opposite. Not in most languages, anyway. And the reason is simple: most languages don’t let you throw things that don’t inherit from Exception (or whatever the “top error type” is). Javascript is an exception to this rule (pun intended). So, once you enter the “Either” flow, you’ll always be in the “Either” flow…
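The direction that does work can be sketched with a tiny wrapper. The names here (Result, tryCatch) are made up for illustration:

```typescript
// Wrapping an exception-based flow into an Either-like tagged union.
// Hypothetical sketch; any real library will differ in the details.
type Result<E, A> =
  | { tag: "left"; error: E }
  | { tag: "right"; value: A }

// Converting "may throw" into "returns Result" is mechanical:
function tryCatch<A>(f: () => A): Result<Error, A> {
  try {
    return { tag: "right", value: f() }
  } catch (e) {
    return { tag: "left", error: e instanceof Error ? e : new Error(String(e)) }
  }
}

const ok = tryCatch(() => 42)                         // right(42)
const boom = tryCatch(() => { throw new Error("x") }) // left(Error)
```

The reverse helper would have to throw whatever sits inside a left – and in a language where only Exception subclasses can be thrown, a left holding a plain string (or any other value) has no faithful translation.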

Which cycles back to “async”. We have multiple ways to be async: it can be an event-based “single-thread-ish” flow, we can use threads that run in parallel, we can fork the app… So, let’s look at Ruby: Ruby (and Python, and others) has the “Global Interpreter Lock”, which means that two CPU-bound operations won’t run in parallel (but IO-bound ones will). If you want to actually run two things in parallel (without using Ruby Ractors, whatever they may become in the future), you need to fork.

And, guess what? If you fork… you can’t “join back” your processes. There’s simply no way to do that. Sound familiar?

Now, let’s look at asynchronous programming. It’s actually really weird how the author of the original post defined async, because multiple things can be async. You can “fire-and-forget”, like “update this cache; if it fails, whatever”. You can have a single operation that will be called multiple times, like “while this socket is connected, read messages and do something, but don’t block whatever I’m doing”. And you can have a single operation like “read this file, and when it’s done, I want the result”. Only the last one applies to the whole post about colors… but because I want to bite the bait, I’ll cover that last one too.

In Javascript, you have the Promise type. With that type, we’re describing an “asynchronous thing” happening. It starts in the “pending” state, and it might fail, it might succeed, or it might never transition out of pending.

Which is to say: what is the “type” of this function, if we ignore the async part and try to represent it without Either? Well, let’s imagine a function that queries a database and gets back a numeric ID – the type (using Typescript notation plus Crash and the bottom type) is Nothing | Crash<DatabaseError> | number. Amazing, right? How do we “convert” that to number? We can’t – we need to encode that the function might never return (Nothing) or that it might throw an exception (Crash). So, how should Javascript type that? With something like Promise<number, DatabaseError> (Typescript subtly ignores that the error from a promise is also a type, by the way). So, how do we “extract” the “number part” from the Promise? Well… we can’t.
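A small sketch of that in plain Typescript (fetchUserId is a made-up stand-in for the database query):

```typescript
// Hypothetical stand-in for "query the database, get a numeric id".
function fetchUserId(): Promise<number> {
  return Promise.resolve(42)
}

// There is no `extract(promise): number`. The value is only reachable
// *inside* the chain, and whatever touches it returns a Promise again:
const idPromise: Promise<number> = fetchUserId()
const doubled: Promise<number> = idPromise.then(id => id * 2)

// `await` doesn't escape either: it's only legal inside an `async`
// function, and an `async` function's return type is... a Promise.
async function readIt(): Promise<number> {
  const id = await fetchUserId() // id: number, but only in here
  return id
}
```

Every road out of a Promise leads back into a Promise, which is exactly the point: the type encodes the pending/failed/never cases, and you can’t pretend they don’t exist.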

So why is Ruby “colorless”?

Did you even read anything I wrote? There’s no such thing as “colors”.

What happens is that most languages don’t have, or don’t need, a way to type Nothing. In fact, it’s impossible to type Nothing precisely because of the halting problem, so in some languages (like Scala) typing something as Nothing is more of a “helper” than an actual “proof of correctness”. Since its inception, Javascript decided that the VM can never block – meaning that a callback that is never called, or an asynchronous operation that never finishes (or crashes), can never block further interactions.

And I mean, it’s the right call – JS was originally made to run in browsers, and you really don’t want an HTTP request that never finishes to block you from clicking a button (especially nowadays, when most HTTP calls are trackers, advertising, telemetry, fingerprinting, etc… what a wonderful world we live in…). Ruby, Java, and Python, on the other hand, don’t have this worry, so they can block your main thread. What these languages decided was: if I have an asynchronous operation, I can ask it to block the current thread waiting for a result (that might never come – so Nothing is covered, in the sense that the function may never return), and if the result is an actual error, I’ll throw that error as an exception (so the “failure” case is covered too). Essentially, what Ruby/Python/Java/Scala/Zig do is transform a Promise<number, DatabaseError> into a Crash<DatabaseError> | number | Nothing, meaning that the function now has three possible outcomes: lock forever, crash with an exception, or return a number.

These are design decisions on how the language (and its libraries) can operate. It’s the same as Haskell and input/output: everything that comes from an “external source” (like the user typing on a keyboard, a REST call, a database request, etc) will be “wrapped” in the IO type, and you can’t just “convert” an IO<number> into a number – essentially forcing developers to handle all IO cases separately from the “core” language (which, honestly, also helps a lot with testing functions).
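The idea can even be sketched in Typescript: a wrapper you can transform but never “open” (a very loose toy version, nothing like Haskell’s real IO):

```typescript
// A toy IO-like wrapper: you can describe transformations,
// but the only way to get the value is to run the whole thing.
// Loose sketch of the idea, not Haskell's actual IO.
class IO<A> {
  constructor(private readonly effect: () => A) {}

  map<B>(f: (a: A) => B): IO<B> {
    return new IO(() => f(this.effect()))
  }

  flatMap<B>(f: (a: A) => IO<B>): IO<B> {
    return new IO(() => f(this.effect()).run())
  }

  // The "edge of the world": only the program's entry point
  // is supposed to call this.
  run(): A {
    return this.effect()
  }
}

// A hypothetical "external" source wrapped in IO:
const readLine: IO<string> = new IO(() => "42")
const parsed: IO<number> = readLine.map(s => parseInt(s, 10))
// `parsed` is not a number -- it's a *description* of how to get one.
```

Nothing in the type system lets you turn `parsed` into a plain number without going through run, which is the whole point of the wrapper.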

But again, there’s nothing about “color” or “lack of color” here – it’s just how the language or the library decided to handle things. In Haskell, you can’t “escape” IO, or Future, or any monad, really – so what is Haskell, then? A “true-color 64-bit” language?

Also, I can easily implement “colorful” functions in a “colorless” language, like this:

class Future < Thread
  def initialize(&b)
    # This is needed so an error doesn't cascade to the thread that created it
    self.report_on_exception = false
    super(&b)
  end

  def and_then
    Future.new { yield value }
  end

  def or_else
    Future.new do
      begin
        # `value` joins the thread and re-raises its exception, if any
        value
      rescue StandardError => error
        yield error
      end
    end
  end
end

Here it is – I made a Ruby Future class that behaves in a similar way to the Javascript Promise class. Nothing to do with language features, colors, or any of that – again, it’s just typing.

And so why…?

My personal opinion is simple: Javascript is the first mainstream language to incorporate some Haskell ideas in a way people actually use. This basically means that, well, most people are not aware that these different programming models exist.

There’s also a different reason: async and parallel programming is really hard. People usually think about the problem like “I have an array of elements that is populated from different threads and they sometimes get out of order”, but it’s actually weirder than that – the array might end up with fewer elements than expected, or it might even crash your runtime with memory corruption in extreme cases!

There are ways to handle that, one being: instead of using global, mutable state, we use local, immutable state. But we still have a problem: every time we start a thread, we pay a little overhead to the operating system. We can also “deadlock” if we say “wait for this thread to finish and then do this thing”, or “I want to use this resource, so I’ll lock it, and if it’s already locked, wait for it to be released” – forget to release a lock, and you’ve halted your entire program.

One way to solve that is using what I like to call “good futures” – a Future is basically like the Promise from Javascript, except that it’s fully parallel – you declare a Future and it will immediately start in a different execution context.

And “execution context” is the key here – instead of Threads, a “good” future will either re-use a thread that was already created and now is waiting to do some task, or one that’s waiting for the resolution of something, etc. Let’s imagine this scenario with some code:

// A possible execution map for a code running 3 different threads
// I am at the main thread (Thread 1)
// This next function will start a job in Thread 2
const userF: Future<User> = getUser();
// This next one will not use any thread. Instead, it'll
// chain the `getRoles` into the _result_ of getUser
const rolesF: Future<Roles[]> = userF.then(user => user.getRoles())
// This will use Thread 3
const permissionsF: Future<Permission[]> = requiredPermissionsFor(page)
// This also won't use any thread.
// Instead, it'll await for the first available thread to be
// idle, and it'll use that instead
const titleF: Future<string> = getPersonalizedTitleFor(page)
// Finally, this will "await" until rolesF, permissionsF, and titleF
// finish, and it'll call a function with each "resolved" value
Future.join(
  rolesF, permissionsF, titleF,
  (roles: Roles[], permissions: Permission[], title: string) => {
    if(canAccess(roles, permissions)) {
      response.renderPage(page, {title: title})
    } else {
      response.renderForbiddenPage("You can't access this page")
    }
  }
)

What can happen here? Well, lots of things – userF can use Thread 2, then permissionsF Thread 3, then userF finishes so rolesF uses Thread 2, then it also finishes, and titleF also uses Thread 2, which finishes, then permissionsF finishes and, finally, Future.join will use Thread 2 (or Thread 3). Or maybe, because this is all asynchronous, the runtime might decide that the main thread is also awaiting for something, so it dispatches userF on Thread 2, permissionsF on Thread 3, and titleF on Thread 1 (the main thread!). Or it might dispatch all of these on Thread 2, or make everything sync on Thread 1 because everything returns way too fast… it actually doesn’t matter what the VM will do – what matters is that our code is ready for it to be async: we are not using global variables, we are not “locking” (so no risk of deadlock), and we’re not dispatching threads ourselves (so no risk of resource starvation or thread explosion here).
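In today’s Javascript, by the way, the closest built-in to that hypothetical Future.join is Promise.all. A sketch, with made-up stand-ins for the functions above:

```typescript
// Made-up stand-ins for the hypothetical functions in the example above.
const getRoles = (): Promise<string[]> => Promise.resolve(["admin"])
const getPermissions = (): Promise<string[]> => Promise.resolve(["read", "write"])
const getTitle = (): Promise<string> => Promise.resolve("Dashboard")

// Promise.all plays the role of Future.join: it "awaits" all three
// and hands the resolved values to the continuation.
const page: Promise<string> = Promise.all([getRoles(), getPermissions(), getTitle()])
  .then(([roles, permissions, title]) =>
    roles.includes("admin") && permissions.includes("read")
      ? `rendering "${title}"`
      : "forbidden")
```

The scheduling is different, of course: Node multiplexes all of this on a single thread, while the “good futures” above may spread work over a thread pool – but the shape of the code, and the fact that nothing blocks, is the same.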

But eventually, it needs to “join back” right?

So, here’s where the problem lies. Let’s look at the most common scenario: a web service.

At a macro level, a web service is something that “awaits” an input (that comes from someone requesting a page, for example) and then “responds” with a possible output (maybe a JSON containing user data). This… is not synchronous – it doesn’t make sense for it to be.

Even if what you’re responding with is “synchronous”, it’s… actually not. The network is not synchronous – you send an HTTP header, and then you can send “chunks” of your data little by little. You absolutely don’t need to generate the full JSON on your server and then deliver “everything at once”. It’s just that this is how most web servers run – it’s how Java servlets were implemented, and everybody copied them. Ruby’s “minimal handler” is Rack, and it expects you to return something like [200, {}, ["Hello World"]] – a synchronous response.

So it’s clear that everything eventually needs to be sync again, which causes all sorts of problems. Using the previous example, you can’t just call response.renderPage – you need to “transform” that into a “rack” response, which means going back to synchronous.

When you have some async code that needs to go sync, you need a “parent” context that will “join” the child context. Meaning that you can’t just let the Future scheduler handle things – and even if you do, you will need an “additional” thread to be the parent that “joins” the child one, to keep the illusion of synchronous code. Compare that to the “full async” case where, even if you have three different requests running at the same time, the scheduler can just decide to “park” the handler and call userF on the same thread as the handler, because there’s no actual need to stay on the same thread anyway…

In Node’s express.js server-side library, by the way, there’s no “join back” – every handler receives a “request” and a “response”, and you can send the result through the “response” parameter at any time – to the point that you don’t even need to send a result at all. So it’s fully async – meaning that, yes, it’s possible; it’s just not how most languages implemented their web servers.

Conclusion?

I honestly don’t have one. But if I can invent a conclusion, maybe I can mention this amazing talk by Bret Victor about “the future of programming” where, at the 29:22 mark, right at the end of the video, Bret mentions that an even bigger tragedy than these ideas not being used would be these ideas being forgotten, and that “the real tragedy is that the next generation of programmers grows up never being exposed to these ideas”.

The IO monad was introduced in Haskell 1.3, which was defined in 1996. All this bullsh*t about “colors” applies completely to the IO monad, defined in Haskell 1.3, nineteen years before the “color” post, and twenty-four years before Zig parroted this wrong idea. And we haven’t stopped yet – Ruby’s post is from this year.

This whole thing infuriates me – programming is not “I’ll invent a new term to separate A from B, decide that A is better, and call that the truth”. But the worst part, for me, is not the original post – it’s the ones saying “hell yeah, my language does A, so we’re better!”. This is really bad, and we should feel bad that we’re at this point, really.