Before I even started with Clojure, I was looking at LightTable – the idea of that editor was to support better integration between the code you’re writing and the code that’s running. It was a really good experience, but the main problem I had was that the project was in its very early days, with few plug-ins and poor documentation. I tried to make the parinfer plug-in work in the editor, but it had lots of bugs, so I simply switched back to Atom. At the time, proto-repl was the best package for working with Clojure, and I made some small changes to it (adding callbacks for when a new nREPL connection was made, among other small fixes) to improve my workflow.

Fast-forwarding a little bit, I started my first Clojure job at Nubank. Most people were using IntelliJ, but I felt that by using Atom I had a different approach to problem solving, especially in those hard parts where the fast feedback of “run in the REPL and see the results in your editor, then browse over the keys” could give better insights into what’s happening. I tried to implement some features that proto-repl didn’t have at the time (and that Chlorine still doesn’t have, in some cases) like “automatically add nREPL port” and “watch expressions” (almost the same as watch variables in a debugger). These ended up in a package called clojure-plus, which still exists today.

I also began to experiment with ClojureScript (at the time, only Figwheel was available – figwheel-main didn’t even exist!) and found that existing tools didn’t provide the same power I had with Clojure: no autocomplete, no go-to var definition, and so on. To ease these problems a little, I ended up adding some CLJS support to clojure-plus – when you tried to evaluate a .cljs file, it would try to connect to ClojureScript, reserve a REPL, then evaluate the code over there.

Lots of moving parts

So, why change everything? Why start a new package? Well, for starters, because I wanted to target more environments. proto-repl uses nREPL, and at the time only Clojure had an nREPL implementation. Switching to socket REPL meant changing lots of things, because of the way connection support was implemented in proto-repl. I also wanted better support for ClojureScript, but to implement it I had to work around lots of limitations (or maybe I should call them design decisions) of the node.js nREPL library that was being used.

But I think the most important reason is that I wanted to write the package in ClojureScript. This is not a mere “I don’t like CoffeeScript or JavaScript” – it’s because of conversions. Suppose I want to evaluate something in a .cljs file. The way I was doing it in clojure-plus (and, at the time, the only way to evaluate CLJS) was:

  1. Identify if there’s a CLJS REPL working
  2. If not, start a new nREPL session and redirect stdout, and save this session
  3. Evaluate a Clojure command on this nREPL session (to start piggieback and the REPL), and wait for result
  4. When it returns, evaluate the ClojureScript code. This will pipe a string from the editor to nREPL, to piggieback, to CLJS, to the JS environment, and the result travels back through CLJS, piggieback, and nREPL to the editor
  5. Convert this result to a node.js data structure
  6. Render it as if it was a Clojure data structure
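As a rough illustration, steps 2–4 above boil down to a short sequence of nREPL messages (the session id is made up, and the piggieback invocation shown is piggieback’s documented usage with a Node repl-env – clojure-plus’s actual code differed):

```clojure
;; Sketch of the nREPL traffic behind steps 2-4 above.
;; Session id and repl-env are illustrative, not clojure-plus's real code.
(def messages
  [;; step 2: clone a fresh session and keep its id
   {:op "clone"}
   ;; step 3: upgrade that session to a CLJS REPL via piggieback
   {:op "eval" :session "a1b2"
    :code "(cider.piggieback/cljs-repl (cljs.repl.node/repl-env))"}
   ;; step 4: from now on, evals in this session run in the JS environment
   {:op "eval" :session "a1b2" :code "(js/console.log \"hi\")"}])
```

Every hop in that pipeline is a string-in, string-out conversion, which is where the failure modes below come from.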

Lots of things could fail (and did fail) in this scenario: the command sent to Clojure could fail (requiring session cleanup); the piggieback command could time out (requiring a check that the REPL was still active and working); the conversion of the result to node.js could fail (requiring a decision on how to interpret it); the renderer could be sent a data structure it didn’t understand. There was also the question of “state”: proto-repl kept lots of state and registered lots of callbacks, the nREPL library kept its own state and callbacks, and by definition clojure-plus also had to keep state, and this became horribly complex really fast…

Also, let’s not forget that, with ClojureScript, I can hot-reload the plug-in. This is simply not possible with CoffeeScript or JavaScript – I had to reload the editor every time I made a change, and TDD support on Atom is terrible, to say the least.

Migration from CoffeeScript

What I did was try to port some of the CoffeeScript files to CLJS, with mixed results. Lots of things were missing: sometimes the REPL didn’t connect, sometimes things stopped working entirely, there was little support for a “reload” workflow, sometimes compilation became so messed up that it was impossible to continue (which meant restarting the compiler, and restarting Atom, sometimes multiple times)… it became tiresome very fast (and I even wrote about it, together with other issues).

Then came shadow-cljs. Most of the problems simply disappeared when I migrated to shadow, and again, I wrote about it. I began to experiment a little, and found that it would be easier to change everything to ClojureScript instead of working with a mix of CoffeeScript, JS, and CLJS. The experience was simply awesome (if you compare the two posts about Atom packages with CLJS, you’ll see that the second one takes half the words to explain how to compile for Atom, then keeps going on about how to work around Atom limitations – there are no more CLJS ones!).

Socket REPL, and Microsoft buys GitHub

I began an experiment to connect to a socket REPL purely from node.js using ClojureScript (no dependencies at all). I started with the simplest runtime I could think of: Lumo. It was a very bad idea, because there are lots of things that Lumo does that other environments simply do not, like “pretty-print eval results” by default… I went down the rabbit hole, and found that I still didn’t have a working prototype after more than a month. Note to self and future programmers: beware of big design up front!

I wrote all this code in a separate repository called REPL-Tooling, because Microsoft had just bought GitHub and I feared Atom would become unmaintained, so the intention was to make a foundation that other editors could build on. Again, I suffered from BDUF – while imagining how editors should connect, I wasn’t really testing whether that code would work in a real context. In the end, I re-used some parts of proto-repl (like block, top-block, selection, and namespace detection), moved lots of things to a re-designed version of my package (which I called clojure-plus-reloaded), and began to wire up some socket REPL code with UNREPL.

When I was able to evaluate simple things, see exceptions, wire up autocomplete (with Compliment) and refresh (with clojure.tools.namespace), I wanted to see if I could make this new package understand ClojureScript. As I was only using shadow-cljs (and still am!), I added support for it only (and it’s still the only compiler that works well – no Figwheel or Weasel, yet). I also used the package to develop itself (and I still do this today), which was a great way to test that everything was working properly: any trouble and I would know immediately, because it would blow up in my face. Also, just for fun, I made it possible to connect to Lumo and Planck too.

One day, I was working with Hubot (a node.js chatbot library) and ClojureScript at my job, auto-completing ClojureScript code (with some self-baked CLJS autocomplete), showing documentation, and evaluating code – and I only later realized that I was working in ClojureScript! That made me ask on Clojurians for a better name for the package, and on 2018-10-13 I published the first version of Chlorine.

Different workflows, same plug-in

Two months after the first release, in a discussion about editors on the #off-topic channel of the Clojurians slack, Sean Corfield and I started to add more features to Chlorine. We found issues, made PRs, and added REBL support, and as he’s a fan of “spartan tooling” (no external dependencies at all), I was able to test the plug-in in an environment that’s completely different from mine (I’m a fan of Compliment, Orchard, clojure.tools.namespace’s refresh, and so on). To accommodate different workflows, I added a way to configure Chlorine with custom code from Atom’s init.coffee (or init.js) file. He also added lots of different REBL support using this extension method, so I’m considering whether it would be better to simply remove the “old versions” of REBL support from Chlorine.

Also, after lots of conversations with multiple people, I decided on some non-goals: for example, in the beginning I wondered if it would be possible to inject libraries like refactor-nrepl, Orchard, and Compliment when connecting to the REPL. In the end, I decided against it – first because it’s Clojure-only, and second (and most important) because I found out that many people use Chlorine connected to production environments as a kind of “remote debug tool”, so injecting dependencies there is a big no-go. So, no magic, telemetry, or “code injection” of any kind is going to happen in Chlorine – ever. It will also have no “jack-in” process, for the same reason among others.

More plug-ins, and Clojure flavors

Over time, I migrated almost everything I had in Chlorine to REPL-Tooling, and then tried to integrate with NeoVim, with mixed results.

Then I was invited by Jacek Schae to speak on the ClojureScript podcast, and after that Michiel Borkent (AKA Borkdude) tweeted at me to let me know that Babashka had a socket REPL. After collaboration on both sides, we were able to add support for it to Chlorine – I then extended the same code I wrote for Babashka to add support for Joker, ClojureCLR, and Arcadia, with help from Sogaiu, and later Clojerl. While doing these integrations I migrated almost everything away from core.async – it simply does not work with ClojureScript (and by that, I mean it: there’s no way to safely convert from callbacks and promises to core.async channels, and the newest addition to support promises is dangerous and can break the node runtime, unless they also change their underlying state machine!), so I ended up with an atom and add-watch approach that is simpler, easier to test, and easier to extend.
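
The atom and add-watch pattern can be sketched like this (an illustrative sketch, not REPL-Tooling’s actual code – the names `on-result` and `deliver-result!` are mine): pending results live in an atom, and a watch fires a callback when the connection code swaps a result in.

```clojure
;; Illustrative sketch of the atom + add-watch approach.
;; All names here are hypothetical, not REPL-Tooling's real API.
(def results (atom {}))

(defn on-result
  "Registers a callback to fire once a result for `id` arrives."
  [id callback]
  (add-watch results id
             (fn [_key _ref _old new-state]
               (when-let [res (get new-state id)]
                 ;; one-shot: unregister before delivering
                 (remove-watch results id)
                 (callback res)))))

(defn deliver-result!
  "Called by the connection code when the REPL answers."
  [id res]
  (swap! results assoc id res))
```

Because watches fire synchronously on `swap!`, this is trivially testable without channels or go blocks, and plain functions can extend it.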

When I was able to reliably detect top-blocks, blocks, and namespaces, I also tried to integrate with VSCode, with better results than the NeoVim plug-in, but still not as good as I wanted: there are lots of problems with that editor’s API, and as more and more editors adopt VSCode’s plug-in infrastructure (to the point that Eclipse is making a separate marketplace for plug-ins), this is probably something I’ll have to live with. To be honest, I don’t expect to take Calva’s place for Clojure, but I expect that for non-Clojure targets (or even Clojure with REBL) it will be a better fit. Peter Strömberg (AKA pez) is also helping me with non-trivial parts of the plug-in, like indentation, the Clojure grammar (var names and such), and playing nice with other VSCode packages.

Non-Clojure targets

And lastly, after talking with Bozhidar Batsov, I made some experiments with nREPL. To my surprise, some targets like HyLang and Racket worked out of the box, so I decided to include nREPL support in Chlorine. The interesting part is that it helped me get rid of some preconceptions I had about nREPL: in an older post I said:

…performance is a nREPL issue – the protocol is simply too complex, adds a lot of overhead…

and this is simply not true. nREPL is quite simple, and when I decided to implement it, I wrote a custom bencode encoder and decoder, and performance is even better than with my unrepl or pure socket REPL approaches. The performance issues I was having were probably due to a slow node.js nREPL package, or a slower bencode implementation. The fact is that it is currently faster to evaluate code over nREPL than over the socket REPL.
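
To give an idea of how small bencode really is, here is a minimal encoder sketch (my own illustration, not Chlorine’s actual implementation – and it uses JVM `.getBytes`, whereas Chlorine’s runs on node.js): integers become `i<n>e`, strings are byte-length-prefixed, and lists and sorted-key dictionaries just wrap their encoded contents.

```clojure
;; Minimal bencode encoder sketch (illustrative, not Chlorine's code).
(defn encode [x]
  (cond
    (integer? x)    (str "i" x "e")
    (keyword? x)    (encode (name x))                      ; :op -> "op"
    (string? x)     (str (count (.getBytes ^String x "UTF-8")) ":" x)
    (map? x)        (str "d"                               ; keys must be sorted
                         (->> (sort-by (comp name key) x)
                              (mapcat (fn [[k v]] [(encode k) (encode v)]))
                              (apply str))
                         "e")
    (sequential? x) (str "l" (apply str (map encode x)) "e")))
```

An nREPL message like `{:op "eval" :code "(+ 1 2)"}` encodes to `d4:code7:(+ 1 2)2:op4:evale`; the decoder is just the inverse walk over the stream.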

The most interesting part, for me, is that I was already thinking about changing the way I send results to the Atom editor. I want to avoid blowing up the editor when a command returns big results (as it does now when you evaluate a big list in ClojureScript, for example). Currently, unrepl does this job, but it’s a little awkward to work with (you have to keep evaluating code in the REPL to get the full result). nREPL will be a great playground to test these new ideas!

Also, the same guidelines apply here: it’ll never inject any code, it’ll not support jack-in, and it’ll only extend the built-in operations (autocomplete, documentation, etc.) if additional libraries are already on the classpath!

