I’ve been deep into some personal projects over the last few months, so this post is an update on everything I’m doing.

First things first: I am still working on a new version of Chlorine, but I decided to change its name. So far, the name “Alya” is winning, but I don’t know if I like the sound of it – it’s still growing on me, and I don’t want to pick a name I will regret later.

The reason I want to rename the project is that this new version will no longer use the socket REPL – it will use nREPL, the same protocol everybody else is using. From a historical perspective, it’s also interesting to keep the old plugin easily searchable – it’s basically “the path not taken”, the “ideas that were”; and finally, if somebody wants to explore the same path in the future, they can look at my code and maybe get some ideas, or even decide to maintain a fork of it.

What will this new version of Chlorine be?

This new version will basically embrace the nREPL protocol and, for ClojureScript, the Shadow Remote API. From a technical perspective, this means I want to integrate some middlewares better, maybe add some kind of FlowStorm support, and also have better support for ClojureScript. I’m also working on better markdown support, meaning that when you ask for the documentation and source of a var, you will get a syntax-highlighted view with the same colors you have in Pulsar.
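To give an idea of what the protocol switch means at the wire level: nREPL messages are bencoded dictionaries sent over a socket, with standard keys like “op”, “code” and “id”. Here is a minimal, hypothetical sketch (in Ruby, since a Ruby nREPL server comes up later in this post) of encoding an “eval” message – the `bencode` helper and the message id are illustrative, not Chlorine’s actual code, and a real client would also read and decode the framed responses:

```ruby
# Minimal bencode encoder, enough to frame an nREPL "eval" message.
# (Sketch only: a real nREPL client also decodes the server's replies.)
def bencode(value)
  case value
  when Integer then "i#{value}e"
  when String  then "#{value.bytesize}:#{value}"
  when Hash
    # bencode dictionaries must have sorted keys
    "d" + value.sort.map { |k, v| bencode(k.to_s) + bencode(v) }.join + "e"
  when Array
    "l" + value.map { |v| bencode(v) }.join + "e"
  end
end

msg = bencode("op" => "eval", "code" => "(+ 1 2)", "id" => "1")
puts msg  # => "d4:code7:(+ 1 2)2:id1:12:op4:evale"
```

The server would answer with bencoded dictionaries of its own, carrying the same “id” plus keys like “value” and “status”.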

I’m also optimizing the rendering process – basically, I am dropping Reagent (expect a new blog post about this in the future). Reagent was very good when I had to update multiple things at the same time, which was the reality with the Socket REPL and UNREPL. The way things used to work is that we had “elisions” – when Chlorine got a big collection, it would render only the first ten elements, then show a ... link that could be clicked to ask for more elements.

This is not a reality anymore – I am not doing elisions, and even if I do want to render things lazily in the future, it won’t be that complicated, so I have no reason to update multiple places at the same time. Also, Reagent causes some problems when you have to update components from inside themselves, which I had to do a lot in the past; and in some places (lots of places) I had to fight against the React model (when it basically got into an infinite loop, updating the UI, reverting back to the old state, and updating again). I also had to fight performance problems, when either the UI updated more than I wanted or, worse yet, when a very small part of it changed but the code had to recalculate a much bigger element just to decide that the change was tiny.

But the worst part was how hard it was to interface with plain HTML libraries. With React you can’t change the DOM as much as you want, so you have to rely either on useRef or on some unsafe way of setting the inner HTML… and that got tiring really fast – debugging React stuff that doesn’t re-render, or re-renders too much, is tedious; now imagine that combined with manual DOM manipulation that React doesn’t really like me doing…

Chlorine for… Ruby?

The next thing, which I am quite excited to present, is Lazuli – a Chlorine port for Ruby. Sounds weird? Well, it is – basically, I implemented an nREPL server for Ruby, but instead of being a “normal” nREPL server, it keeps some “bindings” to local functions and methods. This means that when you start an nREPL server from Ruby and interact with your app, you’ll get some “local bindings” set for you. Lazuli can then read these bindings and evaluate code for you inside them – meaning, for example, that if you’re developing a new action in Rails, you just need to write the “bare minimum” to render a text or a simple HTML thing, and Lazuli will keep a binding that you can inspect, with access even to “magical” local variables like params, included modules, helpers, etc. It’s actually quite amazing, and I’ve been “battle-testing” Lazuli for a while now – it works quite well, even with non-trivial code like, for example, Discourse’s codebase.
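To illustrate how this kind of binding capture can work in Ruby – this is a hypothetical sketch, not Lazuli’s actual implementation, and the class and variable names are made up – a TracePoint can record the Binding of a method as it runs, and code can later be evaluated inside that binding as if you were paused in the method:

```ruby
# Hypothetical sketch: capture a method's binding so code can later be
# evaluated in that method's local context (names are illustrative).
captured = {}

trace = TracePoint.new(:return) do |tp|
  # Store the most recent binding per "Class#method" key.
  captured["#{tp.defined_class}##{tp.method_id}"] = tp.binding
end

class Greeter
  def greet(name)
    message = "Hello, #{name}!"
    message
  end
end

# Tracing is only enabled while the block runs.
trace.enable { Greeter.new.greet("world") }

# Evaluate code "inside" the captured call, with its locals available:
b = captured["Greeter#greet"]
b.local_variable_get(:name)  # the `name` argument, "world"
b.eval("message.upcase")     # the `message` local, "HELLO, WORLD!"
```

Because the Binding keeps the whole frame alive, a REPL connected afterwards can read arguments and locals, call helper methods, and generally poke at the state the method saw – which is the trick that makes the Rails workflow described above possible.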

Lazuli also has a VSCode version, which is still in “beta”. I’m unsure if I’ll publish it to the “official” VSCode marketplace, but I’ll publish it to Open VSX when I feel confident that the port doesn’t have too many weird bugs.

Pulsar

I’m still pushing forward to stabilize Pulsar on the latest Electron versions. So far, I’ve found some weirdness – some callbacks crash, some errors happen in some packages, and in general things are not 100% stable yet – but it’s progressing.

One thing that I absolutely don’t want is to break existing plug-ins that already work in stable Pulsar and are still used to this day – terminals and Hydrogen. Unfortunately, it seems that Hydrogen’s authors don’t want to publish the plug-in to Pulsar’s repository, which probably means they won’t accept the huge library-upgrading PR that I would have to send. So I am working on a port called Hydron, which is basically Hydrogen pre-compiled to JS (I don’t really like TypeScript), upgraded to the latest npm dependency versions (so packages won’t fail to install, nor will it need npm’s --legacy option), and finally, with upgraded versions of ZeroMQ and JMP (so that it doesn’t crash on the latest Electron version). So far, it seems to be working, but again, I also need to publish it to Pulsar’s repository.

Local AI

It’s inevitable that we’ll be using AI more and more, and Pulsar needs some way to integrate with existing AIs. I am also working on some plug-ins for Pulsar, maybe things that make the editor feel more “modern”, like LSP support, some UI elements, and other stuff. These are all present in the Star Ring metapackage, and one new experiment is called Minivac (named after the “Multivac” super-computer from Isaac Asimov’s stories), where I am trying to integrate a local AI into my current workflow:

Example of Minivac running in Pulsar

Again, it’s not perfect, but it kinda works. I am… honestly divided on integrating ChatGPT or Copilot into Pulsar, because, as I said before, I don’t agree with their market practices… but still, maybe a local AI can be less aggressive, in the sense that it’s publicly available, the code never leaves my machine, and I’m not cooperating, for free, to make an ultra-filthy-rich person even more filthy rich (well… actually… considering that if I want to use any API for these “code AIs” I have to pay, I’d actually be paying them so I can make their product better with my data, which is even worse)… sorry for the rant; maybe you can file that under “old man yells at clouds”.

The future

So, I’ll keep working on Lazuli for now – I have some interesting ideas that might make for a better Ruby experience. I also wanted to try a “JavaScript version” of it, one where I could use the Chrome DevTools API to simulate bindings in JavaScript, and maybe get some interesting results (except that I would probably have to use the Shadow Remote API instead of implementing an nREPL server, otherwise I can’t run a REPL in the browser) – but I don’t know; it seems like a lot of work, honestly, and I don’t do much front-end (and when I do, outside of work, it’s all ClojureScript, which already has the experience I want).

For the newer Chlorine/Alya, I am still thinking of a way to integrate some Visual Tools into it. Now that I don’t have Reagent anymore, things are easier – it’s all DOM APIs all the way down.

Finally, Lazuli will probably be my primary focus for now – I am working with Ruby, and I do need better tools for the language. I also invite people to try Lazuli – it’s very usable, and you might be surprised at how well such a new project works!