I have to be honest: I feel like I'm taking crazy pills lately.
A while back, there was a growing number of posts praising AI for coding; then it grew, and grew, and now almost every post that reaches me on LinkedIn is about AI and coding, how good it is, how the "game has changed", etc, etc, etc.
But here’s the catch: every bad practice that we fought against, over and over, for literal decades is now being praised because “AI is doing it”.
Every. Little. Thing.
From the most obvious to the least obvious, against all good judgment, against academic research showing that AI usage decreases collaboration between coworkers, that it makes programmers slower while they believe they are faster, that the code quality is lower, and that it makes people work longer hours, it seems that all rules are off when it comes to AI.
Lines of Code
We all know that lines of code are a poor measure of productivity. Better metrics exist, like "features shipped", "number of bugs introduced" (fewer is better), or "number of bugs fixed", and in fact fewer lines of code is usually better, because they offer fewer places where a bug can manifest. This has been studied in the past, and there are stories about managers asking developers to report how many lines they wrote, some with interesting results. Unfortunately, that era is over; plenty of LinkedIn lunatics claim they are producing 35k, 40k, even 60k lines of code per week, with some arguing that they are producing a million lines of code.
A million. 1/40th of the Linux kernel.
This probably isn’t true: nobody is writing 1/40th of the Linux kernel per week, hopefully. But if they are… how are we going to maintain this? Are we supposed to believe that AI will magically consume 1M lines of code, then spit out another million lines on top of it, every week? That it’ll somehow “fix bugs” that appear in this massive codebase? And that this is the “new normal”?
But AI subverts this. AI-written code can be measured in lines of code; somehow that’s a good thing, and it’s desirable… for some reason. After decades of fighting against this horrible, horrible metric, we somehow surrendered for… reasons? Why?
Segregation of Duties
A long time ago, I learned about Segregation of Duties (SoD, for short). It’s basically the idea that for complex, dangerous, or possibly fraudulent operations, a single person can’t do multiple jobs, because combining them creates a conflict of interest. For example, someone may be responsible for paying for a purchase in a company, but can’t be responsible for choosing the vendor – otherwise, a single person could select a relative’s company as the vendor, approve the purchase, and pay for it, keeping a percentage of that purchase for themselves.
In software development, that usually translates to someone who decides how the product will work (a “Product Manager” or “Project Manager”), someone who works on guaranteeing the quality of the product (a QA team, for example), and others who build the product itself (Developers). Even then, we usually separate the one who writes the code from the ones who review it – either via Pull Requests, or via Pair Programming – to avoid a single person knowing everything about the product, and also to avoid “bad engineering” crawling into a part of the program (wrong levels of abstraction, code duplication, etc). We also usually follow an automated test discipline that tries to avoid tying a test to a specific fragment of code (no dependency on internal data and structure in tests, for example), and we try to see a test fail before actually writing the code (part of the Test-Driven Development discipline) – otherwise we risk testing the wrong thing.
Again, AI subverts this. We are entering the phase where there are “AI Code Reviewers”, which essentially means a single “tool” writes the code, tests it, and reviews it. If this isn’t an SoD violation, I don’t know what is.
And yet, because it’s AI, it’s magically “fine”.
Copyright
I am not going to explain myself much on this one – I’ll just let you know that if you – yes, you, reading this blog (unless you’re a billionaire, in which case disregard everything, because laws and rules somehow don’t apply to you) – tried to do what Meta did and downloaded literally millions of books, even without ever opening the downloaded files, you would be in jail. Even when things are completely legal – like emulators, for example – some BigCorp can issue weird and misleading claims about them and somehow take them down – usually because developers and creators can’t fight back.
Again, somehow this is fine – most developers I talked to about this issue are completely fine with AI companies committing blatant copyright violations, while we can’t even use 5 seconds of a song in a video without being hit with a strike.
Security
What happens when a tool, over and over again, is vulnerable to command injection, and discloses private information with very simple attacks? We probably won’t use that tool anymore.
A developer? They would be fired.
But with AI? Well, it doesn’t matter how insecure your app is – if it becomes viral, without actually offering anything new, without having any known usage, even when proven that it was a scam all along, it’ll probably be bought by some big corporation.
This is… infuriating. Years upon years of trying to secure our apps (nowadays writing dynamic web pages is a nightmare because we need to be aware of cross-site scripting protections, non-inline JavaScript, and a couple dozen other forbidden techniques that make frontend even more of a cursed incantation than it was years ago), just for AI to be given a free pass to be as insecure as possible, in multiple ways.
I mean – SQL injection is considered a solved problem, and to this day we don’t have a “sane way” of avoiding prompt injection!
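The contrast is worth spelling out. SQL injection is “solved” because database drivers let you pass code and data through separate channels: a parameterized query treats user input as pure data, no matter what it contains. An LLM prompt has no such boundary, which is exactly why prompt injection keeps resisting a clean fix. A minimal sketch with Python’s built-in sqlite3 module (the table and payload are just illustrative):

```python
import sqlite3

# Throwaway in-memory database with one user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

malicious = "' OR '1'='1"  # classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query,
# turning the WHERE clause into a tautology that matches every row.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{malicious}'"
).fetchall()
print(leaked)  # [('s3cr3t',)] -- the secret leaks

# Solved: the ? placeholder sends the payload as plain data,
# so the query looks for a user literally named "' OR '1'='1".
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
print(safe)  # [] -- nothing leaks
```

With prompts, there is no equivalent of the `?` placeholder: instructions and untrusted input travel through the same channel of tokens.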
Energy
Some years ago, I was discussing water shortages with a friend – how the whole thing seemed exaggerated, in a sense. My argument was that, essentially, we know how to purify and desalinate water – we have the tools for both, meaning we could get potable water for the whole world.
The counterpoint he gave me was that desalination is very expensive and uses a lot of energy. My counter was to say “so, the problem is money – we just won’t spend the money to get drinkable water for the whole world”.
Years later, Bitcoin happened – a hugely inefficient (by design) currency that uses a lot of energy to “mine” (to discover new hashes, which is basically the currency) and, more than that, a lot of energy just to keep running.
And now we have AI.
And the interesting part is that the arguments against desalination didn’t change – which sounds, to my ears, like drinkable water is not important. What is really important is that we have a video generator so that a country’s leader can make videos of himself dumping shit on his population. Or plagiarizing a famous Japanese artist who said this kind of technology is an insult to life itself, using his drawing style against his will in the most disgusting ways.
Apparently, these are more important than having drinkable water for the whole world. Generating memes. Fake videos. Even pedophilia, it seems – because it’s AI.
Humanity
When I was in school, we studied something called “dehumanization”. It’s the process of removing “human things” from everyday life. The examples were very “mild” by today’s standards: pay before you eat (the seller’s pleasure comes before yours); eat facing a wall (nowhere to look, eating like animals) or standing up (so you’re not comfortable); absence of sidewalks (humans are not welcome – you need a car), etc.
Nowadays, people write things for other humans with the help of AI. WordPress has a “Write with AI” button; LinkedIn does, too.
Why?
Why would I read an article about something that was generated via an artificial intelligence, when I can, myself, go to the artificial intelligence and ask about the subject?
Why would I want to expand my 20-word e-mail with AI into a 200+ word one, just for the other person to ask AI to summarize it back into a 20-word text?
Why would I want AI to write a message for a dear friend? To play music? To draw something? Even code?
Let me tell you: I want to play the piano. I would love an AI that would help me translate a song into a MIDI so I could use some software to help me learn the piano; or one that would “simplify” a sheet music so it becomes easier for me to play, until I learn the complex version; or one that would discover the best finger positioning for a piece.
AI does none of these things. And I’m not saying it doesn’t do them well – I’m literally saying it does nothing useful. I tried to generate a MIDI of a song, and it generated random noises that were completely different from the original music. I tried to simplify a MIDI, and it either removed all the notes, or kept them but with smaller intervals.
It feels completely useless at the boring parts of life, but insanely competent at the fun parts – the ones that give us happiness, meaning, joy.
Why was dehumanization such a bad word three years ago, and now… people are happy with it?
And this is not even the worst of all of this – it seems that AI is a HUGE risk… if it doesn’t kill people by itself.
Thou shouldst not have been old till thou hadst been wise.
Like Lear, scolded by his Fool, we are not ready for this tool. We became “old” before we became “wise”. Every stupid, racist, idiotic thing we did, that we never solved, that we somehow never addressed and buried thinking it would go away… resurfaced.
And now, on steroids – at the speed of light (or energy, or “tokens” if you may), global, privatized, monetized.
And no one can stop it. For… reasons.
But mostly, “convenience”. As always.