I’m sceptical (understatement) of using language models for programming. But the amount of posts coming from people I respect claiming they are a game changer made me feel pressured to find a way to use them that doesn’t interfere with my thinking.
Those who regularly use agents to transform a natural language specification into an implementation with as little intervention as possible insist that you are still in the loop: thinking by writing and refining said spec, and by reading and validating the result, wink wink. I reject that premise. I believe it’s too early to see the effects, but in the long run it will be obvious that thinking at the level of a programming language is different, way more detailed, than thinking just in a natural language. I’ve held this idea strongly since the beginning of my career, when I was the one implementing, and fixing by implementing, the awful designs of the architects who were in charge, earning more.
So I’ve been writing some stupid functions to call Git from Neovim and drop the output in special buffers (because I have opinions and none of the existing solutions satisfy me). At the very beginning I had an implementation with all the functionality I wanted, but lacking the boilerplate necessary to make it usable. In particular there were two TODO comments: “handle errors” and “make async”. As I knew replacing them with an actual implementation wouldn’t spark joy, I decided to ask Claude to transform the comments without making any other kind of change. Unsurprisingly it did; after all, folks insist writing boilerplate is where LLMs shine. It wasn’t correct, though: I had to manually move code around (specifically, some functions that must be called in the main thread), but whatever. The scope was so tiny that I knew what to do. There was no big unfamiliar blob of code I had to make sense of. It was just part of my thinking flow. And I believe that if it had come up with something clever I’d never seen before, I would have been baffled by it.
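The real code is Neovim Lua, but the shape of the change is easy to sketch in Python (names and details here are made up for illustration, not my actual functions): run the command on a worker thread, catch failures, and schedule the callback back onto the one thread allowed to touch the buffer, which is exactly the part that had to be moved around by hand.

```python
import queue
import subprocess
import threading

# Stand-in for Neovim's main event loop: callbacks queued here run on
# the main thread, the only one allowed to touch buffers.
main_loop = queue.Queue()

def git_async(args, on_done):
    """Run a git command on a worker thread ("make async"), catch
    failures ("handle errors"), and schedule the callback back onto
    the main thread."""
    def worker():
        try:
            proc = subprocess.run(["git", *args],
                                  capture_output=True, text=True, check=True)
            main_loop.put(lambda: on_done(proc.stdout, None))
        except (subprocess.CalledProcessError, OSError) as err:
            main_loop.put(lambda: on_done(None, err))
    threading.Thread(target=worker, daemon=True).start()

# Main thread: fire the call, then run whatever gets scheduled
# (Neovim does the equivalent for you via vim.schedule()).
lines = []
git_async(["--version"], lambda out, err: lines.append(out or str(err)))
main_loop.get(timeout=5)()
```

In Neovim the same split falls out of `vim.system()` for the subprocess and `vim.schedule()` for hopping back to the main loop before touching any buffer.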
Since I read it, I’ve been applying what McConnell calls the pseudocode programming process (PPP) whenever something is too hard to write in one go. It’s described in section 9.2 of Code Complete, but it can be summarised as implementing something by first writing it in high-level pseudocode, then refining that until you end up with an actual implementation. Whatever remains of the pseudocode will most likely be kept as comments.
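A toy illustration of the process (mine, not McConnell’s, and in Python just for brevity): each comment below started life as a line of pseudocode, got refined into the code under it, and survives as documentation.

```python
def most_common_word(text):
    """Return the most frequent word in text, or None if there is none."""
    # normalise: lowercase, split on whitespace
    words = text.lower().split()
    # count how many times each word appears
    counts = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1
    # pick the word with the highest count
    return max(counts, key=counts.get) if counts else None
```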
It could be tempting to stop once the pseudocode can’t be refined any further and ask a language model to proceed with the implementation, but I like writing in a programming language. I like thinking in a programming language. That’s why I believe this inversion of the flow matters. Instead of asking for as much as possible of an implementation that I’ll have to check later, wink wink, I drive it and do all the thinking. The most important thinking, anyway. The model fills in the gaps I’m consciously leaving, and that most likely I’ll have to fix anyway.
To be clear, I’m not interested in productivity gains. I didn’t get into programming because I wanted to be productive. I was drawn to it because I was fascinated by computers. I couldn’t stop thinking about computers. And since I started to program I’ve been constantly thinking about programming languages and their expressiveness. So I don’t care at all that what I just described is slower than “agentic coding”.
I wrote a version of the above as a Mastodon thread. And then I read Ironies of Automation by Lisanne Bainbridge, a paper that, quoting its abstract, “discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator”. I got several ideas from it, but the most important is that regardless of how automation is performed, whatever level of “intelligence” an automated system has, it’s rarely designed to collaborate with the human who formerly performed the work. On the contrary, it’s designed to replace them. Which is an irony (as defined in the paper) because, among other things, “by taking away the easy parts of his task, automation can make the difficult parts of the human operator’s task more difficult.” Even more, discussions of automation seldom “consider the integration of man and computer, nor how to maintain the effectiveness of the human operator by supporting his skills and motivation. There will always be a substantial human involvement with automated systems, because criteria other than efficiency are involved”.
We know language models have been touted as tools that enable computers to replace human intellectual labor. And here we have a 40-year-old paper saying that this has never actually worked. I don’t believe language models, or any other form of artificial “intelligence”, are here to stay, but if they are, I hope it’s as a form of human-computer collaboration. As my skills and motivation lie in writing code, I’ll keep stubbornly rejecting any trend that clashes with them. And as I keep feeling pressured to use these things, I’ll try to come up with more silly ways of using them that at least don’t erase my motivation.
So will I eagerly apply what I just described? I surely will, whenever copying and pasting my code into a chatbot window is cheaper, in motivation terms, than filling in the gaps myself. The way to smooth the process would be to run a local model, or pay for a service. For now I don’t want to do either, so who knows. At least I feel a little less dumb now.