AI is Making Developers Dumb

Feb 24, 2025

People often talk about the productivity gains that come from LLMs, and it would be disingenuous of me to dismiss them. It’s true: you can be productive with an LLM-assisted workflow, but that same workflow could also be making you dumber.

There’s a reason I say this. Over time, you develop a reliance on LLM tools, to the point where it starts to become hard for you to work without one.

I got into software engineering because I love building things and figuring out how stuff works. That means that I enjoy partaking in the laborious process of pressing buttons on my keyboard to form blocks of code.

LLM-assisted workflows take this away. Instead of the satisfaction of figuring out a problem by hand, you simply ask the LLM to take a guess.

Instead of understanding why things work in the way they do, you become dependent on an assistant to tell you what you should do.

Some people might not enjoy writing their own code. If that’s the case, as harsh as it may seem, I would say they’re trying to work in a field that isn’t for them. Maybe you’re just here for the money? Fair enough. That happens in every profession, and it generally shows through your enthusiasm and demeanour.

The best engineers I’ve met are people who will spend hours at the weekend building their own version of a tool or piece of software. Heck, that’s where innovation and advancement come from. You can’t find performance improvements without a good understanding of how a system works; otherwise you’re just shooting in the dark.

There is a concept called “Copilot Lag”. It refers to a state where, after each action, an engineer pauses and waits for something to prompt them on what to do next. There is no self-sufficiency, just the act of waiting for an AI to tell them what should come next. It’s akin to where a junior in the field might start: relying on more senior colleagues to guide them and show them how to proceed.

It’s a real thing.

Eons ago, I used to use GitHub Copilot in VS Code. When I look back on that period now, I’m amazed I didn’t do more damage to my knowledge retention.

Over time, I started to forget basic foundational elements of the languages I worked with. I started to forget parts of the syntax and how basic statements are used. It’s pretty embarrassing to look back and think about how I was eroding the knowledge I had gathered, just because I wanted a short-term speed increase.

That’s the reality of what happens when you use Copilot for a year. You start to forget things, because you no longer have to think about what you’re doing in the way you would when figuring out how to solve a problem yourself.

It was actually a video by ThePrimeagen that made me realise this and confront reality. He had a clip from one of his streams where he was talking about Copilot Lag. What a wake-up call that was!

I stopped using LLM assistants for coding after that, and I’m glad I did.

To give an example, compilers are an area that I find super interesting. At the time, I had actually tried to work through Thorsten Ball’s Writing An Interpreter In Go. But it was completely pointless. Instead of me learning about the topics and techniques in the book, Copilot was just outputting the code for me. Sure, it might feel cool that you just wrote a parser, but could you do it again if you turned Copilot off? Probably not. You also lose the chance to learn about concepts like memory management or data-oriented design, because Copilot just gives you some code that it thinks might work, instead of you researching topics and understanding the nuances.

That actually leads into another angle: research. This time with a more positive spin on AI.

It’s true, LLMs are useful. They’re like a search engine. We used to use Stack Overflow to get help with a programming problem; since LLMs are trained on all of that data, they can be effective tools for learning more about a concept. But only if you use them with an inquisitive mindset and don’t blindly trust their output.

They’re notorious for making crap up because, well, that’s how LLMs work by design, which means they’re probably spouting nonsense half the time. It’s all patterns and token sequences, not real statements by people who are insanely knowledgeable. An LLM is trained on content created by people who know what they’re talking about, but it regurgitates that content in a manner that differs from the source material.

Anyway, interrogating responses and trying to figure out why the LLM is recommending certain approaches is the only real way to get benefits from one. Treat it like a conversation with someone where you’re trying to understand why they like a certain technique. If you don’t understand why something is being suggested, or what it’s actually doing under the hood, then you have failed.

And make notes! Lots of them! I recently started playing around with Zig, and I constantly make notes on things that I’m learning about the language, especially since it’s my first time dealing with manual memory management. Notes can be a useful reference point when you’re stuck, or maybe even something to share with others!
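
To give a flavour of the kind of thing that ends up in my notes, here’s a minimal sketch (assuming a recent Zig release) reminding me that allocation and freeing are explicit, and that defer is the idiomatic way to make sure cleanup actually happens:

```zig
const std = @import("std");

pub fn main() !void {
    // A general-purpose allocator; deinit() reports whether anything leaked.
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Allocation is explicit, and so is freeing; defer ties the free to scope exit.
    const buf = try allocator.alloc(u8, 16);
    defer allocator.free(buf);

    @memset(buf, 0);
    std.debug.print("allocated {d} bytes\n", .{buf.len});
}
```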

I wrote this post on my morning commute, and my Tube has arrived at its destination, so I’ll leave it here.