Why I don't use AI coding tools (and why science agrees)

Posted on May 16, 2026

I would like to present an opposing view to the idea that “Traditional programming is dead”, which software development influencers keep proclaiming. Or maybe it’s just social media bait? Either way, there’s a lot of push to use AI tools to write code nowadays. Even many experienced developers say they’re not writing code by hand anymore.

I’ve been programming for something like 27 years. I don’t want to use AI coding tools, and I don’t think you should use them either. I’ll also show you some research papers to support my claims here, so it’s not just an old man shouting at a cloud.

Before I get into the details, I need to quickly address a couple of points:

  • When I say “I don’t use”, I mean I avoid using GenAI tools. I occasionally experiment with them, but generally I’m not happy with the results, or the task was a trivial “chore” kind of task. It more or less amounts to not using them.
  • If you prefer using LLMs for coding, great. Not everyone has to be a programmer, and I think there is value in lowering the barrier to entry for building software.
  • I’m not going to go into the moral, copyright or societal issues. They are factors, but I’m more interested in a discussion of the technology and its impacts on the individual programmer.
A drawing of the author shaking his fist at a cloud containing AWS, OpenAI and Claude
My most flattering self-portrait.

GenAI tools get in the way of doing my job

I’ll start with what’s probably most relevant to you as well: the impacts of AI in the context of working in software development. My day job requires a deep understanding of how our product works and is built. It’s a complicated piece of software, with multiple complex moving parts, built over more than a decade at this point.

A large part of my job involves discussing how features could be implemented and on what kind of schedule, architectural decisions, and other higher-level things requiring technical expertise. If I were generating code on a larger scale, there’s no way I could do these things.

When I generate code using AI, I lack all the design insight: why this change needs to be done, why it needs to be done in this particular way, what other approaches there could be and why I didn’t choose them, and so on. Why not just ask the AI to tell me these things? Because it’ll just hallucinate some gobbledygook. It’ll happily justify and argue absolutely any point whatsoever, and if I question its choices, it’ll immediately flip its position and tell me something different.

As a result, it would be very difficult, for example, to evaluate the feasibility of some change to the product if I weren’t reasonably familiar with the code. As a slight digression, you could say using AI-generated code makes me lose the “theory of the program”, which is something I wrote about last week.

This relates to the concerns of “Cognitive Debt” raised by GenAI research. For example, MIT’s paper Your Brain on ChatGPT found that LLM users were worst at quoting text they had created, and that the activation in their brains resembled moving text around rather than learning something. This means reading GenAI code doesn’t help you understand the code the way writing it yourself does. The conclusion is that you would need to spend a significant amount of time to understand and process the AI-generated code… at which point you might as well write it yourself.

A study released by Anthropic, How AI impacts skill formation, supports these claims. Its findings suggest that using AI to generate code results in a worse overall understanding, especially in debugging tasks. Again, this holds unless you use the AI in a way that bolsters your own understanding… at which point you might as well write the code yourself.

The negative impacts of AI on skills are found in other places too. Yet another study showed that endoscopists lost skills after working with AI assistance. This goes to show that the problem applies across specialties. How am I supposed to do my job if I lose the skills I need to do it?

Secondly, AIs just aren’t good enough. Attempting to slot GenAI tools into my work is a waste of time. I’ve tried; I’ll give you an example:

There is a component in our product which uses math formulas. I was never a “math guy”, so it quickly became frustrating when I had to implement matrix maths involving translation, rotation and scaling. I figured I’d just ask Claude and see. After much trial and error, and bouncing between two nonfunctional solutions, it finally output bug-free code. I mostly understood the logic - I know enough matrix maths to be dangerous thanks to my gamedev hobby - but the order of operations was weird. It wasn’t a problem… until a tiny change to the requirements. It was back to trial and error, bouncing between nonfunctional solutions, and at that point I gave up and rewrote it myself like I should have in the first place (and yes, the order of operations was wrong; the generated code worked purely by coincidence).
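To make the order-of-operations problem concrete, here’s a small sketch (a made-up example using NumPy, not the actual component): matrix multiplication doesn’t commute, so composing translation, rotation and scaling in the wrong order gives a genuinely different transform.

```python
import numpy as np

def translation(tx, ty):
    # 2D homogeneous translation matrix
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def rotation(theta):
    # 2D homogeneous rotation matrix, angle in radians
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def scaling(sx, sy):
    # 2D homogeneous scaling matrix
    return np.array([[sx, 0.0, 0.0],
                     [0.0, sy, 0.0],
                     [0.0, 0.0, 1.0]])

p = np.array([1.0, 0.0, 1.0])  # the point (1, 0) in homogeneous coordinates

# Scale first, then rotate, then translate (the usual convention)...
trs = translation(2, 0) @ rotation(np.pi / 2) @ scaling(2, 2)
# ...versus translate first, then rotate, then scale.
srt = scaling(2, 2) @ rotation(np.pi / 2) @ translation(2, 0)

print(trs @ p)  # approximately [2, 2, 1]
print(srt @ p)  # approximately [0, 6, 1] - same matrices, different order
```

In special cases the two orders happen to agree (for example, with no translation and a uniform scale), which is exactly how wrongly ordered generated code can pass its first tests purely by coincidence.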

And that was just the latest attempt of many.

GenAI tools impede improving the code

Working on code gives me better insights into it. Recently, I was fixing a bug, and I found the problem was partly caused by the code not taking certain domain model concerns into account. To fully address this, some new types or concepts might need to be introduced into the codebase. The importance of this kind of insight cannot be overstated: it can help entirely prevent certain classes of bugs in the future.

If I had fixed this using GenAI, I would never have had this insight, even if I had gone over the code it generated. I would see that it changed some parameters in a function call; sure, that makes sense, the parameter was wrong. But because I’m merely looking at the output, I can’t see the reasoning behind the change, so I won’t learn that the domain model’s representation may be flawed. The AI certainly will not figure this out either. I wouldn’t even know to ask it “is the domain model wrong?”, and even if I did, how would it know? Only a human programmer can feel the clunkiness of an API when they get frustrated by it.
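As a hypothetical sketch of the difference (not our actual codebase - the names here are made up for illustration): the surface-level fix corrects one wrong argument, while the insight-driven fix introduces a domain type that makes the whole class of mistakes impossible.

```python
from dataclasses import dataclass

# Surface-level fix: correct the one wrong argument at the call site.
# Nothing stops the same seconds-vs-milliseconds mix-up from happening
# again elsewhere.
def schedule_retry(delay: float) -> None:
    ...  # interprets delay as seconds, but callers keep passing milliseconds

# Insight-driven fix: make the unit part of the domain model, so the
# confusion can't silently recur.
@dataclass(frozen=True)
class Duration:
    seconds: float

    @classmethod
    def from_millis(cls, ms: float) -> "Duration":
        return cls(seconds=ms / 1000.0)

def schedule_retry_typed(delay: Duration) -> None:
    ...

schedule_retry_typed(Duration.from_millis(1500))  # the unit is explicit at the call site
```

A type checker like mypy will then flag any call that still passes a bare number, turning a recurring runtime bug into a static error.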

These kinds of insights directly contribute to how quickly you can fix bugs, how quickly you can add new features, and ultimately how quickly you can ship improvements to your users.

A Microsoft study on the impacts of GenAI tools on critical thinking showed that with AI tool usage, the focus shifts from problem solving, analysis and evaluation to verifying that the AI’s response is correct and integrating it into the work. If you’re not problem solving, analyzing or evaluating, how are you going to develop insights into potential issues in the codebase?

Critical thinking is defined as the ability to reason about information and make informed decisions. These seem like exactly the kinds of things you need to improve software in the ways I suggested above. Yet another study on AI’s effects on critical thinking shows concerns over the loss of critical thinking skills and of the ability to perform tasks independently without AI assistance. Let’s be generous and say that AI doesn’t affect your actual critical thinking skills, but only impacts them in the context of the work you delegate to the AI - this still seems to directly indicate that it would be more difficult to make informed decisions about your codebase.

There is a lot of further anecdotal evidence about this as well. For example, Simon Willison, the co-creator of the Django framework, writes on the topic of cognitive debt:

I no longer have a firm mental model of what they can do and how they work, which means each additional feature becomes harder to reason about, eventually leading me to lose the ability to make confident decisions about where to go next.

GenAI tools impede learning

This one is maybe a bit more personal. If I use GenAI tools, I’m not going to learn and improve, whether for the things I do at work or the things I do for fun. It’s probably obvious that you won’t learn if you just let AI do it all, but let’s look into it a bit more.

I like programming, and I like thinking of how to best solve some problem via code, how to best architect some software, and various other things relating to programming. If I delegate this to AI, I’m not going to get any better at these things, or understand them any better.

In addition to the Anthropic study on skill formation I linked earlier, there is a growing body of research indicating that GenAI tools have a negative impact on learning. That study showed AI use had a statistically significant negative impact on the participants’ learning, and it was specifically in the context of programming: the participants were told to learn a Python library, and their success at this was then measured.

Anthropic published another study, on how AI is affecting work at Anthropic, in which multiple developers report concerns over their skills atrophying, or over never even learning certain things in the first place as a result of delegating the work to AI. The Your Brain on ChatGPT study I mentioned earlier also shows that AI users had the weakest connectivity and semantic activity in their brains, a clear indication that they’re not processing information very deeply.

The Anthropic study above highlights another issue: the “paradox of supervision”, as they call it. As you use GenAI tools, you need to supervise their work and verify that the output is good. But if you don’t learn, or your skills atrophy as a result of using the AI tools, you won’t be able to accurately supervise their work either. This seems to lead to a compounding negative cycle of increasing reliance on AI, without the ability to verify or correct the work.

Here’s another anecdotal example I recently saw on Hacker News, where James Pain writes that AI is making him dumb:

With coding, I’ve been using AI entirely for a year or two. I’ve been entirely prompting and I haven’t written a single line of code. I have mostly forgotten how to code, which I find very sad and depressing because coding used to be my life.

It just isn’t fun

I like programming, and I like thinking of how to best solve some problem via code. Wait, I think I said that already. But yeah, using AI tools just isn’t fun. They are about as fun to use as running a bash script.

A chart showing running a bash-script is a lot of fun

Surprisingly, science agrees with me even on this point. A study on relying on AI at work showed that AI tools made users feel less skilled, and made their work feel less impactful, less purposeful and less meaningful. These are all factors which affect how much you enjoy doing something, and even a higher amount of pay can’t fully replace them. This definitely sounds like “less fun” to me.

There’s plenty of anecdotal evidence about this from other programmers on Reddit as well.

You could say my perspective on all of this is very programming/development-focused. There is also the “results” perspective: is it more important to write code, or to deliver a particular result? I’m not convinced that AI tools can keep delivering results long-term, so the results perspective seems flawed. But alas, as I don’t have a crystal ball, I’ll have to leave divining whether this is really the case to someone else.

Comments or questions?

If you have any comments or questions about this post, feel free to email me at jani@codeutopia.net, or use any of the other methods on the contact page.