In my previous post, I talked about the need to recognise when an “AI” coding assistant is circling the event horizon of a “doom loop” and take the wheel. Taking the wheel, of course, requires that you can still drive and you know where the car’s supposed to be going. In the next post I’ll … Continue reading "The AI-Ready Software Developer #10 – Comprehension Debt" | Codemanship's Blog
A very common experience for LLM users is what I call the “doom loop”. You ask the model to do something, and it gets it wrong. You say “That’s wrong”, and it apologises: “You’re absolutely right. Louis Armstrong was not the first person to set foot on the Moon. Let me try that again.” Then … Continue reading "The AI-Ready Software Developer #9 – Well-Trodden Paths"
This new “age of AI” has produced a paradox. While individual developers report “huge” productivity gains, bashing out code faster than ever, these gains mysteriously evaporate when we observe what actually makes it into the hands of end users. Actually, there’s no mystery. We’ve understood for many decades why individual productivity doesn’t translate into team … Continue reading "The AI-Ready Software Developer #8 – Continuous Integration"
Imagine you’re walking a tightrope tied to the peaks of two mountains. When you reach the middle, it’s a long way to safety – forwards or backwards – and a long way down if you fall. Changing code’s a bit like walking a tightrope. Every step we take risks a fall, and the more changes … Continue reading "The AI-Ready Software Developer #7 – Commit On Green, Revert On Red"
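The tightrope discipline in that excerpt can be sketched as a tiny script: after every small change, run the tests; a green run gets committed, a red run gets thrown away. A minimal sketch, assuming a git working copy and a `pytest` suite – both illustrative, not what the post prescribes:

```python
# Sketch of "commit on green, revert on red": run the tests after each
# small step; commit if they pass, discard the step if they don't.
import subprocess

def next_action(test_exit_code: int) -> str:
    """Green (exit code 0) means commit the step; red means revert it."""
    return "commit" if test_exit_code == 0 else "revert"

def step(message: str) -> str:
    """Take one tightrope step: run the (assumed) pytest suite, then act."""
    result = subprocess.run(["pytest", "-q"])
    action = next_action(result.returncode)
    if action == "commit":
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", message], check=True)
    else:
        # Red: a small failing step is cheap to throw away and retry.
        subprocess.run(["git", "checkout", "--", "."], check=True)
    return action
```

The point of keeping each step small is that reverting costs almost nothing – you're never far from safety in either direction.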
Finally, we get to the “R” word. Our software works. We know, because we’ve been testing it continuously. And we’ve reviewed the code at every step, looking for areas that might need clarifying, looking for duplication that might need consolidating and abstracting, looking for modules that do or know too much, and/or are tightly coupled … Continue reading "The AI-Ready Software Developer #6 – Continuous Refactoring"
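The “consolidating and abstracting” the excerpt mentions can be illustrated with a hypothetical before-and-after (the VAT example below is mine, not the post's):

```python
# Before: two functions duplicate the same pricing knowledge.
def net_price_report(price: float) -> str:
    return f"GBP {price * 1.2:.2f}"     # 20% VAT, duplicated below

def invoice_line(price: float) -> str:
    return f"GBP {price * 1.2:.2f}"     # same knowledge, second home

# After: the duplicated knowledge is consolidated behind one name,
# so a VAT change now happens in exactly one place.
VAT_RATE = 0.2

def gross(price: float) -> float:
    return price * (1 + VAT_RATE)

def format_gbp(amount: float) -> str:
    return f"GBP {amount:.2f}"
```

The refactoring changes no behaviour – both versions produce the same output – which is exactly why continuous testing has to come first.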
So, we’re working in small steps, solving one problem at a time. We’re clarifying with examples to reduce the risk of models grabbing the wrong end of the prompt stick. We’re cleanly separating concerns to localise the “blast radius” of LLM-generated changes. And we’re continuously testing to get immediate feedback when the model breaks stuff. … Continue reading "The AI-Ready Software Developer #5 – Continuous Inspection"
Now, where were we? Ah, yes. So, we’re working in small steps with our LLM, solving one problem at a time, which makes it easier for the model to pay attention to important details (just like in real life). We’re keeping our contexts small, and making them more specific by clarifying with examples to reduce … Continue reading "The AI-Ready Software Developer #4 – Continuous Testing"
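Continuous testing in this spirit can be sketched as a simple watch loop: poll the source tree and re-run the suite the moment anything changes. The `pytest` command and `src/` layout below are illustrative assumptions:

```python
# A minimal "continuous testing" watcher: poll modification times on
# the source tree and re-run the test suite whenever a file changes.
import pathlib
import subprocess
import time

def snapshot(root: str) -> dict[str, float]:
    """Map each Python file under `root` to its last-modified time."""
    return {str(p): p.stat().st_mtime for p in pathlib.Path(root).rglob("*.py")}

def watch(root: str = "src", poll_seconds: float = 1.0) -> None:
    """Re-run the tests (assumed: pytest) on every detected change."""
    seen = snapshot(root)
    while True:
        time.sleep(poll_seconds)
        now = snapshot(root)
        if now != seen:  # something changed: get feedback immediately
            seen = now
            subprocess.run(["pytest", "-q"])
```

Dedicated watch tools exist, of course; the sketch just makes the feedback loop concrete – the shorter the gap between a change and its test run, the sooner you know the model broke something.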
Calling back to my analogy of LLM context being like “cognitive load”, it’s now well understood why more context isn’t necessarily a good thing. Additional context that clarifies…
In communication studies at school, we were taught a simple way to gauge how well we’d understood what someone had told us: reflect it back with an example. “So what you’re saying is that if, for example, I had a pension pot of £250,000, I could take £62,500 tax-free and invest the rest in an … Continue reading "The AI-Ready Software Developer #2 – Clarifying With Examples"
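The reflected-back example is just arithmetic, which is what makes it so easy to check. A minimal sketch – the 25% tax-free fraction is an assumption inferred from the excerpt's figures, not tax advice:

```python
# The excerpt's pension example: 25% of a 250,000 pot gives the
# 62,500 tax-free lump sum; the remainder is what's left to invest.
TAX_FREE_FRACTION = 0.25  # assumed from the figures in the excerpt

def tax_free_lump_sum(pot: float) -> float:
    return pot * TAX_FREE_FRACTION

def remainder_to_invest(pot: float) -> float:
    return pot - tax_free_lump_sum(pot)
```

A concrete example like this gives the other party (human or LLM) something falsifiable to agree or disagree with, which is the whole point of reflecting back.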
Can we talk about separation of concerns and cognitive load? One thing about LLM coding assistants that’s very interesting is how they tend to crap out on code that has poor separation of concerns. Despite some pretty darn big advertised maximum context sizes (e.g., GPT-5 has 400K tokens), the effective maximum context size – beyond … Continue reading "The AI-Ready Software Developer #1 – Separation of Concerns"
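As a hypothetical illustration of the principle that excerpt discusses: when parsing, calculation and presentation each live in their own function, a change to any one of them drags far less of the codebase into context – for a human reader or an LLM:

```python
# Separation of concerns in miniature: parsing, calculation and
# presentation are independent, so each can change in isolation.
def parse_amounts(csv_line: str) -> list[float]:
    """Parsing concern: turn raw input into numbers."""
    return [float(field) for field in csv_line.split(",")]

def total(amounts: list[float]) -> float:
    """Calculation concern: the business rule, nothing else."""
    return sum(amounts)

def render(amount: float) -> str:
    """Presentation concern: how the answer is shown."""
    return f"Total: {amount:.2f}"

def report(csv_line: str) -> str:
    """Thin composition of the three concerns."""
    return render(total(parse_amounts(csv_line)))
```

Asking a model to change the output format, say, only requires `render` in context – not the parsing or the arithmetic – which is exactly the “blast radius” argument.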
Humans innovate, LLMs imitate. It’s really important to remember this if you’re building a business with or around the technology. Large Language Models perform most reliably (least unreliably…
A view I share with a small but growing number of people is the idea that software releases are experiments. An experiment needs a hypothesis, and that hypothesis needs to be falsifiable – otherwise…
An interesting piece of research was published recently that found that the effective maximum context size of Large Language Models is orders of magnitude smaller than the advertised maximum context…
In my early 20s, I went to Inverness with a group of friends for a long weekend. On the looong journey up, we became obsessed with finding the Loch Ness monster, and ultimately had no fun when we w…
Founder of Codemanship Ltd and code craft coach and trainer
An effect that’s being more and more widely reported is the increase in time it’s taking developers to modify or fix code that was generated by Large Language Models. If you’ve wo…