The AI-Ready Software Developer #5 – Continuous Inspection
So, we’re working in small steps, solving one problem at a time. We’re clarifying with examples to reduce the risk of models grabbing the wrong end of the prompt stick. We’re cleanly separating concerns to localise the “blast radius” of LLM-generated changes. And we’re continuously testing to get immediate feedback when the model breaks stuff.
The AI-Ready Software Developer #1 – Separation of Concerns
Can we talk about separation of concerns and cognitive load? One thing about LLM coding assistants that’s very interesting is how they tend to crap out on code that has poor separation of concerns. Despite some pretty darn big advertised maximum context sizes (e.g., GPT-5 has 400K tokens), the effective maximum context size – beyond …
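To make the idea concrete, here is a minimal sketch (hypothetical code, not taken from the post) of the same small reporting feature written twice: first with business rules, calculation, and presentation tangled into one function, then with each concern pulled out on its own, so that a change by a human or an LLM to one piece can’t quietly break the others, and each piece fits in a much smaller context on its own.

```python
# Hypothetical illustration: one function mixing three concerns.
def monthly_report(orders):
    total = 0
    for order in orders:
        if order["status"] == "paid":          # business rule
            total += order["amount"]           # calculation
    return f"Total paid this month: {total:.2f}"  # presentation


# The same behaviour with the concerns separated.
def paid_orders(orders):
    """Business rule: only paid orders count."""
    return [o for o in orders if o["status"] == "paid"]


def total_amount(orders):
    """Calculation: sum the order amounts."""
    return sum(o["amount"] for o in orders)


def format_report(total):
    """Presentation: render the total for display."""
    return f"Total paid this month: {total:.2f}"


if __name__ == "__main__":
    orders = [
        {"status": "paid", "amount": 120.0},
        {"status": "pending", "amount": 75.0},
        {"status": "paid", "amount": 30.5},
    ]
    print(monthly_report(orders))
    print(format_report(total_amount(paid_orders(orders))))
```

Both versions print the same result; the difference is that in the second, each function can be read, tested, and handed to a model in isolation.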