There's a lot of noise about AI-assisted development — claims that it will make developers ten times more productive, concerns that it will replace them entirely, and a lot of content that clearly hasn't come from anyone who's actually used it in production. This is an account from someone who has.
Over the last eighteen months we built four production products using AI-assisted engineering as a central part of the workflow: a consultancy website, a French language learning app, a mobile app for splitting shared costs, and a barber shop booking system. Different technology stacks, different domains, different complexity levels. Here's what we learned.
Where AI genuinely accelerates development
Scaffolding and boilerplate. The first 20% of any project — project setup, configuration, boilerplate structure, build tooling — used to take days. With AI assistance it takes hours. This isn't just about speed; it removes the friction that makes starting a new project feel heavy.
Crossing knowledge gaps. Every project requires working in areas where you're not an expert. CSS animations you've never written before. A third-party API you've never integrated. A mobile navigation pattern you've implemented once, three years ago. AI assistance dramatically reduces the cost of these context switches. You can ask a specific question and get a specific answer in the context of your actual code, rather than spending an hour on documentation and Stack Overflow.
First-draft generation for repetitive structures. When you're building a data model with fifteen similar entities, or writing unit tests for a service with eight methods, or creating template variants with consistent structure, AI can produce the first draft in seconds. You still need to review, adapt, and often substantially rewrite it. But starting from something is faster than starting from nothing.
Where it wastes your time
Complex, stateful problems. AI assistance works well for problems that can be solved in a defined context. When the problem requires understanding of accumulated state across many files, architectural decisions made months ago, and the particular way this codebase has evolved — it struggles. The suggestions become plausible-sounding but wrong, and unpicking why they're wrong takes longer than solving the problem directly.
Security-sensitive code. AI-generated code typically passes syntax and logic checks but can introduce subtle security issues: inadequate input validation, insecure defaults, missing rate limiting, CSRF exposure. We caught several of these in code review. Treat AI-generated code in security-sensitive areas with the same scrutiny you'd apply to code from a junior developer you don't know well.
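To make "plausible but wrong" concrete, here is a hypothetical sketch of the kind of pattern that slips through: code that works for every happy-path test but interpolates user input directly into a query. The function names and schema are illustrative, not taken from our codebases.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Reads naturally and passes happy-path tests, but interpolating
    # user input into SQL allows injection (e.g. "x' OR '1'='1").
    cur = conn.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return [row[0] for row in cur]

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the value as data,
    # never as SQL, so the same payload matches nothing.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return [row[0] for row in cur]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # returns every user: [1, 2]
print(find_user_safe(conn, payload))    # returns []
```

Both functions return the right answer for ordinary usernames, which is exactly why the unsafe version survives a casual review; only a reviewer thinking about adversarial input catches it.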
When you're not sure what you're asking for. AI assistance is a force multiplier, not a substitute for clear thinking. If you don't know what you're trying to build, generating code faster doesn't help — it accelerates confusion. The clearer your specification, the more useful the output.
What it changed about how we work
The most significant change wasn't speed — it was the ability for a small team to cover more ground technically. A two-person team can now maintain a mobile app, a web application, a backend API, and the supporting infrastructure without each person being a specialist in every layer. The AI fills some of the knowledge gaps that would previously have required hiring specialists.
Code review became more important, not less. When code is generated quickly, review discipline is what prevents the technical debt that accumulates from accepting plausible-but-wrong suggestions. We tightened our review standards specifically because of AI assistance, not despite it.
The economics changed for certain types of project. A product that would previously have required three months and a team of five to validate as an idea can now be built to a testable state in three to four weeks by two people. That changes what's worth attempting.
The honest assessment
AI-assisted development is genuinely useful. It is not a replacement for engineering judgement, architectural thinking, or code review rigour. It raises the floor significantly — tasks that were previously blocked by knowledge gaps or tedious boilerplate become tractable. It does not raise the ceiling; the hardest engineering problems are still hard.
The engineers who benefit most from it are those who treat it as a tool that requires verification, not an oracle that produces correct answers. Used that way, it's one of the more significant productivity changes in the last decade of software development.