Study Reveals Surprising Truth About AI-Assisted Software Development

A recent study challenges the common belief that AI tools accelerate software development. Researchers had experienced open-source developers work with AI coding tools, including Cursor Pro backed by Claude 3.5 Sonnet, and, contrary to expectations, found a 19% increase in task completion time, even as the developers believed they were working faster.

The randomized controlled trial, conducted by METR, used production-grade environments and real tasks from mature open-source repositories. Sixteen professional developers, with an average of five years of experience, worked on realistic issues from their own codebases. The study collected both objective and subjective metrics, including task duration, code quality, and developer perception.

The results showed that AI-assisted developers took 19% longer to complete tasks than those working without AI, contradicting a predicted speedup of roughly 40%. The authors identified five key contributors to the slowdown, including time spent writing prompts, reviewing AI-generated suggestions, and integrating AI output into large, complex codebases, and they highlighted a "perception gap" between users' expectations and their actual productivity.

The study emphasizes the importance of grounding AI tool evaluation in rigorous measurement rather than relying solely on user sentiment. While future systems may overcome the challenges observed here, the authors frame their findings as a snapshot of a fast-evolving landscape that still requires real-world evaluation.

Source: https://www.infoq.com/news/2025/07/ai-productivity