AI Coding Tools Fail to Deliver on Efficiency and Code Quality

A recent study by Uplevel found that GitHub Copilot, one of the most popular AI coding assistants, produced no measurable improvement in developer efficiency or code quality.

The study analyzed the performance of 800 developers using GitHub Copilot, comparing them to a control group without access to the tool. The results showed no significant change in efficiency metrics such as cycle time, pull request throughput, bug rate, or extended working hours.

In fact, Uplevel found that Copilot-assisted code had a significantly higher bug rate, contradicting GitHub's claims that AI coding tools improve code quality. The study also found that Copilot did not mitigate developer burnout: those using the tool showed no significant decrease in "always on" time.

The findings challenge common perceptions of the benefits of generative AI coding assistants and raise questions about their true impact on software development. While the appetite for AI coding tools continues to grow, it remains to be seen whether they deliver tangible value in practice.

Source: https://www.itpro.com/software/development/github-says-copilot-improves-code-quality-but-are-ai-coding-tools-actually-producing-results-for-developers