AI coding tools may not speed up every developer, study shows
Software engineer workflows have been transformed in recent years by an influx of AI coding tools like Cursor and GitHub Copilot, which promise to boost productivity by automatically writing lines of code, fixing bugs, and testing changes. The tools are powered by AI models from OpenAI, Google DeepMind, Anthropic, and xAI that have rapidly improved their performance on a range of software engineering benchmarks in recent years.
However, a new study published Thursday by the non-profit AI research group METR calls into question the extent to which today's AI coding tools improve productivity for experienced developers.
METR conducted a randomized controlled trial for this study by recruiting 16 experienced open source developers and having them complete 246 real tasks on large code repositories they regularly contribute to. The researchers randomly assigned roughly half of those tasks as "AI-allowed," giving developers permission to use state-of-the-art AI coding tools such as Cursor Pro, while the other half forbade the use of AI tools.
Before completing their assigned tasks, the developers forecasted that using AI coding tools would reduce their completion time by 24%. That wasn't the case.
"Surprisingly, we find that allowing AI actually increases completion time by 19%," the researchers said. "Developers are slower when using AI tooling."
Notably, only 56% of the developers in the study had experience using Cursor, the main AI tool provided in the study. While nearly all of the developers (94%) had experience using web-based LLMs in their coding workflows, this study was the first time some of them used Cursor specifically. The researchers note that developers were trained on using Cursor in preparation for the study.
Still, METR's findings raise questions about the supposed universal productivity gains promised by AI coding tools in 2025. Based on the study, developers shouldn't assume that AI coding tools, particularly what have come to be known as "vibe coders," will immediately speed up their workflows.
METR researchers point to a few potential reasons why AI slowed developers down rather than speeding them up: developers spend far more time prompting AI and waiting for it to respond when using vibe coders than actually coding. AI also tends to struggle in large, complex code bases, like the ones this test used.
The study's authors are careful not to draw any strong conclusions from these findings, explicitly noting that they don't believe AI systems currently fail to speed up many or most software developers. Other large-scale studies have shown that AI coding tools do speed up software engineer workflows.
The authors also note that AI progress has been substantial in recent years and that they wouldn't expect the same results even three months from now. METR has also found that AI coding tools have significantly improved their ability to complete complex, long-horizon tasks in recent years.
Nevertheless, the research offers yet another reason to be skeptical of the promised gains of AI coding tools. Other studies have shown that today's AI coding tools can introduce errors and, in some cases, security vulnerabilities.