
AI-powered coding tools are increasing the volume of software development, but new data from analytics firms shows that much of this output requires significant revision, raising questions about how productivity should be measured in AI-assisted engineering workflows.
Token Usage Metrics Clash With Output-Based Productivity
Developers and managers have traditionally relied on metrics such as lines of code to measure productivity. With the rise of AI tools, some organizations have shifted toward tracking token usage, which reflects the amount of AI processing consumed.
Token budgets, however, measure input rather than output. Higher token usage can signal increased AI adoption, but it says nothing directly about the quality or durability of the generated code.
Analytics Firms Report High Initial Acceptance But Low Retention
Companies in the developer analytics space report that AI-generated code is often accepted initially but later revised. Alex Circei of Waydev said teams are seeing acceptance rates between 80% and 90% for AI-generated code.
Over time, however, revisions reduce the effective acceptance rate to between 10% and 30%, as developers revisit and modify earlier outputs.
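The gap between initial and effective acceptance can be expressed as simple arithmetic: the effective rate is the share of code that is both accepted at review time and left unmodified afterward. The sketch below is illustrative only; the function name and the specific figures are hypothetical, chosen to land inside the 10% to 30% range cited.

```python
def effective_acceptance(initial_acceptance: float, survival_rate: float) -> float:
    """Fraction of AI-generated code that is both accepted and retained.

    initial_acceptance: share of AI suggestions accepted at review time.
    survival_rate: share of accepted code still unmodified later.
    """
    return initial_acceptance * survival_rate

# Hypothetical example: 85% initial acceptance, but only ~24% of the
# accepted code survives later revision.
rate = effective_acceptance(0.85, 0.24)
print(round(rate, 2))  # -> 0.2
```

Framed this way, a headline 80% to 90% acceptance rate is compatible with a much lower effective rate once revisions are counted.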
Waydev, which works with around 50 customers employing more than 10,000 engineers, has updated its platform to track metadata from AI coding tools and provide insights into code quality and cost.
Industry Data Points To Increased Code Churn
Multiple firms report similar trends. GitClear found that regular AI users experienced 9.4 times higher code churn than non-AI users, a rate that outpaced the productivity gains observed.
Faros AI reported that code churn increased by 861% under high AI adoption, based on two years of customer data.
Jellyfish analyzed 7,548 engineers in the first quarter of 2026 and found that those with the largest token budgets produced the most pull requests. However, throughput did not scale proportionally: roughly double the output came at ten times the token cost.
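The Jellyfish finding implies that the token cost per pull request rises sharply at the high end: ten times the tokens for twice the output means each pull request consumes about five times as many tokens. A minimal illustration, using hypothetical round numbers rather than Jellyfish's actual data:

```python
def tokens_per_pr(tokens: float, prs: float) -> float:
    """Average token spend per pull request."""
    return tokens / prs

# Hypothetical cohorts: the heavy-usage cohort consumes 10x the tokens
# but produces only 2x the pull requests.
baseline = tokens_per_pr(1_000_000, 10)
heavy = tokens_per_pr(10_000_000, 20)

print(heavy / baseline)  # -> 5.0
```

The diminishing-returns pattern holds regardless of the absolute numbers chosen, since it depends only on the 10x-cost, 2x-output ratio.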
AI Tools Drive Volume While Efficiency Gains Remain Uneven
The data indicates that AI tools are increasing development activity, including code generation and pull requests, but not all of this output contributes to long-term progress.
Developers report increased workloads related to code review and technical debt, as more generated code requires validation and refinement.
Differences also appear across experience levels. Junior engineers tend to accept more AI-generated code, which can lead to higher levels of subsequent revision compared to senior developers.
Companies Continue Adoption Despite Ongoing Adjustments
Organizations are continuing to adopt AI coding tools while adjusting workflows to manage their impact. Atlassian acquired DX for $1 billion to help customers evaluate the return on investment of these tools.
Circei stated that companies view AI-assisted development as a long-term shift rather than a temporary trend, even as they refine how productivity and efficiency are measured.
