This reminds me of the Microsoft articles back in the day about how Linux was so bad and slow for production servers and Windows outperformed it on every metric.
I would be interested in seeing the code churn levels of these PRs.
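
For anyone who wants to compute it: a minimal sketch with plain git, assuming you have a local clone and each PR's base/head SHAs (the function name pr_churn is mine, nothing here comes from the study):

    import subprocess

    def pr_churn(base_sha, head_sha):
        # "git diff --numstat" prints "added<TAB>deleted<TAB>path" per file
        out = subprocess.run(
            ["git", "diff", "--numstat", f"{base_sha}..{head_sha}"],
            capture_output=True, text=True, check=True,
        ).stdout
        added = deleted = 0
        for line in out.splitlines():
            a, d, _path = line.split("\t", 2)
            if a != "-":  # binary files show "-" for both counts
                added += int(a)
                deleted += int(d)
        return added, deleted  # churn is usually reported as added + deleted

git diff --numstat is cheap enough to run over every merge commit in a repo's history, which would tell you a lot more than raw PR counts.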
Measuring "number of pull requests" is about as insightful as measuring "number of lines of code". Would have hoped a company specializing in software developer tooling should know that ...
I wonder if the Cursor metrics are skewed: I believe everyone tends to accept the wrong code and fix it later, because if you have to keep fixing small chunks of code it becomes easier to just type it yourself.
thanks for the link. here are the core findings quoted from the paper, which to me read as "yes, it produces slop fast":
Finding 1: The DiD models suggest that the adoption of Cursor only leads to a significant and large velocity gain in the short term (i.e., first two months) in open-source projects.
Finding 2: The DiD models suggest that the adoption of Cursor leads to a sustained accumulation of static analysis warnings and a sustained increase in code complexity.
Finding 3: The dynamic panel GMM models suggest that: (1) the adoption of Cursor leads to an inherently more complex codebase; (2) the accumulation of static analysis warnings and code complexity decreases development velocity in the future.
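
For readers who haven't seen difference-in-differences before: it compares the before/after change in adopting projects against the same change in non-adopting projects over the same period. A toy two-way fixed-effects version in Python, with made-up column names and data, not the paper's actual specification:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy DiD: project and month fixed effects absorb level differences;
    # the treated:post interaction is the DiD estimate of the adoption effect.
    df = pd.DataFrame({
        "velocity": [10, 12, 15, 11, 9, 8, 14, 13],  # e.g. merged PRs/month
        "treated":  [1, 1, 1, 1, 0, 0, 0, 0],        # project adopted Cursor?
        "post":     [0, 0, 1, 1, 0, 0, 1, 1],        # month after adoption?
        "project":  ["a", "a", "a", "a", "b", "b", "b", "b"],
        "month":    [1, 2, 3, 4, 1, 2, 3, 4],
    })
    m = smf.ols("velocity ~ treated:post + C(project) + C(month)", data=df).fit()
    print(m.params["treated:post"])  # > 0: adopters sped up relative to controls

The dynamic panel GMM part is there for Finding 3's feedback loop (past complexity affecting future velocity), which plain OLS with a lagged outcome can't estimate cleanly.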
They tried, but in the long run reality wins.