Every bullet hole in that plane is one of the 1k PRs contributed by Copilot. The missing dots, and the whole missing planes, are unaccounted for; i.e., the "AI ruined my morning" cases.
It's not survivorship bias. Survivorship bias would be drawing conclusions from the 1,000 merged PRs (e.g., "90% of all merged PRs did not get reverted"). Simply stating the number of PRs is not that.
I'd love to know where you think the starting position of the goal posts was.
Everyone who has used AI coding tools, interactively or as agents, knows they're unpredictably hit or miss. The old, non-agent Copilot has a dashboard that shows org-wide rejection rates for paying customers. I'm curious what the equivalent rejection rate for the agent is, according to the people who make the thing.
I think the implied promise of the technology, that it is capable of fundamentally changing organizations' relationships with code and software engineering, deserves deep understanding. Companies will be making multi-million-dollar decisions based on their belief in its efficacy.
When someone says that the number given is not high enough, I wouldn't consider trying to get an understanding of the PR acceptance rate before and after Copilot to be moving the goal posts. Using raw numbers instead of percentages is often done to emphasize a narrative rather than simply inform (e.g., "Dow plummets X points" rather than "Dow lost 1.5%").
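To illustrate the framing difference, here's a minimal sketch with hypothetical counts (not GitHub's actual figures): a raw merged-PR total sounds impressive on its own, while the acceptance rate is the number you'd actually compare before and after Copilot.

```python
# Hypothetical counts for illustration only; GitHub has not published
# how many Copilot PRs were opened vs. merged.
opened = 1500   # assumed: total agent PRs opened
merged = 1000   # assumed: the "1k merged PRs" headline figure

rate = merged / opened
print(f"{merged} PRs merged")           # raw count: emphasizes the narrative
print(f"acceptance rate: {rate:.0%}")   # percentage: the comparable metric
```

The same data supports either framing; only the percentage lets you compare against a pre-Copilot baseline.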
Sometimes there are paradigm shifts in a dependency that get past the tests you currently have. So it's always good to read the changelog and plan the update accordingly.
I'm curious how many Copilot PRs were not merged and/or required human takeovers.