All these tools seem to be GitHub-centric. Any tips for teams using GitLab to store their repositories?
lukehoban 8 hours ago [-]
(Disclaimer: I work on coding agents at GitHub)
This data is great, and it is exciting to see the rapid growth of autonomous coding agents across GitHub.
One thing to keep in mind regarding merge rates is that each of these products creates the PR at a different phase of the work. So just tracking PR create to PR merge tells a different story for each product.
In some cases, the work to iterate on the AI generated code (and potentially abandon it if not sufficiently good) is done in private, and only pushed to a GitHub PR once the user decides they are ready to share/merge. This is the case for Codex for example. The merge rates for product experiences like this will look good in the stats presented here, even if many AI generated code changes are being abandoned privately.
For other product experiences, the Draft PR is generated immediately when a task is assigned, and users can iterate on this “in the open” with the coding agent. This creates more transparency into both the success and failure cases (including logs of the agent sessions for both). This is the case for GitHub Copilot coding agent for example. We believe this “learning in the open” is valuable for individuals, teams, and the industry. But it does lead to the merge rates reported here appearing worse - even if logically they are the same as “task assignment to merged PR” success rates for other tools.
We’re looking forward to continuing to evolve the notion of Draft PR to be even more natural for these use cases. And to enabling all of these coding agents to benefit from open collaboration on GitHub.
polskibus 4 hours ago [-]
What is your team’s take on the copyright for commits generated by an AI agent? Would copyright protect them?
Current US stance seems to be:
https://www.copyright.gov/newsnet/2025/1060.html
“It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements”.
If an entire commit is generated by AI, then it is obvious what created it - it’s AI. Such a commit might not be covered by the law. Is this something your team has already analysed?
rustc 36 minutes ago [-]
> “It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements”
How would that work if it's a patch to a project with a copyleft license like the GPL, which requires all derivative work to be licensed the same?
olq_plo 6 minutes ago [-]
IANAL, but it means the commit itself is public domain. When integrated into a code base with a more restrictive license, you can still use that isolated snippet in whatever way you want.
A more interesting question is whether one could remove the GPL restrictions on public code by telling an AI to rewrite the code from scratch, providing only the behavior of the code.
This could be accomplished by making the AI generate a comprehensive test suite first, and then letting it write the code of the app while seeing only the test suite.
anticensor 33 minutes ago [-]
GPL is a copyright licence, not a ToS.
rustc 26 minutes ago [-]
> GPL is a copyright licence, not a ToS.
How is ToS relevant to this thread?
IanCal 1 hour ago [-]
> If an entire commit is generated by AI then it is obvious what created it - it’s AI.
Whether it's committed or not is irrelevant to the conclusion there; the question is what the input was.
soamv 8 hours ago [-]
This is a great point! But there's an important tradeoff here about human engineering time versus the "learning in the open" benefits; a PR discarded privately consumes no human engineering time, a fact that the humans involved might appreciate. How do you balance that tradeoff? Is there such a thing as a diff that's "too bad" to iterate on with a human?
lukehoban 6 hours ago [-]
I do agree there is a balance here, and that the ideal point in the spectrum is likely in between the two product experiences that are currently being offered here. There are a lot of benefits to using PRs for the review and iteration - familiar diff UX, great comment/review feedback mechanisms, ability to run CI, visibility and auth tracked natively within GitHub, etc. But Draft PRs are also a little too visible by default in GitHub today, and there are times when you want a shareable PR link that isn't showing up by default on the Pull Requests list in GitHub for your repo. (I frankly want this even for human-authored Draft PRs, but it's even more compelling for agent-authored PRs.)
We are looking into paths where we can support this more personal/private kind of PR, which would provide the foundation within GitHub to support the best of both worlds here.
ambicapter 8 hours ago [-]
Do people where you work spend time reviewing draft PRs? I wouldn’t do that unless asked to by the author.
drawnwren 7 hours ago [-]
It’s hard enough for me to get time to review actual PRs, who are these engineers trawling through the drafts?
osigurdson 8 hours ago [-]
I've been underwhelmed with dedicated tools like Windsurf and Cursor in the sense that they are usually more annoying than just using ChatGPT. They have their niche, but they are just so incredibly flow-destroying that it is hard to use them for long periods of time.
I just started using Codex casually a few days ago though and already have 3 PRs. While different tools for different purposes make sense, Codex's fully async nature is so much nicer. It does simple things like improve consistency and make small improvements quite well, which is really nice. Finally we have something that operates more like an appliance for certain classes of problems. Previously it felt more like a teenager with a learner's license.
elliotec 8 hours ago [-]
Have you tried Claude Code? I'm surprised it's not in this analysis, but in my personal experience, the competition doesn't even touch it. I've tried them all in earnest. My toolkit has been (neo)vim and tmux for at least a decade now, so I understand the apprehension from less terminal-inclined folks who prefer other stuff, but it's my jam and it just crushes it.
cap11235 7 hours ago [-]
Right, after the Sonnet 4 release it was the first time I could tell an agent something and just let it run comfortably. As for the tool itself, I think a large part of its ability comes from how it writes recursive todo-lists for itself, which are shown to the user, so you can intervene early on the occasions it goes full Monkey's Paw.
tough 51 minutes ago [-]
Yeah, I've been manually writing a TASKS.md first so I can modify it while the agent starts working on it.
wahnfrieden 2 hours ago [-]
On Mac, I don’t like how ChatGPT makes it difficult to have a few queries generating in parallel for my Xcode work.
deadbabe 8 hours ago [-]
You can just use Cursor as a chat assistant if you want.
threeseed 7 hours ago [-]
But then you're paying far more than just using Claude web which can be used for tasks other than coding.
deadbabe 7 hours ago [-]
Your company can be paying for it
koakuma-chan 6 hours ago [-]
How do I convince my company to pay for it?
roygbiv2 4 hours ago [-]
Tell them how much money they will save.
n4r9 1 hour ago [-]
... how much money will they save?
tough 50 minutes ago [-]
depends on how many employees they can fire thanks to the productivity gains!
ubj 5 hours ago [-]
Where is Claude Code? Surprised to see it completely left out of this analysis.
ukblewis 4 hours ago [-]
Claude Code isn’t a complete agent - it cannot open PRs autonomously AFAIK
mmaunder 3 hours ago [-]
Yeah it can, either using MCP or git via bash. It's a glaring omission and calls the data into question. How is attribution done? If it's via the agent taking credit in commit messages, that's a problem because Claude Code, for example, has a config parameter that lets you tell it to not credit itself. With Claude Code completely missing, I'd say this is wildly inaccurate.
wordofx 3 hours ago [-]
The problem with Claude code is it doesn’t let you walk away. You have to press yes yes yes yes yes yes yes 500 times.
Glad it’s missing until they fix this.
HenriNext 2 hours ago [-]
It has a fine-grained permissions configuration file. And every permission question has three answer options: "yes", "yes and don't ask again", and "no". And it has the option '--dangerously-skip-permissions'. Out of all 20+ AI code tools I've tried/used, Claude Code has the best permission options.
mmaunder 2 hours ago [-]
Actually you can enable everything, or have fine-grained control over specifics. Or you just manually approve and ask not to be prompted again. Sounds like you're more of a dabbler.
cap11235 3 hours ago [-]
Or you stick it in docker. Or actually configure your permissions. RTFM
mmaunder 2 hours ago [-]
Yeah, sorry you got downvoted, but that's pretty much my inclination too: to say RTFM. Honestly though, I'm very excited by how few developers are using the most powerful tools available. Huge opportunity for the rest of us willing to adapt to having our cheese moved, and willing to put in the work.
cap11235 19 minutes ago [-]
I do love the reaction to "here's a tool that can do everything when asked correctly" (i.e., a compiler for arbitrary human artifacts), followed by not reading the manual. I remember a dude on this site complaining that 4o-mini only had superficial opinions when analyzing a particular poem; then it turns out the fellow didn't even supply the LLM with the text of the poem. Then the person's argument was that it is like criticizing someone for their hammer being 2.7mm off center. Utterly ridiculous; LLMs are not psychic, they just have approximate knowledge of many things. People seem to love setting them up to fail. My favorite "demonstration" is showing LLMs messing up multiplication of large numbers. If only the LLMs had access to some sort of machine that could do multiplication well...
paradite 3 hours ago [-]
It can open PRs via the GitHub Actions integration. I just did:
https://x.com/paradite_/status/1931644656762429503
Docs: https://docs.anthropic.com/en/docs/claude-code/github-action...
If there is a gh CLI tool installed, it will do it without any special prompts/requests/instructions, no problem.
ainiriand 3 hours ago [-]
It is not an 'agent' in the sense that it is not really autonomous afaik.
HenriNext 1 hour ago [-]
Claude Code was not designed from the ground up to be only an autonomous agent, but it can certainly act as one.
- It has non-interactive CLI functionality (with the -p "prompt" option) in addition to the default interactive TUI, making it easy to integrate into workflows.
- It has turn-key GitHub integration (https://github.com/anthropics/claude-code-action).
- It has an internal task-tracking system that uses ReadTodo/WriteTodo tools to write JSON task lists to `$HOME/.claude/tasks/`, enabling it to stay on track better than most other tools.
- It has excellent and customisable context compaction.
- And it has a flexible permission system that can be used to turn all permission questions to auto-accept when running in sandboxed environments.
Together those features enable it to be just as autonomous as any GitHub AI bot action hype thing (even though that might not have been its original or primary use).
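A rough sketch of that headless flow, assuming the -p and --dangerously-skip-permissions flags mentioned above are the only Claude-specific pieces; the branch name, prompt, and commit message below are made up:
```
#!/usr/bin/env bash
# Sketch of a non-interactive run: drive Claude Code with -p inside a
# sandbox, then push the result and open a PR with the gh CLI.
set -euo pipefail

git checkout -b agent/deflake-integration-test

# Only skip permission prompts inside a throwaway container or VM.
claude -p "Find and fix the flaky integration test under tests/, then run the suite" \
  --dangerously-skip-permissions

git add -A
git commit -m "fix: deflake integration test (agent-assisted)"
git push -u origin agent/deflake-integration-test

gh pr create --title "Deflake integration test" \
  --body "Change produced by a headless Claude Code run; please review."
```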
cap11235 4 minutes ago [-]
Yeah, my primary usage pattern for it is purely autonomous for new feature development. I have Claude iterate on a prompt for itself a lot, asking me questions as it goes, then after, I can just say generic things like "Do the thing", "Continue", "Check the repo" and it does the thing, based on R/W Todo and my larger scale todo list for implementation. Also, Claude does have a github action (not that I've tried it though).
NiekvdMaas 5 hours ago [-]
Same for Google Jules
SilverSlash 2 hours ago [-]
Wasn't Codex only released recently? Why is it present an order of magnitude more often than the others?
behnamoh 9 hours ago [-]
How about Google Jules?
also, of course OpenAI Codex would perform well because the tool is heavily tailored to this type of task, whereas Cursor is a more general-purpose (in the programming domain) tool/app.
tmvnty 9 hours ago [-]
Merge rate is definitely a useful signal, but there are certainly other factors we need to consider (small vs. big PR edits, refactors vs. dependency upgrades, direct merges, follow-up PRs correcting merged mistakes, how easy it is to set up these AI agents, marketing, usage fees, etc.). Similar to how NPM downloads alone don’t necessarily reflect a package’s true success or quality.
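Some of those factors are easy to pull per PR; a quick, hedged sketch with the gh CLI (the PR number is illustrative):
```
# Size and shape of a single PR: lines added/removed and files touched.
gh pr view 1234 --json additions,deletions,changedFiles,title

# Rough size bucketing across recently merged PRs in the current repo.
gh pr list --state merged --limit 50 --json number,additions,deletions \
  --jq '.[] | "\(.number)\t+\(.additions)\t-\(.deletions)"'
```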
osigurdson 8 hours ago [-]
I suspect most are pretty small. But hey, that is fine as long as they are making code bases a bit better.
nojs 1 hour ago [-]
For people using these, is there an advantage to having the agent create PRs and reviewing these versus just iterating with Cursor/Claude Code locally before committing? It seems like additional bureaucracy and process when you could fix the errors sooner and closer to the source.
cap11235 1 hour ago [-]
Ignoring the issue of non-LLM team members, PRs are helpful if you are using GH issues as a memory mechanism, supposedly. That said, I don't bother if I don't have to. I have Claude commit automatically when it feels it made a change, then I curate things before I push (usually just squash).
dimitri-vs 9 hours ago [-]
This might be an obvious question, but why is Claude Code not included?
a_bonobo 8 hours ago [-]
I think the OP's page works because these coding agents identify themselves as the PR author, so the creator can just search GitHub's issue tracker for things like is:pr+head:copilot or is:pr+head:codex
It seems like Claude Code doesn't do that? Some preliminary searching reveals that PRs generated by people using Claude Code use their own user account but may sign that they used Claude; example: https://github.com/anthropics/claude-code/pull/1732
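For reference, a hedged sketch of that branch-prefix search using the gh CLI; the qualifiers follow GitHub's issue/PR search syntax, and the same queries work in the web search box:
```
# Count PRs whose head branch carries an agent-specific prefix.
gh api -X GET search/issues -f q='is:pr head:codex' --jq .total_count
gh api -X GET search/issues -f q='is:pr head:copilot' --jq .total_count
gh api -X GET search/issues -f q='is:pr is:merged head:codex' --jq .total_count
```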
cap11235 8 hours ago [-]
Claude does credit itself in the commit messages, e.g.:
feat: add progress bar for token probability calculation
- Add optional progress_cb parameter to get_token_probs function
- Integrate `rich` progress bar in CLI showing real-time token processing progress
- Add comprehensive tests for progress callback functionality
- Maintain backward compatibility with optional parameter
Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
https://github.com/search?q=is:pr+is:merged+Co-Authored-By:+...
Instead of looking at the author of the PR, look for that 'Co-Authored-By: Claude' text bit.
That way I get 753 closed PRs and '1k' PRs in total; that's a pretty good acceptance rate.
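A minimal sketch of that search expressed as a merge-rate count, assuming GitHub's search matches the trailer text literally:
```
# Count PRs that carry the Claude trailer, merged vs. total.
total=$(gh api -X GET search/issues -f q='is:pr "Co-Authored-By: Claude"' --jq .total_count)
merged=$(gh api -X GET search/issues -f q='is:pr is:merged "Co-Authored-By: Claude"' --jq .total_count)
echo "merged $merged of $total Claude-trailer PRs"
```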
ofou 7 hours ago [-]
I'd submit a PR with this idea to improve coverage of agents
csallen 9 hours ago [-]
I believe these are all "background" agents that, by default, are meant to write code and issue pull requests without you watching/babysitting/guiding the process. I haven't used Claude Code in a while, but from what I recall, it's not that.
cap11235 9 hours ago [-]
If you enable it in permissions, Claude is very happy to do so. For personal fun/experimental projects (usually I give it arXiv papers to implement), I generally have a couple Claude instances (on different projects) just chugging along all day. I have them write really detailed plans at the start (50-100 steps in the implementation plan, plus actual specifications for project structure, dev practices, and what the actual goals are). I iterate on these plan documents by having Claude write QUESTIONS.md which has dev questions for me to clarify, which I fill out with answers, and then instruct Claude to update the plan docs with my answers. Then most of my interaction throughout the day is just saying something like "P18" to implement implementation plan step #18. I instruct it in CLAUDE.md to stop after each step, output what automated tests have been written for P18's features, and I require that the LLM write a demo script that I can run that shows the features, using real APIs. I'm having a great time with it.
ilteris 9 hours ago [-]
How much do you pay monthly? What kind of service do you use? Thanks.
cap11235 9 hours ago [-]
I'm on the $100 Max plan. The default config uses Opus up until some percent of capacity, then uses Sonnet after, which resulted in me having to wait 30 minutes to an hour for usage to reset after running them for 8-10 hours. I've since switched to configuring it to only use Sonnet, and then for what I know are "big" questions, I'll run Opus for just that. Since then, I have yet to hit limits, so I don't feel the need for the $200 one.
unshavedyak 9 hours ago [-]
I really need to try giving it $100 a month. Really not sure it's worth it, but if I'm less concerned about throttling or cost it might be more fun, interesting, etc.
cap11235 8 hours ago [-]
It makes a psychological difference, yeah. I'm happy now just throwing any whim at it. For instance, I've been meaning for years to fix my disks, since every new computer just has me put the old drives into it, plus the new ones. Prior to the consolidation I had Claude do, the oldest was from 2007 (good job, Western Digital 20 years ago). I had Claude write a plan on how to move files to my most recent spinning disks, and also redo my mounts (for organization and improving my mount flags). I had it write the plan, I went "yeah", had it write a new fstab and a script to perform the moves in the project folder, and had it "ultrathink" and web search a couple times to iterate on those for improvements it could suggest. Then I reviewed them, and had it apply the changes in a Claude instance with no automatic permissions beyond reading files in the project directory, so I manually approved each system modification.
furyofantares 8 hours ago [-]
There's also a soft cap of 50 sessions per month, right?
cap11235 8 hours ago [-]
Looks that way, but Anthropic docs vaguely say it is vague. I know I haven't hit any hard caps since only using opus manually, but I wouldn't know if I'm being throttled otherwise, or at least it isn't severe enough that I notice given they just churn in the background.
koakuma-chan 9 hours ago [-]
Claude Code can run in background and I don't see why it wouldn't be able to create pull requests if you gave it such a tool.
cap11235 9 hours ago [-]
The prompts in Claude Code have specific instructions on doing pull requests.
```
grep 'gh pr ' ~/.claude/local/node_modules/@anthropic-ai/claude-code/cli.js
- Create PR using gh pr create with the format below. Use a HEREDOC to pass the body to ensure correct formatting.
gh pr create --title "the pr title" --body "$(cat <<'EOF'
1. Use \`gh pr view --json number,headRepository\` to get the PR number and repository info
1. If no PR number is provided in the args, use ${O4.name}("gh pr list") to show open PRs
2. If a PR number is provided, use ${O4.name}("gh pr view <number>") to get PR details
3. Use ${O4.name}("gh pr diff <number>") to get the diff
```
wordofx 3 hours ago [-]
How do you tell CC to accept everything? It's frustrating to need to press yes yes yes yes.
It is also worth looking at the number of unique repositories for each agent, or the number of unique large repositories (e.g., by a threshold on the number of stars). Here is the report we can check:
https://play.clickhouse.com/play?user=play#V0lUSCByZXBvX3N0Y...
I've also added some less popular agents like jetbrains-junie, and added a link to a random pull request for each agent, so we can look at example PRs.
pkongz 3 hours ago [-]
How does this analysis handle potential false positives? For instance, if a user coincidentally names their branch `codex/my-branch`, would it be incorrectly included in the "Codex" statistics?
ehsanu1 5 hours ago [-]
It's hard to attribute a higher PR merge rate to higher tool quality here. Another likely reason is the level of complexity of the task. Just looking at the first PR I saw from the GitHub search for Codex PRs, it was this one-line change that any tool, even years ago, could have easily accomplished: https://github.com/maruyamamasaya/yasukaribike/pull/20/files
m4r1k 1 hour ago [-]
Just curious, why is there no reference to Google?
knes 3 hours ago [-]
This is great work. Would love to see Augmentcode.com's remote agent. If you are down, OP, msg me and I'll give you a free subscription to add to the test.
myhandleisbest 3 hours ago [-]
Can I get a clarification on the data here - are these PRs reviewed by the tools or fully authored by them?
Also, filter conditions that would be interesting: size of PR, language, files affected, distinct organizations, etc. Lmk if these get added, please!
nikolayasdf123 5 hours ago [-]
Yeah, GitHub Copilot PRs are unusable, from personal experience.
selvan 7 hours ago [-]
Total PRs for Codex vs. Cursor are 208K vs. 705, which is an enormous difference in absolute PRs. Since Cursor is very popular, how are its PRs not even 1% of Codex's?
ezyang 6 hours ago [-]
The happy path way of getting code out of Codex is a PR. This is emphatically not true for Cursor.
cap11235 6 hours ago [-]
Feels like a sort of pollution.
falcor84 3 minutes ago [-]
Why? That is its intent - unlike an IDE, it is intended to work autonomously and only get back to you after it has prepared the full changeset, which you'd then review via a PR. Where's the pollution in that?
rahimnathwani 6 hours ago [-]
I didn't even realize Cursor could make PRs. I thought most people would create PRs themselves once they were happy with a series of commits.
cap11235 2 hours ago [-]
At least you know that Codex knows how to advertise itself, if nothing else.
throwaway314155 8 hours ago [-]
Is this data not somewhat tainted by the fact that there's really zero way to identify how much a human was or wasn't "in the loop" before the PR was created?
thorum 7 hours ago [-]
With Jules, I almost always end up making significant changes before approving the PR. So “successful merge” is not a great indicator of how well the model did in my case. I’ve merged PRs that were initially terrible after going in and fixing all the mistakes.
tptacek 8 hours ago [-]
I kind of wondered about that re: Devin vs. Cursor, because the people I know that happen to use Devin are also very hands-on with the code they end up merging.
But you could probably filter this a bit by looking at PR commit counts?
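A quick sketch of that filter with the gh CLI; the PR number is illustrative, and the loop just prints commit counts for recently merged PRs:
```
# Commits on a single PR: more than one often means a human iterated on it.
gh pr view 1234 --json commits --jq '.commits | length'

# Commit counts across recently merged PRs in the current repo.
for n in $(gh pr list --state merged --limit 50 --json number --jq '.[].number'); do
  echo "$n $(gh pr view "$n" --json commits --jq '.commits | length')"
done
```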
frognumber 10 hours ago [-]
Missing data: I don't make a codex PR if it's nonsense.
Poor data: If I make one, it's because I want to either:
a) Merge it (success)
b) Modify it (sometimes success, sometimes not). In one case, Codex made the wrong changes in all the right places, but it was still easier to work from that by hand.
c) Pick ideas from it (partial success)
So simple merge rates don't say much.
osigurdson 8 hours ago [-]
It isn't so much "poor" data as it is a fairly high bar for value generation. If it gets merged it is a fairly clear indicator that some value is created. If it doesn't get merged then it may be adding some value or it may not.
pryelluw 7 hours ago [-]
Is it me or are there a lot of documentation-related PRs? Not a majority, but enough to mask the impact of agent code.
cjbarber 8 hours ago [-]
Seems like the high order bit impacting results here might be how difficult the PR is?
myhandleisbest 3 hours ago [-]
Stats? What about the vibes leaderboard?
falcor84 1 minute ago [-]
Which one?
m3kw9 5 hours ago [-]
Agents should also sign the PR with secret keys so people can’t just fake the commit message.
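One hedged sketch of what that could look like using git's built-in SSH commit signing; the key paths and agent identity are illustrative:
```
# Agent side: sign commits with a dedicated SSH key.
git config gpg.format ssh
git config user.signingkey /opt/agent/keys/agent_signing_key.pub
git commit -S -m "feat: agent-authored change"

# Reviewer side: point git at a list of trusted keys, then verify the
# signature before trusting the attribution.
git config gpg.ssh.allowedSignersFile ~/.config/git/allowed_signers
git log --show-signature -1
```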
zekone 7 hours ago [-]
thanks for posting my project bradda
TZubiri 8 hours ago [-]
Why are there 170k PRs for a product released last month, but 700 for a product that has been around for like 6 months and was so popular it got acquired for $3B?
SatvikBeri 4 hours ago [-]
I've used Cursor for months and didn't even realize you could make PRs from it. It's not really part of the default workflow.
simoncion 7 hours ago [-]
It might be the case that "number of PRs" is roughly as good a metric as "number of lines of code produced".