Few things give me more dread than reviewing the mediocre code written by an overconfident LLM, but arguing in a PR with an overconfident LLM that its review comments are wrong is up there.
Couldn’t agree more. I’m torn on LLM code reviews. On the one hand, code review is a place where they make a lot of sense: they can quickly catch silly human errors like misspelled variables and whatnot.
On the other hand, the amount of flip-flopping they go through is unreal. I’ve witnessed numerous instances where either Cursor’s Bugbot or Claude has found a bug and recommended a reasonable fix. The fix gets implemented, and then the LLM argues the case against it and requests that the code be reverted. Out of curiosity to see what happens, I’ve reverted the code, only to be given the exact same recommendation as in the first pass.
I can foresee this becoming a circus for less experienced devs, so I turned off the automatic code reviews and put them in request-only mode via a GH Action (roughly the setup sketched below). That way I retain some semblance of sanity, and the PR comment history doesn’t get cluttered with overly verbose comments from an agent.
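For anyone who wants to do the same, here is a minimal sketch of that kind of workflow, assuming a comment-triggered setup where a maintainer types "/review" on a PR. The last step is a placeholder for whichever review agent or CLI you actually use, not a real action:

    name: on-demand-llm-review
    on:
      issue_comment:
        types: [created]
    jobs:
      review:
        # Only run when someone comments "/review" on a pull request
        if: github.event.issue.pull_request && contains(github.event.comment.body, '/review')
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Placeholder: swap in your actual reviewer invocation here,
          # e.g. a script that runs the agent and posts one PR comment
          - run: ./scripts/run-llm-review.sh ${{ github.event.issue.number }}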
The purpose of these reviewers is to flag the bug to you. You still need to read the surrounding code and decide whether it’s valid, serious, and worth a fix. Why does it matter if it then says the opposite after the fix? Did that even happen often, or is this an anecdote of a one-time thing?
Do you have actual experience with Bugbot? It’s live in our org and is actually pretty good: almost none of its comments are frivolous or wrong, and it finds genuine bugs most reviewers miss. That’s unlike Graphite and Copilot, so no one’s glazing AI for AI’s sake.
Bugbot is now a valuable part of our software development process. If you have genuine examples showing that we are just being delusional or simply haven’t hit the roadblocks yet, I would love to see them.
I assume this is the same as when Cursor spontaneously decides to show code review comments in the IDE as part of some upsell? In that case, yes, I’m familiar with it, and they were all subtly wrong.
It absolutely isn’t, so I’ll ask you to tone down the blind AI hate.
Wait, so Cursor has multiple code review products? I dunno man, if they market the bad one at me and don’t tell me about the good one then I don’t think that’s my fault.
The battle I’m fighting at the moment is that our glorious engineering team, the lowest-bidding external outsourcer, makes the LLM spew look pretty good. The reality, of course, is that both are terrible, but no one wants to hear that, only that the LLM is better than the humans. And that’s only because it’s the narrative they need to maintain.
Relative quality is better but the absolute quality is not. I only care about absolute quality.
The biggest problem with LLM reviews for me is not false positives but authority. Younger devs are used to accepting bot comments as the ultimate truth, even when they are clearly questionable.
Yes, I’ve found some really interesting bugs using LLM feedback, but it’s only about a 40% accuracy rate, mostly because it highlights noncritical things (for example, we don’t need to worry about portability in a single-architecture app that runs on a specific OS).