Ask HN: What is a common PR review time at your company?
I find myself frequently frustrated by waiting for PR reviews. There is tons of writing on the internet about short review times being better for productivity, and lots of tips-n-tricks for how to reduce PR review time.
I've started to consider that my expectations are just wrong. I'm curious what other experiences are like. What is an average duration range for your PRs to get approved or changes requested? I'm totally fine with anecdotal responses.
As a reviewer... it really, really depends on how trivial or gnarly the PR is.
If it's a simple change that is obviously correct, I'll try to unblock the author ASAP - often within minutes, even if it interrupts my flow.
If it's a giant PR with lots of risky changes in vital code, an awkward-to-unreadable diff, and/or maddeningly questionable design/coding decisions that require me to think a lot and compose some nuanced, reasoned comments to steer you in a better direction, then, well, you might find yourself nagging me for a review 2 days later. (And I'll probably ask you to split up PRs like this in future, e.g. to separate major refactoring from any logic changes.)
This x1000. It's not about the org, really. It's about what's being changed.
Yes, a lot of large codebases have things that look wrong. They're not always wrong. Trying to clean up what look like cobwebs at the core of a large codebase often means removing things that are there for a reason. The reason is usually counterintuitive and could be documented better. But often only a few people can really evaluate the change.
A small PR is likely to be accepted quickly, and a large PR is likely to take a while. No one gets upset by that, though.
The flip side is that a small PR to a critical part of the codebase is also likely to take a while and be treated as "default to no". It's often hard to see that from the "outside", though.
With that said, trying to go the extra mile and make things clear, concise, and better documented than before goes a long way in getting an MR reviewed quickly.
Here's what no one will tell you about PR reviews. Highly productive engineers often learn to work around them. Once you've been around for a while, you gain trust. This usually results in a group of people who will rubber stamp anything you send them.
I've been doing this for 20 years and I've never seen an organization where this didn't exist to a significant degree.
I review my PRs myself line by line. And because of this I've literally never had a PR reviewer catch a major bug (a bug that would have required declaring an incident and paging an on-call engineer had it made it through to prod). Not a single time.
So the vast majority of the time, I don't care that someone is going to rubber-stamp my PR. Occasionally, I'll ask someone who knows more than me to give it a thorough going over just in case.
I also very rarely have someone suggest a change in a PR that was worth bringing up. But most of the time if I'm doing anything non-trivial, I've already talked it over with someone else well before PR time. I think that is the real time when collaboration should be happening, not at the end of the process.
Essentially, the PR process at most tech companies is nothing more than a gatekeeping/compliance/CYA mechanism, and realizing this can definitely help your career.
If you are never getting good suggestions on your PRs that's a bad sign. Any team of more than 2 people should have some ideas sometimes for each other. Either this means everybody's too checked out to put in effort on PRs or they think it'll fall on deaf ears.
I've been a software engineer for decades and even so, teammates will have good ideas sometimes. Nobody can think of every good idea every time.
I said very rarely, not never. I classify suggested changes into 2 flavors. The first is minor changes where someone suggests "hey this would be easier to read if you used syntax A vs syntax B here".
I get those frequently, and they are usually reasonable suggestions, and I usually graciously accept them. But I say it's not worth bringing up in a PR because 99% of the time it doesn't actually matter in the long run. Both forms will have been used in the code base previously, and which one you think is better really comes down to which one you use more. It's the kind of thing you could change with a sufficiently opinionated linter. And the kind of thing that isn't worth the cost of mandatory PR reviews.
The 2nd is where someone suggests changes to overall architecture or changes to the way the feature or code works large scale.
These are much rarer, for several reasons:
1. PR reviews are almost always surface level, and so are more likely to catch category 1 than category 2. The incentives at nearly every tech company push for this.
2. Very frequently one person isn't available to review all of the PRs that go into a feature, so the reviewers lack context.
3. Even if someone wants to dig deeper, it's very unlikely they have the free time to spend even 1% of the time the author spent writing the PR.
But the biggest reason I personally don't get many of the 2nd category is because I talk through non-trivial problems with other experienced engineers before I get to the PR stage.
You're missing a third category which is "here's another way you could do this..." which isn't just about legibility but more tangible tradeoffs.
Certainly I agree architectural decisions shouldn't be made in every PR and ideally should be discussed before that point.
I'd consider that to fall under my category 2
"changes to overall architecture or changes to the way the feature or code works large scale"
I'm not saying there's never a reason to go back and redesign something after a PR review, but in my mind getting a design to that point and then actually needing to change it is a huge failure.
Far more common is the case where someone wants a big design change with no tangible benefits, just different tradeoffs.
I just don't think the occasional benefit is worth the cost of the process.
You're lumping big changes with small changes. If you really won't go back and change one function or one class because someone shared a better idea at the PR stage that's unhealthy and you will improve by letting go of that.
I never said I won't go back and change a single function. And I never said I wouldn't change things larger than that if asked. I would lump small requests like that in with category 1 (of which a syntax change wasn't meant to be an exhaustive example).
What I said was those kinds of requests usually aren't meaningful or impactful long term. They very, very rarely make or save anyone a single dollar, let alone enough money to justify the time spent on the review process.
If you've already spent the time to suggest the change, if it's slightly better, sure I'll make it. Even if it's not any better, as long as it's not worse, if you feel strongly about it, there's a good chance I'll go along. I just don't think the process has a positive ROI, and I've yet to see any data to convince me otherwise.
The problem is the good ideas should come before or while you start coding. By the time the PR is done isn't the right time to rewrite it.
What? This is absolutely incorrect. There'd be no point in PRs if it's too late to change anything. Part of the point is to reduce the number of synchronous discussions your team has to have about code before writing it. PRs let you iterate on actual code instead of endlessly discussing hypothetical implementations.
This is a huge pet peeve for me.
You'd kill your team's velocity if you did this for every PR.
I worked on a team once with… Perfectionist Petra. Petra would jump on a 1,200-line refactor that was blocking the team and demand changes that would require it to be rewritten. Not changes that would save the code base from grievous error: Petra didn't like the pattern used or didn't approve of the design.
Sufficient tests? Check. Linted and formatted? Check. Does Petra approve? Big variable. I often wanted to ask Petra to just write the code out in a ticket so I could copy it for them. Instead I had to try to read Petra's mind or hope they wouldn't jump on my PR.
Sometimes you have to trust your teammate and not let the perfect plan interfere with a good enough one.
I'm not advocating for constant nitpicking or demanding perfection. But somewhere between that and "PRs are too late for changes" is where most good teams operate.
I realize my comment was emotionally charged.
The point was that I'm a planner. I tend to discuss designs long before I begin writing code. By the time I get to a PR I don't expect to get feedback about the design. I'm looking for someone to check that the code is acceptable to merge. If you have problems with the design, it is too late!
However I work with folks who are not planners. They will write a PR as a proposal. Their expectation is that they may need to iterate on the code a bit based on that feedback; even the design and approach itself is up for review. If you have problems with the design, it's fine!
What has worked well for me in those situations in the past is to be up-front about what kind of feedback you're seeking. Mention in the PR description that you're looking for design crit, code review, etc. Whatever lingo works for your team.
Update: Realizing at the last minute that the design itself is wrong, and saying so, is a good thing -- and I do appreciate that. However it should be rare. My pet peeve is more with the folks who nitpick... whose design contributions are superficial and concerned with style more than substance.
I don't disagree some people have bad, overly nitpicky PR habits and they should be trained out of that.
That said, sitting down beforehand and planning is useful for the big picture, but one of the big benefits of PRs, as I explained earlier, is to eliminate lots of wasted time going back and forth in meetings about how to implement something, and instead to empower engineers to go write code and iterate on actual implementations rather than endlessly discussing ahead of time. So there is a balance: make sure everyone is aligned on what you're building, but also be open to suggestions on your PRs. Sometimes that may mean rewriting large chunks in exchange for drastically reducing pre-coding discussions.
I agree that balance is necessary and nitpickers should be steered away from that behaviour.
I think I'm more concerned with nitpickers, who show up in different flavours.
I understand a functional requirement to rewrite a piece of code in a PR. Perhaps you spent the last 8 hours coding up a solution that passes all of the tests but if you use that data structure at scale it wouldn't perform acceptably. That's a rewrite and that sucks but it will be for the better.
OP mentioned that this is happening, but in discussion prior to the PR. Makes sense to me.
I addressed that in the comment you are responding to...
>Essentially, the PR process at most tech companies is nothing more than a gatekeeping/compliance/CYA mechanism, and realizing this can definitely help your career.
It has value particularly when the team has newcomers or less-experienced developers. Writing it off as 'CYA' discards an opportunity for async feedback to be given on work.
Sure that’s fine when you’re the one doing the review for a newcomer or less experienced dev.
However, I think that PR reviews after the code has been written are the absolute worst time to do this.
I completely disagree. By writing the code, the author gains understanding of the hidden details of the problem. The reviewer gets to explore a full solution that accounts for those details, and provides tests. With the context still fresh, it's a great time for a new pair of eyes to explore, ask questions (which should lead to clarifications in the code), and think of ways to simplify and untangle the solution further while keeping the tests passing. In fact, it can take more time to keep assuming and discussing what the solution might be than to write the code and iterate on it. In an asynchronous/remote work environment, iterating on PRs is not only okay, but probably the most effective way to collaborate.
I'm not saying I never fire up a draft PR so that I can nail some things down and get comments on it.
But at the same time not everything needs to be asynchronous, and if you try to make everything asynchronous, such that you never have ad hoc meetings or synchronous conversations, you're going to move much slower than is necessary. I've been exclusively remote for 10 years now, so I love doing things async when I can, but I recognize the limits.
By the time my PR is ready to merge the assumption should be that it's ready to merge, unless you see something actually wrong. Not just hey this could be a little easier to reason about if you completely redesigned your solution.
9/10 when people do make those kinds of suggestions it's not objectively better, and 9/10 changing it won't move the needle with respect to maintainability or meaningful performance.
So given that we have a funnel of reviewers where most people are only going to do surface-level reviews, of those that do deep dives most aren't going to find any problems, and of those that do, most of the solutions aren't going to make any kind of significant impact. Is it worth going through this whole process? My conclusion is that no, it is not.
Comparing the software we produced before the whole PR review for every merge request thing became common, and the software we produce now, I don't think software quality has improved at all. So for me to change my mind, I'm going to need to see hard data that says the opposite.
> 9/10 when people do make those kinds of suggestions it's not objectively better, and 9/10 changing it won't move the needle with respect to maintainability or meaningful performance.
The 9/10 number seems incongruent with my experience, but that could be because I often help set the PR review culture, and encourage people to identify the questions: is it "how" (…is it working), "what" (…is it doing), or "why" (…is it done this way). If you recognize those, it makes it way easier for the author to address the specifics without a total redesign. (How = hard to follow the logic, what = better names needed, why = comments needed).
But sometimes a redesign can be super compelling to everyone, and the question is: do we want to be stuck with the subpar version? Because really, the closer you are to remembering how it all works, the easier it is to rewrite it. Not later, when it has faded from memory and become overgrown with dependent code.
> Comparing the software we produced before the whole PR review for every merge request thing became common, and the software we produce now, I don't think software quality has improved at all. So for me to change my mind, I'm going to need to see hard data that says the opposite.
The kind of subjective reviews we're discussing (not security, performance, consistency, edge case issues, or major simplifications) are not really about quality, they're about your specific team understanding your specific code. Help your colleagues (just the small circle of maintainers) have more confidence in coming in later to change this code. If you're working alone, this isn't as big an issue.
To collect data on this, you would need to witness what happens when everyone architects their own way, and then you have to find the person who wrote that part to decipher it, and compare the amount of slowdown that can cause. Nearly impossible to measure in practice, I think.
> not everything needs to be asynchronous
I completely agree. PRs provide great tools for discussing code asynchronously so it's good to lean on them sometimes, and discuss the work on a call other times. This is however a big departure from your original statement that PRs are the absolute worst time to review the code.
> but that could be because I often help set the PR review culture
I highly doubt you'd be able to do this outside of a small company, or a small self-contained team.
I've seen those exact questions in PR templates before, but they almost always end up ignored or answered in the most obvious surface level manner. I see no reason you'd be able to reliably get reviewers to do any better. Really understanding a PR takes time, and the time to do that is one of the first things that gets jettisoned when anyone is under any kind of pressure to deliver.
>But sometimes a redesign can be super compelling to everyone, and the question is: do we want to be stuck with the subpar version?
If it's really going to make a difference, sure. The point is: is this the case frequently enough for the process to have a positive ROI? I don't think it is.
>are not really about quality
If they aren't improving quality, then they better be improving velocity. I highly doubt this is the case. I've never personally seen it. Given the enormous cost, I want hard evidence to justify that cost.
>when everyone architects their own way,
That's not the alternative. Before mandatory PR reviews, we still had design reviews, whiteboard sessions, code reviews, pairing sessions, planning meetings, etc...
>This is however a big departure from your original statement that PRs are the absolute worst time to review the code.
Catching a problem just before merge after someone has written everything they plan to merge is self-evidently the worst time to catch a problem (next to after it has been merged of course). Now you can argue that it may be the most likely time to catch a problem, but there's no arguing that it wouldn't have been better if the problem had been caught earlier.
My argument is that in addition to being the worst time to catch the problem, it's not actually catching real problems enough to be useful.
> I've seen those exact questions in PR templates […] understanding a PR takes time […] when anyone is under any kind of pressure to deliver. […] is this the case frequently enough for the process to have a positive ROI.
Not templates. The culture HAS TO be for everyone to calm down and treat PR reviews as first-class citizens, not unwanted distractions. If y'all stop stressing each other out that reviews are bs, I'm sure they'll stop being that.
> […] we still had, design reviews, white board sessions, code reviews, pairing sessions, planning meetings etc...
You know that feeling when you get a design idea, you start implementing, and a single plot twist makes you realize it doesn't fit? What do you think happens when a white board session bumps against reality? That's likely a session down the drain.
What if we replace it with the following process:
1) Discover all the plot twists up front by writing out the solution. Publish a PR.
2) When a reviewer comes in, give them time to grok it, welcome redesign ideas.
3) Easily assess and validate their ideas using the stupid amounts of expertise you temporarily gained by having written a working solution.
4) Discuss or proceed with fixups.
How can it get more concrete and grounded in reality than this? One of you knows every tree, the other one can still see the forest. I'd argue, you rarely need to talk before this point. This is the best point.
(Obviously you shouldn't just suffer in silence if you're stuck or your diff is getting out of control. Run it by somebody sooner than later.)
> I highly doubt you'd be able to do this outside of a small company, or a small self-contained team.
Thankfully most companies or teams are indeed small.
This isn't a bad reply but it's fairly off topic in my opinion. I think the majority of reviews on my PR are effectively rubber stamps, but sometimes it takes many business days to get that rubber stamp unless I bug people.
My assumption was that you didn’t have a group of people who would rubber stamp your PR based on the frustration with the wait time.
If you do have people who trust you, a slack DM when the PR goes up isn’t bugging them. Maybe they just don’t pay much attention to GH notifications.
It must be nice working on a codebase sufficiently well-organised that an innocuous change doesn't explode many distant files.
I didn't say I've never pushed a major bug to production, I've just never had one stopped by a PR review.
At the last place I worked it was common to wait one to two weeks for a PR review. I was the only developer who would do a review within the same day that someone requested it; as a result everyone came to me when they needed a review to actually get done.
When performance reviews came around I was criticized for not getting enough story points merged (because I couldn't get them reviewed), and the fact that I was single-handedly responsible for reviewing almost all of the PRs was not deemed to be a valuable contribution to the team.
That is a solid red flag for any workplace.
This is what happens when an organization has no idea what matters or how to measure productivity. You did well to leave.
> I was criticized for not getting enough story points merged
And I bet you learned to stop doing reviews so quickly.
Your coworkers may have already figured out where the disincentives were.
Jesus. Sorry to hear it dude, FWIW I’d love to have someone like that on my team. Hope your current place is better!
Code review should really only exist to guide contributors to the owner of the code they are changing. A new contributor shouldn't need to know the team structure and history of the organization to get changes in. That should be solved by the tooling. Do whatever you want, and the tools will force you to bump into the right people before you can merge.
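GitHub's CODEOWNERS mechanism is one existing way to get that routing; a minimal sketch, with made-up paths and team names:

    # .github/CODEOWNERS -- owners matching a changed path are
    # automatically requested for review on the PR
    *            @acme/platform-core
    /billing/    @acme/billing-team
    *.tf         @acme/infra-team

Pair it with a branch protection rule requiring code owner approval and the routing is enforced rather than just suggested.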
It's very strange that most companies (maybe GitHub is to blame for this) have something like a free-for-all where anyone can approve anything, as long as they themselves aren't the author. As other comments have mentioned, this reliably creates reviewer cliques as engineers seek out the path of least resistance to get changes in. Those who haven't figured out the game often lose hours of their time to nitpickers and pedants without anything better to do.
This also results in a slow decay since most of the code ends up being written by people who are good at getting code merged, and not people who are good at building the software.
Has heavily depended on employee incentives.
At some employers, particularly the Scrum point counting ones, PRs were unrecognized work, so everyone avoided them unless hounded and then people traded PR approvals at the end to score more points as “done.”
At employers that didn’t care about points complete per sprint, it depended on the overall importance of the work as the team leads jumped in to have them done. But even then, as it wasn’t recognized as work in any way, during crunch it got abandoned.
I think the "not recognized as work" part feels similar to my experience but I don't really understand the psychology of it. Work is not done until it is reviewed and merged, so the review part is necessarily a part of the work cycle. I don't get the sense that you're advocating for the "not recognized as work" perspective, just responding to that viewpoint you shared.
Nobody has ever praised me or (as far as I know) a colleague for reviewing work. Certainly not a manager.
My reviewing doesn't show up in Jira under the amount of work I completed.
No performance review of mine has ever mentioned reviewing code.
In summary, there is minimal credit to be had from doing the work and even when there is credit, nobody lets you exchange that credit for much.
Yes, by rule reviewing is considered work, but it is not work anyone gives you much credit for doing. For an individual whose incentives mostly aren't aligned with the company's, that makes it not worth prioritizing.
So I suppose it is recognized as work, but it is the least rewarded of the work you could be doing.
I feel sorry for you working in such an environment, that really sucks.
I have received praise for my review, I have had it mentioned by my manager, including in performance reviews, I have thankfully had leadership emphasise to everyone on the team how much it matters, and I've had great retrospectives on how we can improve the quality and speed of review that resulted in further meaningful improvements.
Changing culture on this is hard, so maybe the answer is just to find a better culture elsewhere, but I can assure you that it does exist.
If your reward system isn't based solely on your performance review, but rather on the speed and quality with which your project is completed, the respect of your peers, and their willingness to engage with you in kind, then absolutely being a timely and considerate reviewer is rewarded.
I've definitely been praised for regularly doing timely reviews, but I imagine your experience is probably more common and why that mindset is widespread.
It depends on the specific coworker I send it to. I have coworkers who will review within an hour without any additional prodding, and coworkers who will take up to a week unless I prod.
In general for small PRs I consider a consistent latency of over 24 hours to be long and mention it negatively in peer review.
Our standard is that when a reviewer is no longer in flow, review should be the next thing they do. The SLO is 1 business day, which is supposed to be the maximum, and the guidance suggests it should be far lower than that. In reality, it varies wildly by team.
I find most of my PRs being reviewed by my team, on the same timezone, are reviewed within a few hours, but I suspect a big part of that is that most of my PRs are very small (they're actually CLs not PRs, and closer to a single commit in some ways, although hard to draw specific parallels to Git).
It depends on the language/ecosystem a bit, but normally I'd try to keep changes under 100 lines for an efficient PR process. Up to 500 lines is manageable but much slower. Over that is just impossible to do good review on.
Something that my previous company had a lot of success with was review by commit. Making sure that the series of commits in a PR "tells a story", with each one being simple and obvious, and then instructing reviewers to review commit by commit. It can speed things up and increase the quality of review substantially, but does require a bit more work from the author.
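Mechanically, a reviewer can then walk the story one commit at a time; a rough sketch, with made-up branch name and commit hash:

    git log --oneline --reverse main..feature-x   # read the story in order
    git show 1a2b3c4                              # review a single commit's diff
    git diff main...feature-x                     # final pass over the net change

GitHub and GitLab both also let you step through a PR commit by commit in the review UI, so the reviewer doesn't have to leave the browser.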
If your PR isn't blocking someone nontechnical then I expect you to use developer tooling (i.e. VCS) to continue working without requiring that your PR be merged immediately, and I'll review the PR the next time I come to a natural break in my work. If it is blocking someone, then I'll review ASAP with consideration to the priority of whatever I'm working on.
I work with a guy who, until I said "calm down", would share his PR links in the team chat several times a day. He refused to learn rebase, or seemingly anything other than a standard Git workflow of checkout, commit, push, merge, repeat. He has since improved on both fronts.
Linus Torvalds famously said you shouldn't rebase shared work, so it's not a clear-cut thing that not knowing how to rebase is bad per se; in his philosophy you shouldn't be using it much in the first place. Refusing to learn is a different story, however.
Specifically he said that changing shared history is bad - that would be the master/main branch or release branches, not e.g. your branch.
Not knowing how to rebase means that in order to stay up-to-date with a shared branch you would have to merge it each time and thus produce something akin to a spruce tree in your commit history.
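Concretely, a minimal sketch of the two ways to sync a feature branch, assuming origin/main is the shared branch:

    git fetch origin
    git merge origin/main    # safe anywhere, but adds a merge commit on
                             # every sync: the "spruce tree" history
    git rebase origin/main   # replays your commits on top of main for a
                             # linear history; fine here because only your
                             # own unshared branch gets rewritten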
I've always established the team rule that most PRs must be reviewed within a day. At the very least, devs should be reviewing any open PRs before they leave or when they first get in.
Obviously, there's going to be exceptions for large changes or big rewrites on the one hand, or for tiny label changes on the other hand.
PS. Reading the comments here, people seem to work in some pretty dysfunctional environments. Get the team together and establish some guidelines that everyone agrees on. Review time should be predictable, and the work should be reasonably shared.
On my team we expect that everyone looks at PRs at least once a day. If something is particularly pressing, they can post a message to our internal team channel for a faster response.
Inside my team, we generally review quite promptly (same day, or the next morning). Across teams — it's more variable and depends a lot on context (or lack of context). If I'm chasing down reviews, I'll usually tag people to start out, and then message them directly to ask for reviews 24 hours later, if they don't get back to me sooner. It's rare that someone doesn't review same-day if I ask them directly.
I often find the reviews to be helpful. People come up with edge cases I haven't thought of, or notice things to clean up. I try to proofread everything before asking for reviews, but after I've stared at my own diff for a while, it's hard to read it with fresh eyes.
Sometimes we ask for reviews on more experimental PRs too, just to get early design feedback before wasting too much time on polishing the changes.
The Firefox team’s code review guidelines recommend:
> Aim for a response within 1 business day. This need not be a full review, but could be a comment setting expectations about when you’ll be able to review the patch, pointing to other folks who might be able to review it sooner or would be more appropriate to review the patch.
https://firefox-source-docs.mozilla.org/browser/FrontendCode...
To speed up reviews that don’t require a particular individual’s approval, some Firefox teams create (Phabricator) reviewer groups for different feature areas (such as graphics or Windows code) so anyone in that group can approve a PR.
The biggest issue with PR reviews in my experience is not how long they take, but the justifications people provide for their feedback. I don't think you should ever leave a comment if you don't have a convincing answer to "why" you left it.
The change request must either make the code objectively better (i.e. performance, memory, security, adherence to a consistent architectural pattern, simplification), or subjectively clearer (it's hard for you to understand what is happening, how it happens, or why it's done this way).
If the reason is solid, then the feedback is usually well received. Except in unfortunate environments where PRs are treated as annoying disruptions. I hope I never have to work in that kind of place.
We expect same day or next day response. Of course only if you're in. But if you're not I might reassign. Also if it blocks me I will ping you in less than a day or straight away.
My most common response is "needs no further review if these points are addressed". On rare occasions my only response is "need more time". Some changes take time to settle in my mind. No good comes from rushed comments.
There is an expectation not to haggle over dumb shit. When you decide to pick a battle, pull in a second reviewer for their opinion.
I'm sure you've heard the "trick" of splitting up large merges to make them easier. Yes this is good up to a point. Other than that, it really is team commitment to timely reviews. No tricks to it.
In my department (DevOps/Infrastructure/CloudOps or whatever you wanna call us folks who write terraform and helm) we have a couple of set standing weekly meetings that we all get together, and one of the frequent things that happens is PRs get brought for group review. So, the longest anyone has to wait is a couple of days.
We also have a dedicated thread for PR review requests in our private team chat channel, which helps bring visibility to our-team-specific PRs that can get lost in the churn of github notifications from all the repos we’re involved with.
on my teams, I've set expectations that people look at PRs between tasks. Don't stop flow for it, but don't jump straight to a new task or come back from a break without making sure nobody is blocked waiting on reviews. If the whole team follows that method, and keeps their PRs fairly small-ish, PRs typically fly through the system just fine, and it also spreads the PRs across the team.
We've gotten to the point at my company where PRs are almost a formality. PR -> merge -> Put on environment for QA. QA finds issues, fix issues with new PR, merge, and so on until we have our release.
My last company was much more methodical with actually reviewing code but honestly? I kind of like the trust in developers and we're able to get code out a lot quicker.
On all the teams I’ve worked on the written standard has been one business day, but most people get to them faster. I always make sure to tell an author if I don’t think I can review their code same day. Just a comment saying “hey this is a big PR and I’m busy, ill get to it tomorrow if nobody else beats me to it”
As a reviewer, I've been guilty of letting a PR sit for a month, mostly because there were higher-priority, more critical things that needed my attention.
Lately, I don't want code to be perfect, because my code never was in the first place. I want it to be just good enough for me or anyone else in the team to iterate on.
Stacked pull requests solve this.
It doesn’t matter how much you wait for review of a PR if you can just build on top of it.
Also, this workflow encourages smaller PRs that are faster and easier to review.
We implement this at my company through a tool called Graphite and review times usually don’t exceed 1-2 days.
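You don't strictly need a dedicated tool to try the workflow; a minimal sketch with plain git, branch names made up:

    git switch -c step-1 main       # first small PR
    # ...commit, open a PR for step-1...
    git switch -c step-2 step-1     # keep building on the unreviewed work
    # ...commit, open a PR for step-2 with step-1 as its base branch...
    # when review fixups land on step-1, restack what's above it:
    git rebase step-1 step-2

Tools like Graphite mostly automate that restacking and keep each PR's base branch pointed at the right place (recent git can also help natively via rebase --update-refs).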
It depends, but if I break them down, then same day. If large, I need to beg.
To me the ideal system would have a reviewer assigned at the start that is expecting to review and you keep them updated on the design.
The median case is probably that the author shares it as ready for review, and so the review is ~immediate.
I hate that. Mine have a much slower time to review; sometimes I eventually give in and share them too.
Typically no more than a day. But let me caveat it by an anecdotal observation that the duration is roughly proportional to the cube of LoC(changes).
Agreed. Many smaller MRs are likely to get the same amount of changes accepted in far less time. Some of the factors include:
* After each change you have to review the whole MR, either because you have to rebuild the context or because it's not obvious how the new changes interact with the old ones. Hopefully the new review won't take as long as the first one, but it's still linear, and the number of change requests is also likely linear in the MR size, so the total is already quadratic.
* More iterations often mean higher review fatigue, at some point I don't want to rehash the same code over and over again, and I tend to postpone bigger MRs in favor of smaller ones.
* Larger MRs are correlated with the author not having a full grasp of what they are doing, and therefore being unable to segment and organize the code properly. Sometimes there are not many alternatives to a large MR, so it's not a hard rule, but there is certainly a decent correlation.
In my experience (admittedly with a company that is not necessarily statistically meaningful), when I began doing code review it helped with my own MRs getting reviewed (and also contributed positively to performance reviews). First and foremost because it helped me get a better understanding of the whole code base, not just the parts that I ended up touching; and then, I suppose, also because it made me more trustworthy to the code maintainers.
The real question is, are you pulling down the code and running the PR locally before approving it, or just looking at the code.
Work that requires coordination of independent schedules takes longer than the theoretical minimum.
It's a queuing problem. See: https://www.johndcook.com/blog/2008/10/21/what-happens-when-...
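A toy illustration of the effect: a single reviewer who is nearly always busy produces wait times far longer than the reviews themselves. A minimal M/M/1-style simulation sketch (Python, numbers made up):

    import random

    def avg_wait(utilization, n=100_000, mean_service=1.0):
        # One reviewer, FIFO queue, exponential arrival and service times.
        # Returns the mean time a PR waits before review starts, measured
        # in units of one review's duration.
        arrival_rate = utilization / mean_service
        now = reviewer_free = total_wait = 0.0
        for _ in range(n):
            now += random.expovariate(arrival_rate)     # next PR arrives
            start = max(now, reviewer_free)             # queue if busy
            total_wait += start - now
            reviewer_free = start + random.expovariate(1.0 / mean_service)
        return total_wait / n

    for u in (0.5, 0.8, 0.9, 0.95):
        print(f"busy {u:.0%} of the time -> avg wait ~{avg_wait(u):.1f} reviews")

The closed-form answer for this model is utilization/(1 - utilization): a reviewer busy half the time costs about one review's worth of waiting; one who is 95% loaded costs about nineteen.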
The slowness of processes that involve other people will frustrate you until you realize it is an ordinary part of the work.
My advice is to behave in a way that makes you the kind of person other people will want to work with. Turn your work around quickly. Be patient with other people's turnaround time.
Good luck.
I totally agree, waiting a long time for PR reviews can be so frustrating! This used to be a pain at my company too, but we've been able to dramatically improve the review time. We target getting a review, on average, within one hour of posting a pull request. Most weeks we now achieve that (or get very close)!
I think it comes down to two main pieces: culture & tooling.
Culture:
- Reviews are part of the job. A solid senior developer isn't just writing code all day -- there are many aspects to the job, and reviews are an important one
- Reviews shouldn't be viewed negatively or as something we "have to do." They're opportunities to learn, teach, collaborate, and improve as a team
- Make small, well-scoped PRs. Do not try to roll a bunch of changes into one PR. It makes it harder to review, riskier to merge, and harder to roll back
- Reviews should be owned by the team. Often there doesn't have to be just one developer who can do a review. If a PR is waiting on a developer who has a morning full of meetings or is fixing a high-priority bug, then another team member should pick up the review
- Prioritize code ready for review over code still being written. Code in review represents significant business effort (customer feedback, planning, design, development), but its value isn't realized until it's merged. Prioritizing reviews can help deliver that value faster
- It's awesome as an author to get a quick review, so "treat others as you'd like to be treated"
- (Likely not feasible everywhere, but helpful for us) Skip reviews for trivial changes like copy updates, test fixes, or non-critical dependency upgrades
Tooling:
- Automate as much as possible. Linting, tests, and other automated checks can reduce manual review effort
- Make sure there's a process for everyone getting notifications quickly. It turned out some devs would use GitHub notifications and only checked those like once a day, while others got real-time Slack alerts (guess which group tends to complete reviews faster; a minimal sketch of the idea follows this list)
- Have a way to set and track your review time target as a team, and try to make an automated way to keep the team accountable and aware of where they're at
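To illustrate that notifications bullet, here's a minimal sketch of the real-time alert idea as a GitHub Actions workflow posting to a Slack incoming webhook (the file path and secret name are made up, and this isn't how our app works -- just the simplest possible version):

    # .github/workflows/pr-review-ping.yml
    name: pr-review-ping
    on:
      pull_request:
        types: [opened, ready_for_review]
    jobs:
      notify:
        runs-on: ubuntu-latest
        steps:
          - name: Post PR link to Slack
            run: |
              curl -sS -X POST -H 'Content-type: application/json' \
                --data "{\"text\": \"Review requested: ${{ github.event.pull_request.html_url }}\"}" \
                "${{ secrets.SLACK_WEBHOOK_URL }}"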
Obviously some of these things aren't easy to change and take time. I'm fortunate enough to be a TL, so I was able to influence my team and the engineering department more directly, but I think a respected developer can bring forward this kind of change too. To also address some of these points for our department, I built a Slack <> GitHub app to help our team move reviews faster. Feel free to try it if you're interested (https://pullpro.dev). I've been working on making it accessible outside our org and would love to hear any feedback if you give it a go!
My team tries to stay on top of code reviews as they come in, but this has been the practice as a very small team that has worked together over the course of three years. Generally we trust that each developer is running automated tests, manually testing and refining in a local deployment, and has the right context and knowledge for what they're changing. I can have code reviews open for hours or days, but it's a red flag if it's more than a few days.
However, we're onboarding a lot of newer developers and actively trying to slow this pace now. CRs need a lot more scrutiny in this case.
The second Friday of every month.