I review a lot of code with the mindset of yes and…
Basically, when I start, the PR is approved in my head until I find something blocking, i.e. a major problem that causes data loss, a big performance issue, or something that breaks other code. Anything else is a minor comment at most. The PR is approved by default. This gives the dev responsibility and ownership, and it increases release cadence.
Doing a follow-up PR to improve or fix something is just as fast as blocking the PR, but blocking can be bad for morale.
This strategy might work better in young startups where having the feature _exist_ is better than not shipping.
In my experience this builds up responsibility and ownership and removes the whole "policing other people's work" vibe around code review.
Also, decisions and discussions around formatting, when to test, design, features, functionality, and architecture should not happen during code review. They should have happened well before coding, or collaboratively while working. Code review is the worst time for that stuff; imho it should be there to sanity-check the implementation.
I used to do a lot of this in a small team where we didn't block on reviews (we did reviews but didn't block). I was a senior developer on the team and I'd take time to read through new sections of code that came in. That worked pretty well.
Interestingly enough, this bit of code/project, which didn't have super strict code review requirements but had a lot of tests, is the code I worked on that I would consider the most robust/high quality. It was run by > 10 million users in a fairly important application. It wasn't huge and it had good tests (and was generally quite testable).
That said, it's really hard to control review-after-commit. Maybe we need better tooling for that. In my case, for the areas of code I was involved in, it was small enough to track in my head.
I'd love to explore that alternative, but I'm not sure how to actually make sure that any errors found or changes suggested after the commit is pushed and deployed actually get addressed.
I really like this idea! It's not like I need to check how any individual developer approaches their work (although that could become a useful mentoring session in some cases); what matters is what the code looks like before going into production.
The main difficulty I see with the described approach is that different changes will be interleaved in the trunk and it might be hard to extract just one of them to deploy. But that's what feature flags are for!
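To illustrate the feature-flag point: a minimal sketch, assuming a hypothetical `FLAGS` dict and `is_enabled` helper (real setups would use a flag service or config system, and all the names here are made up). A half-reviewed change can sit merged in trunk but stay dark in production until its flag is flipped.

```python
# Minimal sketch of gating an in-trunk change behind a feature flag.
# FLAGS, is_enabled, and the checkout functions are all hypothetical names.

FLAGS = {
    "new_checkout_flow": False,  # merged to trunk, but off in production
    "faster_search": True,       # fully rolled out
}

def is_enabled(flag: str) -> bool:
    """Return whether a flag is on; unknown flags default to off."""
    return FLAGS.get(flag, False)

def legacy_checkout(cart):
    return f"legacy:{len(cart)} items"

def new_checkout(cart):
    return f"new:{len(cart)} items"

def checkout(cart):
    # The new code path ships with every deploy but only runs when flagged on,
    # so post-merge review feedback can land before anyone sees it.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

This way "extracting one change to deploy" becomes flipping one flag rather than cherry-picking commits out of an interleaved trunk.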
Code still gets reviewed, but you don't end up with PRs languishing for hours, days, or even weeks waiting for a review from someone.