At Audacia all code is reviewed by at least one other person before it goes through any other form of testing. It is the first quality gate at which potential issues can be raised and addressed.
Code reviews at Audacia, as at most companies, are conducted primarily online via pull requests.
Having a set of code review guidelines that encourages kindness can help to foster a more inclusive and collaborative working environment.
At Audacia we hire developers and testers at every stage of their career, from entry-level employees with no experience, to industry veterans with 10+ years under their belt. These guidelines help our junior employees by creating a safe space where they can learn without some of the more destructive code review practices that occur elsewhere in the industry. They also provide opportunities for senior employees to grow as mentors and leaders.
We designed these guidelines to cover how authors should prepare and publish code for review, and how reviewers should conduct themselves when reviewing someone else's code.
Always ensure that the code builds and all tests pass before submitting a pull request.
Pull request build validation, which forms part of our branch policy guidelines, should also be in place.
A self-review of a pull request can often find issues such as typos, commented out code and formatting errors.
If there are changes that you're not sure about, or would like a reviewer to take a closer look at, then you can notify the reviewer by adding comments in the appropriate places.
The larger a pull request, the less likely reviewers are to spot issues. One study found that the upper limit of lines of code that can be effectively reviewed at one time is around 400, as the brain's ability to process information (and therefore find defects) diminishes beyond this point.
Small pull requests are easier to achieve when user stories are broken down effectively; however, even for large user stories, submitting multiple pull requests is almost always possible.
One way to think about this is to group only related changes into a single pull request. For example, if there are linting fixes alongside functionality changes for a particular card, then you should submit them as separate pull requests.
The pull request title and description should be used to provide context to the reviewer as to the purpose of the change. We also have commit naming guidelines, which lead to more meaningful commit messages that can then form the basis of the pull request description.
Styling/UI changes are extremely hard to review purely by looking at the code. Screenshots and/or GIFs actually showing what the result of the change looks like in the application can be invaluable.
It can be very tempting to resolve comments yourself when implementing a suggested change, especially for trivial changes such as typos. However the only person who can judge whether something is resolved is the person who created the comment in the first place, i.e. the reviewer.
If a reviewer is happy to let the author resolve a comment then they should communicate that in the comment itself, e.g. by adding something like "Happy to let you resolve this yourself once it's addressed".
It's easy to dive straight into a code review by going through line-by-line, looking for issues, without ever considering what the code change is actually supposed to accomplish. You should always read the user story or bug linked to the PR, understand the AC/repro steps, and check that the change meets the functional requirements.
You should also always consider non-functional requirements, including but not limited to:
- Security (e.g. does the code adhere to our secure coding standards?)
- Usability and user experience (e.g. have the appropriate UI elements been used?)
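As a concrete illustration of the kind of security issue a reviewer might look for, consider SQL built by string concatenation. This is a generic sketch (the class, table and column names are invented for the example, not taken from our actual coding standards):

```java
public class SqlReviewExample {
    // A security-minded reviewer should flag query building like this:
    // user input is concatenated straight into the SQL string.
    static String buildQuery(String name) {
        return "SELECT * FROM Users WHERE Name = '" + name + "'";
    }

    public static void main(String[] args) {
        // A crafted input escapes the string literal and changes the
        // query's meaning - a classic SQL injection.
        System.out.println(buildQuery("x' OR '1'='1"));
        // prints: SELECT * FROM Users WHERE Name = 'x' OR '1'='1'
        // The fix to suggest in review: a parameterised query (e.g. a
        // prepared statement), so input can never alter the query structure.
    }
}
```

Spotting this in review is far cheaper than finding it in a penetration test later.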
Phrase suggestions as questions rather than demands. This is much less confrontational, and the dialogue it prompts can reveal the intention behind certain decisions. Reviewers usually don't have the full context in which choices were made, so don't assume that you know best. For example:
Bad: "This method should be renamed."
Good: "What do you think about renaming this method?"
As a reviewer, you will likely have to maintain the code being reviewed in future, so it's important that you understand what it's doing and why. As a junior tester or developer it is easy to assume that more senior colleagues are right and their code must be perfect, but this is definitely not true! Everyone makes mistakes, so go into the review with an open mind.
If you don't understand the code then ask questions: it may be that comments or documentation is needed, or that methods and variables could be renamed to add clarity.
Phrase feedback using 'I' or 'me', rather than 'you'. For example:
Bad: "You implemented this in a really confusing way."
Good: "I didn't understand this implementation."
Phrase feedback to make the code the focal point, not the person who wrote the code. For example:
Bad: "You wrote a race condition here."
Good: "The code has a race condition here."
Ensure your starting position is "there is a valid reason this has been done in this way". As mentioned above, reviewers very rarely have the full context behind decisions made when writing the code being reviewed, therefore giving the author the benefit of the doubt rather than assuming they're wrong and you're right is always the best approach. If you still think something is worth commenting on then ask questions about the code using I-messages.
Not every solution will be exactly what you would have implemented. Most of the time this is ok, and asking someone to change code purely to fit your personal preference is unreasonable. Prefer compromise and pragmatism over rigidity and pedantry.
It is important to realise that any suggested change has both a benefit and a cost to it. The benefit of making a change is generally that the codebase will be slightly improved as a result. The cost is usually twofold: the change will take time, often involving context switching for the author; and unless it is managed sensitively, the process can have a negative impact on the confidence and overall wellness of the author. Always weigh up whether the change you are suggesting is really worth it.
If you are too critical then the author will likely be less receptive to feedback. This can mean that important feedback is not taken on board as the unimportant nitpicking has annoyed them to the point of them shutting down. As a reviewer you need to pick your battles, and sometimes that means letting the smaller things go.
If you must raise something, then signify it's a nitpick by prefixing the comment with nit:. For example:
Bad: "Could this extra whitespace be addressed?"
Good: "nit: Could this extra whitespace be addressed?"
On a similar theme, if a pull request contains the same mistake in several places (for example, missing braces in an if statement), don't add a comment for every single instance. Instead add a single comment making the author aware of the issue, or better yet speak to the author to explain the problem. Firing off tens of comments about what is essentially the same thing is a waste of time.
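The "missing braces" example is worth making concrete, since it shows why a single well-explained comment beats many repetitive ones. A minimal sketch (the method and names here are invented for illustration):

```java
public class MissingBraces {
    static int negatives = 0;

    // The indentation suggests both statements are guarded by the if,
    // but without braces only the first statement actually is.
    static void tally(int value) {
        if (value < 0)
            System.out.println("saw a negative: " + value);
            negatives++;  // runs on every call, not just for negative values
    }

    public static void main(String[] args) {
        tally(5);
        tally(-3);
        // negatives is now 2, even though only one input was negative
        System.out.println("negatives counted: " + negatives);
    }
}
```

One comment explaining this behaviour (or a quick face-to-face chat) lets the author fix every occurrence themselves.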
You can ask yourself three questions before submitting feedback:
- Is it true?
- Is it necessary?
- Is it kind?
Is it true?
Don't make statements that are not actually facts; instead rephrase them as questions. For example:
Bad: "Never use a singleton."
Good: "Could you explain why a singleton has been used?"
If something is a fact, then back it up with a source. For example:
Bad: "Don't use camel case for method names."
Good: "Our C# coding standards say to use Pascal case for method names."
One rule of thumb is to avoid using the word just. It can come across as meaning "why didn't you think of this yourself?". For example:
Bad: "You can just use a constructor for this."
Good: "Have you considered using a constructor for this?"
Another word to avoid is obviously, as what's obvious to you is not necessarily obvious to everyone. For example:
Bad: "It's obviously a good use case for a do...while loop."
Good: "This might be a good use case for a do...while loop. What do you think?"
There comes a point in the back-and-forth of pull request comments where you should just speak to the other person. Often explaining feedback face-to-face is much more efficient, as you can more easily provide context and examples, and can even collaborate with the author on a solution.
Asking the author to fix or refactor some code unrelated to their feature or bug fix "while they're at it" is a technical form of scope creep. It slows down the team and leads to issues with estimates and velocity. Instead raise a new card to deal with the issue that you've spotted.
Online communication is fraught with the potential for misunderstandings, stemming from the fact that we can't read facial cues or expressions. There are a couple of things we can do to reduce the risk of comments being misinterpreted.
Informal language or abbreviated words can sometimes come across as abrupt or even aggressive. Using correct grammar and punctuation can lead to clearer communication with less room for ambiguity.
Bad (the comment could come across as abrupt): "k."
Good (less abrupt; could be further softened with an emoji - see below): "Ok, that's fine."
Using positive emojis (such as a smiley face) can help to clarify the intent behind a comment or remark. For example:
Bad (the comment could come across as abrupt or hostile): "I don't mind, the method can stay as it is."
Good (clear the intent behind the comment is friendly): "I don't mind, the method can stay as it is 😃"
Note: don't use negative emojis to signify dislike of something - use your words.
Don't focus solely on potential changes; it's important to also mention things you like or learnt from. Aim for at least one positive comment per pull request.
Praise doesn't need to be reserved for large or groundbreaking changes; it should be given for a wide range of things. For example, if an author has tended to make a similar kind of mistake across multiple pull requests and you are reviewing code that eliminates that mistake, then that warrants praise, however small the change may be.
Positive feedback is particularly important for junior testers and developers, as it helps to build confidence and reinforce learning, so pay particular attention to providing it.
If a pull request is all good with no feedback to raise, then add a single comment to say so, e.g. "Good job!" or "Great work!".
Upon completion of a code review you should perform one of three actions on the pull request:
- 'Approve': everything is ok and the pull request can complete (if this is the first round of the review, i.e. no feedback has previously been raised, then this should be accompanied by a comment - see give praise)
- 'Approve with suggestions': comments raised are optional (e.g. nitpicks) and the author can complete the pull request without them being addressed if they choose to
  - In this case it's implied that the author can resolve comments themselves if they're not going to be actioned (an exception to the rule that only the reviewer resolves comments)
- 'Wait for author': at least one comment raised must be resolved before the pull request is completed
The 'Reject' action should never be used. If you think a pull request is fundamentally flawed then have a conversation with the author to discuss how to progress and, if it is mutually agreed that the changes should be started again, then the author should abandon the pull request.
Test automation code (e.g. UI or API automation) is typically written and maintained by testers, but it is common for developers to review and provide feedback on this code. Developer input is actively encouraged and valued, but because the code is owned by the test team there are some additional things to bear in mind:
- Try to read the code from the point of view of the tester
  - Test automation code must, first and foremost, be readable and understandable
  - Some refactorings and abstractions can actually make the code harder to work with rather than easier
- Don't take over - the code is owned by the tester
  - In particular, things like test titles/method names, comments and the high-level structure of the test (i.e. what steps it takes in what order) are purely the domain of the tester
- Suggest potential improvements, but always provide an explanation, and remember it's up to the tester whether they implement the suggestions
- All automated test code must also be reviewed by another tester
In this blog post we have covered our code review guidelines. By following these steps, we can ensure we focus on supporting code authors with an aim to make the whole review process more valuable. One of our core company values is 'single team', which means that we actively promote collaboration and break down silos wherever possible. Following these guidelines aligns with this value by making code reviews more collaborative and encouraging team members to work together.
Further reading on code reviews, including the sources for most of the guidelines above, can be found in the following places:
- Philip Hauer's Code Review Guidelines
- Unlearning Toxic Behaviours in a Code Review Culture
- Curated List of Code Review Articles
- Compassionate Coding
- Michaela Greiler's Blog (particularly Code Review Best Practices, Code Review Pitfalls and Don't Teach During Code Reviews)
- The Ten Timeless Commandments of Egoless Programming
- GitLab's Code Review Rules
- Google's Code Review Guide
- Tips for Great Code Reviews
- Plaid's Code Review Culture
Audacia is a software development company based in the UK, headquartered in Leeds. View more technical insights from our teams of consultants, business analysts, developers and testers on our technology insights blog.