As funny as it sounds, the best code reviews are the ones that actually get done.
One of the most touted advantages of open-source and free software is peer review. “Many eyes make all bugs shallow” is the standard refrain. This assumes that you actually have many eyes looking at all portions of the code and that those eyes know what to look for. Code reviews are like sex: just about anyone can do it, but skill and training can make you a lot better at it.
For both of the programmers who have not already heard the lecture, here are the advantages of doing code and design reviews:
The earlier bugs are found in the life cycle of a product, the cheaper and easier they are to fix.
If someone else looks at your code or design, they are likely to find mistakes you missed.
When you know that someone else is going to be looking at your code, you are a lot more likely to tidy it up and make sure there is accurate and up-to-date documentation.
You can learn a lot by reading other people's code.
Having more than one person who is familiar with a program is the best insurance against “Mack Truck Syndrome”: the situation where the only person who understands the software gets hit by a truck, leaves the company or is for some other reason not available for consultation.
It can be a means of establishing quality metrics, so one can measure the effectiveness of different quality processes.
The process of explaining your software to someone else can help you actually review your own program, rather than just looking at it and seeing what you expect or want to see.
When done right, code and design reviews can save time and improve quality over the entire project life cycle.
The disadvantage of code reviews is that they take time, not only from the person actively working on the project, but also from other people who are usually under deadline pressure themselves. Despite numerous studies showing that the overall number of human-hours spent on a project is lower when reviews are properly done, there is the constant temptation to bet that there really aren't any problems, which means waiting until the code is written and being debugged to try to find them.
I'm nearly done preaching about why peer reviews are a good thing. I will say that the most important thing about reviews is that they actually get done. A quick and dirty review that finds only a third of the bugs is more effective than a thorough, exhaustive review that no one actually performs.
There are two basic types of reviews: walk-throughs and formal inspections. In a walk-through, the author takes one or more colleagues on a tour of the document under review. In my experience, about 80% of the errors found in a walk-through are actually found by the author in the process of explaining the document. Eli Weber, a former colleague, said “If I could talk to the wall as if it were a person, I wouldn't need someone else in order to do a code review.”
In a formal inspection, one or more people are given a document, design or program to review. Oftentimes each person concentrates on different things: adherence to style or programming standards, logical errors, completeness of documentation and so on. It is common practice to make checklists of what each person is to look for, such as coding style rules, common mistakes and potential security holes.
There is no problem with someone finding an error they weren't concentrating on. For example, if the person checking for compliance with coding standards notices an AND operator being used in place of an OR, they should definitely note it. But not every person is trying to make sure all aspects of the document are correct. When a reviewer finds a problem, they note its location and its severity. Priorities range from “something you might consider thinking about changing if you find yourself with absolutely nothing to do and the boss is in a bad mood” to “if this error is not fixed immediately it will set in motion a chain of events leading to the end of civilization as we know it”.
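To make that concrete, here is a tiny, invented C fragment of the sort a reviewer with “logic errors” on their checklist might flag; the function and its arguments are hypothetical:

    #include <stdbool.h>

    /* Hypothetical access check, invented purely to illustrate the slip:
     * the intent is to deny access if EITHER test fails. */
    bool allow_access(bool authenticated, bool authorized)
    {
        if (!authenticated && !authorized)   /* bug: && where || was intended */
            return false;                    /* denies only when BOTH fail    */
        return true;
    }

A checklist item as simple as “re-read every compound conditional aloud” makes slips like this one easy to catch.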
Once everyone has reviewed the document, a meeting is held to discuss the errors found. The errors can be discussed in just about any logical order: by page of the document, by severity of error, or reviewer by reviewer, with each listing all of the issues they found. This meeting is usually run by the author of the document, but it could be run by anyone. As each issue is mentioned, the people at the meeting come to a consensus on its severity. Often a reviewer may not be able to assess the severity of an issue but may just have a hunch that something is not as it should be, and will ask the other attendees for their opinions. It is crucial to remember that the point of the meeting is not to solve the problems but merely to log them. I know full well how hard it can be to keep a room full of engineers from immediately trying to solve an interesting problem, but the facilitator must keep the meeting on track, or it will go on for weeks and will cost more than it saves.
This, at least theoretically, is how a code review works in an industrial or educational setting, where there are several people in close proximity.
But what about the open-source world, where the people working on a project are not likely to be in the same country, much less the same city? There are some tightly managed projects with mailing lists where someone will propose a change and anybody on the list can take a look at it at their leisure. If they notice something they think may be a problem (“You infidel, you indented by three columns when by everything right and holy you should indent by two!”), they can send e-mail to either the list or the author for calm and rational discussion. Other projects follow the model where someone needs a program to do something and, not finding it already available, writes it themselves. Once it is working to their own satisfaction, they make it available to the public, possibly posting it to freshmeat or another similar forum. Sometime later, someone else needs a program to do the same, or a similar, task. This time they find the one that was just written, download it and discover it doesn't run. They figure out where it is crashing, find the offending code and, motivated by the kindness of their hearts and goodwill toward their fellow geek, inform the original author of the bug and send a suggested patch.
Notice that in neither of these scenarios is there any guarantee that if you write software it will be reviewed by someone, or, if it is, that it will be reviewed soon enough to be useful. How can a conscientious programmer make sure that their code is not only reviewed, but reviewed in time to be useful?
Most geeks have geek friends. Writers often form groups that meet once a week to review and critique each other's stories. The same could be done for software. Participants could e-mail the design or code to be reviewed sometime before the meeting. Using the formal inspection methodology, they could use the meeting time to collect notes about issues found. Alternatively, when a design or program is ready for review, an e-mail can be sent to a mailing list inviting geeks to meet at Hans' Pizzhaus for pizza, beer and a code walk-through.
If there isn't anyone locally to help, most of the work of a formal inspection can be accomplished via e-mail. The key is to get a group of people who will actually review each other's code rather than just quickly glance over it. Likewise, most of the errors found during a walk-through are found by the author while explaining the code. If there isn't anyone around you can convince to sit down and listen while you explain your software, and if your cat does not possess the raw programming talent of Richard Stallman and Dennis Ritchie combined, you can perform a walk-through with an “internet audience” by thoroughly documenting your software. The process of sitting down with your program and writing a document to explain what it is doing forces you to look at it from a perspective that lets you find annoying bugs like a misplaced parenthesis.
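As an example of the sort of misplaced parenthesis that explaining your code out loud tends to expose, consider this classic (and here entirely hypothetical) C fragment:

    #include <stdio.h>

    int main(void)
    {
        int c;

        /* Bug: != binds more tightly than =, so c is assigned the result
         * of the comparison (0 or 1) instead of the character read. */
        while (c = getchar() != EOF)
            putchar(c);

        /* Intended: while ((c = getchar()) != EOF) */
        return 0;
    }

The compiler accepts it happily; a careful reading, by you or by someone you are walking through it, does not.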
Doing a code review can significantly improve the security of a product. First of all, if other people look at a program it's a lot harder to leave back doors in it, such as “if someone logs in with the username of f00Bidoo, automatically grant root privileges”. However, most exploits, especially of open-source software, are written by probing or searching for known classes of errors—sort of a “malicious code review”. In order to close such holes, it is necessary to find them before someone with unfriendly intentions does.
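Purely for illustration, a back door like the one described might be only a couple of lines buried in a login routine; the function names here are invented, not taken from any real program:

    #include <string.h>

    /* Hypothetical login routine; the magic username and the helper
     * verify_password() exist only for this sketch. */
    int verify_password(const char *user, const char *password);

    int check_login(const char *user, const char *password)
    {
        if (strcmp(user, "f00Bidoo") == 0)
            return 0;     /* back door: grant access, skipping the password check */

        return verify_password(user, password);
    }

Two lines like these are trivial to slip past a compiler, but much harder to slip past another person reading the code.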
Searching for bugs in the code is, as mathematicians would say, necessary but not sufficient for security. OpenBSD addresses security through aggressive code reviews. The security team at OpenBSD also does tons of research. Theo de Raadt says, “In fact, it is what we focus on. Trying to find NEW kinds of programmer errors, in old code. Code that exists, which has common mistakes that people never think of. Like fd_set overflows for instance. In OpenBSD, buffer overflows have basically been extinct for about three years.” For more information about the OpenBSD philosophy and goals regarding security, check out www.openbsd.org/security.html.
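For readers who have not met the fd_set problem Theo mentions, here is a minimal sketch (the helper function is invented) of the class of error: FD_SET() writes into a fixed-size bit array, so a descriptor numbered FD_SETSIZE or higher silently scribbles outside it.

    #include <sys/select.h>

    /* Hypothetical helper for adding a descriptor to a select() set.
     * fd_set is a fixed-size bit array of FD_SETSIZE bits, so calling
     * FD_SET with a larger descriptor number writes past the end of it. */
    int add_to_set(int fd, fd_set *set)
    {
        if (fd < 0 || fd >= FD_SETSIZE)   /* the bounds check that is easy to forget */
            return -1;
        FD_SET(fd, set);
        return 0;
    }

The missing bounds check is exactly the kind of pattern a security-minded reviewer hunts for across an entire tree of old code.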
In summary, most of the errors found in a code review are found because someone actually looks at the code, whether it is the peer reviewer or the author in the process of explaining it to someone else. Likewise, it is better to have a modest inspection methodology that actually gets implemented than a grandiose scheme for which there is never quite enough time. When one is concerned about security, there are tremendous gains to be had by carefully checking the code for errors, but that is not sufficient. One must also look at the code with the eye of someone who wants to compromise the security of the system: not only for cases where the code won't do what it was supposed to, but also for cases where it can be made to do things it was never supposed to do.
I'd like to thank Theo de Raadt for his invaluable assistance in discussing OpenBSD and reviewing code for security issues.