What is the best practice for reviewing source code in a source control repository?


What is the best way to manage reviewed source code in a source control repository? Should the source code go through a review process before check-in, or should the code review happen after the code has been committed? If the review occurs after the code is checked into the repository, how should it be tracked?

Reply:


Google has the best code review practices of anywhere I have ever seen. Everyone I met there agrees on how code reviews should be done. The mantra is "review early and often".

Suppose you are using a process similar to what Graham Lee suggested. (This is a process I have used myself in the past.) The problem is that reviewers are asked to look at large chunks of code. That takes a lot more effort, so it is harder to get reviewers to do it at all, and when they do, it is harder to get them to do a thorough job. Also, when they notice design issues, it is harder to get developers to rework all of their already-working code to fix them. You still catch things, and it is still valuable, but you won't realize that you are missing over 90% of the potential benefit.

In contrast, Google requires a code review of every commit before it can enter source control. Many people naively assume this must be a burdensome process. In practice, it doesn't work out that way. It turns out to be much easier to review small pieces of code in isolation. If design problems are found, changing the design is also much easier, because no code has yet been built on top of it. The result is that it is far easier to do a thorough code review, and far easier to fix the problems that are found.

If you want to do code review the way Google does (which I really recommend), there is software out there to help you do it. Google released its Subversion-integrated tool as Rietveld. Go (the language) was developed using a version of Rietveld modified for use with Mercurial. There is a fork for people who use Git called Gerrit. I've also seen two commercial tools recommended for this, Crucible and Review Board.
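
If you try Gerrit, the basic round trip looks roughly like this (remote and branch names are placeholders): you commit locally as usual, then push to Gerrit's special refs/for/<branch> ref, which opens a review instead of updating the branch directly.

    # Commit locally as usual
    git commit -am "Fix race in session cleanup"

    # Push to Gerrit's review ref; this creates a change for review
    # rather than updating master directly
    git push origin HEAD:refs/for/master

Once reviewers approve the change, Gerrit merges it into the real branch, so nothing lands unreviewed.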

The only version I've used is Google's internal version of Rietveld, and I was very happy with it.


One technique I've used across teams is the following:

  • Developers can commit code into their own branch or local repository without review
  • Developers can merge into the trunk / master without review
  • Code must be reviewed, and the review comments addressed, before the trunk / master can be merged into a release candidate branch

It is the responsibility of the code author to request a review, and of the release branch maintainer to ensure that only reviewed code is merged. A rough sketch of the flow is below.
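
This is a minimal sketch, assuming git and hypothetical branch names; the last step is the one only the release branch maintainer performs, after review sign-off.

    # Developer: commit freely on a private branch, no review required
    git checkout -b feature/session-timeout
    git commit -am "Handle idle session timeout"

    # Developer: merge into master, still without review
    git checkout master
    git merge feature/session-timeout

    # Release maintainer: merge master into the release candidate branch
    # only once the changes have been reviewed and the comments addressed
    git checkout release-candidate
    git merge master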

There are tools that help with code review, but I've never used them. The repository itself can be used to track who performed the review for a given merge; I used svn properties and Perforce jobs associated with commits to record who reviewed what.
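
In Subversion, for example, you can hang the reviewer's name on a revision as an unversioned revision property. A sketch, with a property name of my own invention (note that changing revision properties requires the repository's pre-revprop-change hook to be enabled):

    # Record who reviewed revision 1234 (the property name is a local convention)
    svn propset --revprop -r 1234 review:by "alice" https://svn.example.com/repo

    # Look the reviewer up later
    svn propget --revprop -r 1234 review:by https://svn.example.com/repo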







I've never split code out for review based on any fixed set of criteria. The only criteria I've come across are green unit tests and integration tests.
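
If you want that gate enforced mechanically rather than by convention, one option is a client-side hook that blocks a push while the tests are red. A minimal sketch, assuming git and hypothetical make targets:

    #!/bin/sh
    # .git/hooks/pre-push (mark it executable): refuse to push
    # unless the unit and integration suites are green.
    make unit-test && make integration-test || {
        echo "Tests failing; push aborted." >&2
        exit 1
    }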

For tracking, I would recommend extending the workflow in your favorite issue tracker. For example, instead of:

  • Product Owner -> Analyst -> Developer -> Quality Assurance -> Release Engineer

You may want to introduce another stage (review):

  • Product Owner -> Analyst -> Developer -> Reviewer -> Quality Assurance -> Release Engineer

Then, for each ticket in the "Implemented" status, assign a reviewer, and only reviewed tickets are forwarded to quality assurance.


I've only had one code review experience, so I can't say how good it is.

I worked with a small group of programmers (~10-15) and we used Visual Studio Team Foundation Server. We were asked to commit code about once a day, and to have it reviewed by someone else in the group (ideally someone who was also involved in the project) before each check-in. During the check-in, the reviewer's name was also recorded in a field.
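
With TFVC's command-line client, that field can be filled in as a check-in note; a sketch, assuming a "Code Reviewer" note is defined in your team project settings:

    tf checkin /comment:"Handle idle session timeout" /notes:"Code Reviewer"="Alice"

The point is simply that the reviewer's name travels with the changeset, so it can be queried later.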




I worked on a team that reviewed everything checked in, change by change, in a few review sessions a week. This meant we weren't always up to date with code reviews, but it achieved what we set out to do.

So first ask what you want to achieve by reviewing the code. In our case, it wasn't about catching idiot developers; there was a presumption of competence rather than a presumption of incompetence. It was about letting the team look into other areas of the system and correct some questionable design decisions before they were set in stone. By questionable, I mean that there is always more than one way to skin a cat, and not everyone knows that there is already a cat-skinning knife in the toolbox, so to speak.


The way we approached code reviews was to route every task through our project tracking software. At the time we were using Mantis and SVN. Our commits were tied into both systems: each commit had to reference a task in Mantis. When the task was completed, it was given the status "Ready for Review" (RFR).
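
The commit-to-task link typically comes from an issue reference in the commit message that Mantis's source-control integration picks up; the exact pattern depends on how that integration is configured. For example (issue number hypothetical):

    svn commit -m "Refactor session cleanup; fixes issue #1234"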

RFR items were then either picked up by someone who had time for reviews, or assigned to a specific person for review. On Fridays, all RFR items had to be reviewed before the end of the day, so that nothing carried over into the following week.

The only problem we encountered with this process was large tasks touching tons of files. To handle those, the author and the reviewer would sit down together, and the author would walk through the changes until the reviewer understood them. In effect, they did the code review together.

That process fell apart when management mandated that pair-programmed code did not require a separate code review. Developers got careless about the process, and silly little bugs crept in. Eventually we went back to the original process and things came back together.


In my team we have been doing reviews this way for about a year, and it seems to be working very well.

Our organization uses Perforce for version control. Perforce (as of about a year ago) includes a feature called shelving. With shelving, I can "shelve" my changes for a given issue: they are stored in the version control system but are not checked in. I then ask another developer on my team to review the code.

The other developer can view my shelved changes in Perforce from their own machine and diff them against the latest revisions. He can also unshelve them into his own workspace if he wants to try my changes out. When he finishes the review, he lets me know, and I check in my code with "Reviewed by Bob" at the bottom of the comment.
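
In Perforce commands the round trip looks roughly like this (the changelist number is hypothetical):

    # Author: shelve the pending changelist; the files are stored
    # on the server but not submitted
    p4 shelve -c 1234

    # Reviewer: read the shelved diffs, or pull them into their own workspace
    p4 describe -S -du 1234
    p4 unshelve -s 1234

    # Author: after sign-off, discard the shelf and submit as usual
    p4 shelve -d -c 1234
    p4 submit -c 1234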

This has worked out very well for us. First, code reviews in general have proven extremely helpful. Beyond that, Perforce shelving lets us do the reviews without checking anything in and without major friction, even though our team is geographically distributed - which is very important for us. And it works great.
