Test-Driven Development (TDD) Principles
The following text, from an MSDN article, is the best description I have found of the steps required for test-driven development using Microsoft Visual Studio. It is not a walkthrough; instead, it expresses the principles:
- Understand the requirements of the story, work item, or feature that you are working on.
- Red: Create a test and make it fail.
- Imagine how the new code should be called and write the test as if the code already existed. You will not get IntelliSense because the new method does not yet exist.
- Create the new production code stub. Write just enough code so that it compiles.
- Run the test. It should fail. This is a calibration measure to ensure that your test is calling the correct code and that the code is not working by accident. This is a meaningful failure, and you expect it to fail.
- Green: Make the test pass by any means necessary.
- Write the production code to make the test pass. Keep it simple.
- Some advocate the hard-coding of the expected return value first to verify that the test correctly detects success. This varies from practitioner to practitioner.
- If you’ve written the code so that the test passes as intended, you are finished. You do not have to write more code speculatively. The test is the objective definition of “done.” The phrase “You Ain’t Gonna Need It” (YAGNI) is often used to veto unnecessary work. If new functionality is still needed, then another test is needed. Make this one test pass and continue.
- When the test passes, you might want to run all tests up to this point to build confidence that everything else is still working.
- Refactor: Change the code to remove duplication in your project and to improve the design while ensuring that all tests still pass.
- Remove duplication caused by the addition of the new functionality.
- Make design changes to improve the overall solution.
- After each refactoring, rerun all the tests to ensure that they all still pass.
- Repeat the cycle. Each cycle should be very short, and a typical hour should contain many Red/Green/Refactor cycles.
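The Red/Green/Refactor cycle above can be sketched in any unit testing framework. Here is a minimal illustration in Python's `unittest` (the `Calculator` class and its `add` method are invented for this example; in the article's context you would use a Visual Studio test project instead):

```python
import unittest

# Red: the test is written first, as if Calculator already existed.
# (At this point you get no IntelliSense/completion, because it doesn't.)
class CalculatorTests(unittest.TestCase):
    def test_add_returns_sum(self):
        calc = Calculator()
        self.assertEqual(5, calc.add(2, 3))

# A stub such as `def add(self, a, b): raise NotImplementedError`
# is just enough to run the test and watch it fail meaningfully --
# the calibration step described above.

# Green: the simplest production code that makes the test pass.
class Calculator:
    def add(self, a, b):
        return a + b

# Refactor would follow: remove duplication, improve the design,
# and rerun all tests after each change.
```

The test is the objective definition of "done": once `test_add_returns_sum` passes, no further speculative code is written until another test demands it.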
I also find the following advice, from http://msdn.microsoft.com/en-us/library/ms379625.aspx, quite useful:
- Avoid creating dependencies between tests such that tests need to run in a particular order. Each test should be autonomous.
- Use test initialization code to verify that test cleanup executed successfully, and re-run the cleanup before executing a test if it did not.
- Write tests before writing any production code implementation.
- Create one test class corresponding to each class within the production code. This simplifies the test organization and makes it easy to choose where to place each test.
- Use Visual Studio to generate the initial test project. This will significantly reduce the number of steps needed when manually setting up a test project and associating it to the production project.
- Avoid creating machine-dependent tests, such as tests that depend on a particular directory path.
- Create mock objects to test interfaces. Mock objects are implemented within a test project to verify that the API matches the required functionality.
- Verify that all tests run successfully before moving on to creating a new test. That way you ensure that you fix code immediately upon breaking it.
- Maximize the number of tests that can be run unattended. Make absolutely certain that there is no reasonable unattended testing solution before relying solely on manual testing.
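As a sketch of the mock-object advice above: a mock is implemented inside the test project and records how it was called, so the test can verify that the production code uses the interface as required, without any real external dependency. The `Notifier` class and mail-sender interface below are hypothetical examples, not from the article:

```python
class Notifier:
    """Production code: depends only on the sender's interface,
    not on any concrete mail implementation."""
    def __init__(self, sender):
        self.sender = sender

    def notify(self, recipient, message):
        self.sender.send(recipient, message)

class MockMailSender:
    """Mock object living in the test project: records calls so the
    test can verify that the API matches the required functionality."""
    def __init__(self):
        self.sent = []

    def send(self, recipient, body):
        self.sent.append((recipient, body))

# The test verifies the interaction, not a real mail server -- which
# also keeps the test autonomous and runnable unattended.
mock = MockMailSender()
Notifier(mock).notify("alice", "build failed")
assert mock.sent == [("alice", "build failed")]
```

Because the mock removes the external dependency, tests like this can run unattended on any machine, in any order.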
For a walkthrough of creating a unit test using VSTT, refer to the following articles:
- A Unit Testing Walkthrough with Visual Studio Team Test
- Unit Testing and Generating Source Code for Unit Test Frameworks Using Visual Studio 2005 Team System