Thanks to everyone who joined us for the first event in our TDD 101 Learning Series. If you missed the event, take 20 minutes to watch the Intro to TDD recording. We had some great attendees who asked several interesting questions, not all of which we were able to address during the event. To follow up, and to invite others to participate, we’ve included a complete Q&A below.
Intro to TDD Q&A
Who writes and owns unit tests—the development team or the test/QA team?
TDD is a programming technique, so the development team writes and owns the unit tests.
How can I measure the quality of unit tests?
The real quality of the unit tests can only be observed over time. Do the tests uncover issues during development instead of during testing? Do the unit tests define the correct behavior? Are there unit tests that test good data, bad data, and invalid/null data? Make sure you attend our next TDD webinar, Writing Good Unit Tests, on July 13 for more information.
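To illustrate that last point, here is a minimal C++ sketch of one focused test per data category. The function and test names are invented for this example, not from the webinar:

```cpp
#include <cassert>
#include <optional>
#include <string>

// Hypothetical function under test: parse a non-negative integer.
std::optional<int> parsePositiveInt(const std::string& s) {
    if (s.empty()) return std::nullopt;               // invalid/null data
    int value = 0;
    for (char c : s) {
        if (c < '0' || c > '9') return std::nullopt; // bad data
        value = value * 10 + (c - '0');
    }
    return value;                                     // good data
}

// One focused test per data category.
void testGoodData()    { assert(parsePositiveInt("42") == 42); }
void testBadData()     { assert(!parsePositiveInt("4x2").has_value()); }
void testInvalidData() { assert(!parsePositiveInt("").has_value()); }
```

Each test exercises exactly one category, so a failure immediately tells you which class of input the code mishandles.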
Can we use TDD for code coverage? Or does the test team need to invest time in automation for code coverage?
The goal of TDD is clean code that works. A side effect of TDD is a high percentage of code coverage. Theoretically, if you write a unit test for every change, then you should have 100% code coverage, even though that’s not the goal of TDD. If you have a code coverage goal, you’ll still need a tool to examine the code coverage. If code coverage isn’t at your goal, you’ll need to add tests or work with development to determine why there are changes without unit tests.
Are unit tests only suitable for classes and how would you approach applying unit tests to a legacy system?
In TDD, a unit test must be created every time you make a change (classes, methods, etc.). For information about applying unit testing and TDD to a legacy system, check out our Beginning Test-Driven Development in a Legacy System webinar on August 3.
“A single unit test should not verify too much functionality.” How much is too much?
If the test fails, you should immediately know what part of the code the bug is in. If not, you are verifying too much functionality. Some have argued that each unit test should have only one ‘Assert’. While this is extreme, if you find yourself with more than a handful, or if your Asserts are verifying unrelated logic, then consider splitting the test into multiple tests. Check out our Writing Good Unit Tests webinar on July 13 for more information.
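As a sketch of that splitting, the C++ example below (all names invented for illustration) shows a test that verifies two unrelated functions, then the same checks as two focused tests:

```cpp
#include <cassert>
#include <string>

// Hypothetical code under test.
std::string trim(const std::string& s) {
    auto b = s.find_first_not_of(' ');
    if (b == std::string::npos) return "";
    auto e = s.find_last_not_of(' ');
    return s.substr(b, e - b + 1);
}

int wordCount(const std::string& s) {
    int count = 0;
    bool inWord = false;
    for (char c : s) {
        if (c != ' ' && !inWord) { ++count; inWord = true; }
        if (c == ' ') inWord = false;
    }
    return count;
}

// Too broad: if this fails, is the bug in trim() or wordCount()?
void testStringUtils() {
    assert(trim("  hi  ") == "hi");
    assert(wordCount("one two") == 2);
}

// Better: one failing test points at one piece of code.
void testTrimRemovesSurroundingSpaces() { assert(trim("  hi  ") == "hi"); }
void testWordCountSplitsOnSpaces()      { assert(wordCount("one two") == 2); }
```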
What unit testing framework do you recommend for .NET development?
I’ve used csUnit and found it intuitive and easy to use. You can find a list of other frameworks here: http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#.NET_programming_languages. Obviously, you’ll want to research them to learn more before choosing one.
Does UnitTest++ support a test execution panel like JUnit?
UnitTest++ runs the tests when the program builds, and success/failure is displayed in the output pane as Build output.
Testing class private methods—yes or no?
Absolutely. Everything should have a unit test.
How would you approach TDD for embedded systems?
I can’t speak to this directly, but here are some resources that may help:
When you have a set of requirements for an application, how do you select the first test to implement?
You select the first test to implement based on the code you want to write first. If you weren’t doing TDD, how would you decide which code to implement first? Then do the test for that code first.
What about testing code that requires communications, like sockets or serial port?
The best strategy is to use mock objects, fakes, or stubs to mimic external dependencies. These are used to isolate the code that you want to test.
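A minimal C++ sketch of that isolation, assuming a hypothetical serial-port interface (the names `ISerialPort`, `FakeSerialPort`, and `sendCommand` are invented for this example):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical interface over the real serial port.
struct ISerialPort {
    virtual ~ISerialPort() = default;
    virtual void write(const std::string& data) = 0;
    virtual std::string read() = 0;
};

// Code under test depends only on the interface, not on real hardware.
std::string sendCommand(ISerialPort& port, const std::string& cmd) {
    port.write(cmd + "\r\n");
    return port.read();
}

// Fake used by the tests: records writes, returns a canned response.
struct FakeSerialPort : ISerialPort {
    std::vector<std::string> written;
    std::string response;
    void write(const std::string& data) override { written.push_back(data); }
    std::string read() override { return response; }
};
```

A test can then install the fake, call `sendCommand`, and assert on both the bytes “sent” and the handling of the canned reply, with no hardware attached.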
How can I test a private method when the test is outside the class?
Either create an object of the class within the test or create a subclass within your tests.
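In C++, the subclass approach requires that the member be `protected` rather than `private` (a `friend` declaration for the test is the usual alternative when it must stay private). A sketch with invented names:

```cpp
#include <cassert>

// Hypothetical class under test. The helper is protected so a
// test subclass can reach it.
class Account {
public:
    void deposit(int cents) { balance_ += applyFee(cents); }
    int balance() const { return balance_; }
protected:
    int applyFee(int cents) const { return cents - 25; }  // internal logic
private:
    int balance_ = 0;
};

// Test subclass that exposes the protected method to the tests.
struct TestableAccount : Account {
    using Account::applyFee;  // promote to public for testing
};
```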
Any guidelines on how to estimate the additional time TDD adds to a task?
It really depends on a couple things:
- Are you in a legacy system or starting from scratch?
- Does the developer have experience with TDD?
If the developer has TDD experience and is working with code that already has unit tests, it should take little additional time. The less experience and the more legacy code, the more additional time needed.
Are you aware of any studies that show how TDD affects maintenance code? I found a study from 2004 but was hoping for a newer one.
You can find a number of studies of TDD here: http://biblio.gdinwiddie.com/biblio/StudiesOfTestDrivenDevelopment
Regarding your example, what if you implemented the SCMWorkingDirList.cpp first and developed unit tests to test each function second. Would there be any difference in terms of code quality?
If you code first and write unit tests second, it’s not TDD. In TDD, the test drives the development.
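The test-first order can be sketched as the familiar red-green-refactor cycle. This is a generic illustration with invented names, not the webinar’s SCMWorkingDirList example:

```cpp
#include <cassert>

// Step 1 (red): the test is written before any implementation exists,
// so at first it does not even compile -- that is the failing state.
// void testLeapYear() { assert(isLeapYear(2000) && !isLeapYear(1900)); }

// Step 2 (green): write just enough code to make the test pass.
bool isLeapYear(int y) {
    return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
}

// Step 3 (refactor): clean up the code with the test as a safety net.
void testLeapYear() { assert(isLeapYear(2000) && !isLeapYear(1900)); }
```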
When do you review code?
If you are following an Agile methodology, there is no defined code review. Another piece of the Agile puzzle is pair programming. This takes the place of a formal code review. TDD does not mean that you can’t do a code review. Because the code is in flux, you should wait until all the necessary changes are complete.
If you are using TDD, can you create unit tests other than those written during development?
TDD doesn’t mean that there can’t be any other unit tests. Doing TDD should mean that you already have a unit test for everything that needs one, but nothing stops you from adding others. TDD is a technique used to write quality code, not a replacement for testing.
What about higher-level integration tests? How does that fit into the process? Is that something developers should do as they program?
How much integration testing is done during TDD is really a judgment call. Comprehensive integration testing is not a part of TDD, but some integration testing may be appropriate. Integration tests should be written and run successfully before you can consider yourself ‘done’ with a particular user story (or requirement for non-Agile folks), but are not necessarily a part of TDD. Remember, TDD is a technique used to write quality code and not a replacement for testing.
How painful is it to switch midstream from a non-TDD development approach to TDD, for mid- to large-size projects?
It depends on the code in place. If there are already unit tests, switching to TDD should be relatively painless. Otherwise, any changes made to existing code will face the same challenges as doing TDD in a legacy system. If you’re going to do TDD with a brand new ‘piece’ that doesn’t touch existing code, any time already spent on a technical design would be wasted, but otherwise switching should be equivalent to having started out doing TDD.
Are memory leakages and unused code also tested during unit testing?
If you are using TDD, you should only write just enough code so that your tests pass, so there should be no unused code. Unit tests are probably not the best place to test for memory leaks. Integration tests or functional tests that exercise a bigger slice of functionality are a more appropriate place for leak testing. These can be done during development, but may also come later.
Is TDD applicable to web applications?
Should a test be written once the design of the system is clear, or once the requirements are clear?
In Agile, there’s no ‘design’ phase; user stories handle customer requirements. The tests should be written when development starts, because the purpose of the tests is to drive the development.