This book is from 2003, so it's a bit dated, but I read it for the concepts. And I found plenty of them.
First, of course, is a handy way to remember the six specific areas to test - RIGHT-BICEP! Before you go trooping to the friendly neighborhood gym to lift barbells, here is the breakdown -
Right . Are the results right?
B . Are all the boundary conditions CORRECT?
I . Can you check inverse relationships? For instance, you might check a method that calculates a square root by squaring the result and checking that it is tolerably close to the original number, or check that some data was successfully inserted into a database by then searching for it, and so on. Of course, you have to guard against the possibility of a common error in both the original routine and its inverse, which would make the results look correct. So, if possible, use a different source for the inverse test (see the sketch after this list).
C . Can you cross-check results using other means?
E . Can you force error conditions to happen?
P . Are performance characteristics within bounds? For example, the time taken to execute a method as data size grows: execute the method with data of different sizes and check that the time taken stays within acceptable limits.
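To make the inverse idea concrete, here is a minimal JUnit sketch (mine, not the book's) that uses Math.sqrt as a stand-in for the method under test. The inverse check is plain multiplication - a different source than the routine being tested, as the I bullet suggests:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SquareRootTest {

    // Inverse check: squaring the result should get us back,
    // within a tolerance, to the number we started with.
    @Test
    public void squareOfSquareRootIsOriginalValue() {
        double original = 42.0;
        double root = Math.sqrt(original); // stand-in for the method under test
        assertEquals(original, root * root, 1.0e-9);
    }
}
```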
Right. Next, we can all vividly recall an incident or two when a developer forgot to test a boundary condition, resulting in much heartburn all around. So here's a handy way to get it right - the acronym CORRECT.
Conformance . Does the value conform to an expected format?
Ordering . Is the set of values ordered or unordered as appropriate?
Range . Is the value within reasonable minimum and maximum values?
Reference . Does the code reference anything external that isn't under direct control of the code itself?
Existence . Does the value exist (e.g., is non-null, nonzero, present in a set, etc.)?
Cardinality . Are there exactly enough values? Apply the 0-1-n rule: test for zero, one, and many (see the sketch after this list).
Time (absolute and relative) . Is everything happening in order? At the right time? In time? Are there any concurrency issues? Is the calling sequence of methods correct? What about timeouts?
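Here is a small JUnit sketch of a few of these boundary checks, using a hypothetical largest method as the code under test (the class and method names are mine, not the book's):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class BoundaryTest {

    // Hypothetical method under test: returns the largest value in an array.
    private int largest(int[] values) {
        if (values.length == 0) {
            throw new IllegalArgumentException("empty input");
        }
        int max = values[0];
        for (int v : values) {
            if (v > max) max = v;
        }
        return max;
    }

    @Test
    public void handlesSingleElement() {          // Cardinality: one
        assertEquals(7, largest(new int[] {7}));
    }

    @Test
    public void handlesDuplicatesAndOrdering() {  // Ordering
        assertEquals(9, largest(new int[] {9, 2, 9, 4}));
    }

    @Test
    public void handlesExtremeRange() {           // Range
        assertEquals(Integer.MAX_VALUE,
                largest(new int[] {Integer.MIN_VALUE, Integer.MAX_VALUE}));
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsEmptyInput() {             // Cardinality: zero / Existence
        largest(new int[0]);
    }
}
```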
Another important thing to keep in mind is when to use mock objects. The book mentions this list from Tim Mackinnon:
- The real object has nondeterministic behavior (it produces unpredictable results, as in a stock-market quote feed).
- The real object is difficult to set up.
- The real object has behavior that is hard to trigger (for example, a network error).
- The real object is slow.
- The real object has (or is) a user interface.
- The test needs to ask the real object about how it was used (for example, a test might need to check to see that a callback function was actually called).
- The real object does not yet exist (a common problem when interfacing with other teams or new hardware systems).
The three key steps to using mock objects for testing are:
1. Use an interface to describe the object
2. Implement the interface for production code
3. Implement the interface in a mock object for testing
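A minimal Java sketch of those three steps, with hypothetical Environment, SensorEnvironment, MockEnvironment, and Thermostat names standing in for your own types:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Step 1: describe the dependency with an interface.
interface Environment {
    int getTemperature();
}

// Step 2: implement the interface for production code.
class SensorEnvironment implements Environment {
    public int getTemperature() {
        return readHardwareSensor(); // real, slow, hard to control in a test
    }
    private int readHardwareSensor() { return 0; /* placeholder body */ }
}

// Step 3: implement the interface in a mock object for testing.
class MockEnvironment implements Environment {
    private final int temperature;
    MockEnvironment(int temperature) { this.temperature = temperature; }
    public int getTemperature() { return temperature; }
}

// The code under test depends only on the interface.
class Thermostat {
    private final Environment env;
    Thermostat(Environment env) { this.env = env; }
    boolean shouldHeat() { return env.getTemperature() < 18; }
}

public class ThermostatTest {
    @Test
    public void heatsWhenCold() {
        assertTrue(new Thermostat(new MockEnvironment(5)).shouldHeat());
    }

    @Test
    public void idlesWhenWarm() {
        assertFalse(new Thermostat(new MockEnvironment(25)).shouldHeat());
    }
}
```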
Then there are the properties of good tests, i.e. A-TRIP: Automatic, Thorough, Repeatable, Independent, and Professional (encapsulation, the DRY principle, lowering coupling, etc.).
For those wondering where to keep the test code, the author provides a few suggestions:
1. The first and easiest method of structuring test code is to include it right in the same directory alongside the production code. This allows classes to access each other's protected members for testing purposes, but it clutters the production code directory, and special care may be needed when preparing releases.
2. The next option is to create test subdirectories under every production directory. This gets the test code out of the way but takes away access to protected members, so you will have to make each test class a subclass of the class it wants to test.
3. Another option is to place your test classes into the same package as your production code, but in a different source code tree. The trick is to ensure that the root of both trees is in the compiler's CLASSPATH. Here the test code is away from the production code and yet has access to its protected members for testing (see the sketch below).
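A sketch of that same-package, separate-tree layout, using a hypothetical Account class (compile with both the src and test roots, plus JUnit, on the classpath):

```java
// src/com/example/Account.java  (production tree)
package com.example;

public class Account {
    protected long balanceInCents; // protected: visible within the same package

    public void deposit(long cents) {
        balanceInCents += cents;
    }
}
```

```java
// test/com/example/AccountTest.java  (test tree, same package)
package com.example;

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AccountTest {
    @Test
    public void depositIncreasesBalance() {
        Account account = new Account();
        account.deposit(500);
        // Same package, so the test can read the protected field directly.
        assertEquals(500, account.balanceInCents);
    }
}
```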
The following advice should be kept in mind as you go about testing:
1. When writing tests, make sure that you are only testing one thing at a time. That doesn't mean that you use only one assert in a test, but that one test method should concentrate on a single production method, or a small set of production methods that, together, provide some feature.
2. Sometimes an entire test method might only test one small aspect of a complex production method. You may need multiple test methods to exercise the one production method fully.
3. Ideally, you'd like to have a traceable correspondence between potential bugs and test code. In other words, when a test fails, it should be obvious where in the code the underlying bug exists. Factor common initialization into the per-test setup and teardown methods and the per-class setup and teardown methods, so that each test body contains only what it actually tests (see the sketch after this list).
4. When you find bugs that weren't caught by the tests, write tests to catch them in the future. This can be done in four steps: identify the bug; write a test that fails, to prove the bug exists; fix the code so that the test now passes; verify that all tests still pass.
5. Introduce bugs and make sure that the tests catch them.
6. Most of the time, you should be able to test a class by exercising its public methods. If there is significant functionality that is hidden behind private or protected access, that might be a warning sign that there's another class in there struggling to get out. When push comes to shove, however, it's probably better to break encapsulation with working, tested code than it is to have good encapsulation of untested, non-working code.
7. Make the test code an integral part of the code review process. So follow this order:
- Write test cases and/or test code.
- Review test cases and/or test code.
- Revise test cases and/or test code per review.
- Write production code that passes the tests.
- Review production and test code.
- Revise test and production code per review.
8. While coding, if you can't answer the simple question "how am I going to test this?", take it as a signal that you need to review your design.
9. Establish up-front the parts of the system that need to perform validation, and localize those to a small and well-known part of the system.
10. Check input at the boundaries of the system, and you won't have to duplicate those tests inside the system. Internal components can trust that if the data has made it this far into the system, then it must be okay.
11. Cull out individual tests that take longer than average to run, and group them together somewhere. You can run these optional, longer-running tests once a day with the build, or when you check in, but not have to run them every single time you change code.
12. Unit tests should be automatic on two fronts: they should be run automatically, and their results should be checked automatically. The goal remains that every test should be able to run over and over again, in any order, and produce the same results. This means that tests cannot rely on anything in the external environment that isn't under your direct control; use mock objects for such dependencies. Tests should be independent of the environment and of each other.
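Here is a small JUnit 4 sketch tying a couple of these points together: the per-test setup and teardown methods from item 3 (@Before/@After; JUnit 4 also offers @BeforeClass/@AfterClass for per-class setup), and the independence from item 12, since each test gets a fresh fixture and can run in any order. All names are hypothetical:

```java
import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class RegisterTest {
    private List<String> register; // hypothetical fixture under test

    @Before
    public void setUp() {          // per-test setup: a fresh fixture every time
        register = new ArrayList<>();
    }

    @After
    public void tearDown() {       // per-test teardown: release the fixture
        register = null;
    }

    @Test
    public void startsEmpty() {
        assertEquals(0, register.size());
    }

    @Test
    public void addGrowsRegister() { // independent: does not rely on startsEmpty
        register.add("entry");
        assertEquals(1, register.size());
    }
}
```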
Finally, the following list can be handy for checking potential gotchas involving checked-in code:
- Incomplete code (e.g., checking in only one class file but forgetting to check in other files it may depend upon).
- Code that doesn't compile.
- Code that compiles, but breaks existing code such that existing code no longer compiles.
- Code without corresponding unit tests.
- Code with failing unit tests.
- Code that passes its own tests, but causes other tests elsewhere in the system to fail.