If you’re managing a testing project, you’ll usually need to be able to answer the following questions:
- How well do your test cases cover all the things you need to test, including testing with different environment variables? When I refer to ‘environment variables’ I basically mean running the same test case in different environments (eg different hosting configurations, different browsers, OSs, RDBMSs, etc).
- What is the overall state of progress with test execution? Ie, how much testing has been completed and how far is there to go? Also, how far have we progressed with testing the relevant features (or however you’ve decided to logically break down the testing)? And how much testing have we done for the different environment variables we need to test with?
A couple of years back I developed a tool in Excel that can be used for the above. It has proven extremely useful on several test projects I’ve been involved in, and I think many people out there would find it useful too – particularly if you don’t currently have a tool for these purposes, or find the tool you do have inadequate for whatever reason. I think the tool has the following advantages:
- It requires minimal administrative overhead so testers can maximise the time they spend testing, rather than spending a lot of time entering data for test reporting purposes. I think testers are often forced to do this, which of course reduces the amount of time they can spend actually testing – this is something that should always be avoided. Test tools should, as far as possible, make a tester’s job easier, not harder.
- It not only provides very useful reporting (as described above) but also acts as a checklist enabling testers to easily see which test cases they’ve run and which they haven’t.
- You can use this tool regardless of where you keep your test cases – be it in the Windows file system, or some test tool, etc.
- It strikes a good balance between flexibility and adequate structure and organisation.
So I’ll describe how it works at a fairly high level. (Another great advantage of this tool is that, because it’s done in Excel, you can very easily customise it any way you want – eg, introduce additional environment variables or calculate different statistics on the data gathered.) First, there’s the concept of Test Case Runs (TCRs). A TCR is a test case that is run with a unique combination of environment variables, and these can be whatever is relevant to you – typically different browsers or OSs. So a single test case can have multiple TCRs.
So the spreadsheet contains a worksheet with columns as follows:
- One column contains test case descriptions (which should give at least a basic idea of what each test case covers).
- Another column points to the location where the detailed test case is stored (this might be a URL, or simply the name of the repository where the test case is kept, etc).
- A TCR column contains a unique ID number for each unique combination of environment variables you want to run the test case for. Eg:
| Test case description | Test case location | Test Case Run | Browser |
|---|---|---|---|
| Attempt login with invalid password | … | 1 | IE |
| Attempt login with invalid password | … | 2 | Chrome |
For each of your environment variables, you keep a list of values in a separate worksheet. You can then easily create listboxes in the columns for your different environment variables, containing the relevant values. Eg, in the cells of the Browser column above, you could have a listbox from which the user can select IE, Chrome or Firefox.
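To make that concrete, here’s a minimal setup sketch for recent desktop versions of Excel (the worksheet and range names are just illustrative assumptions, not part of the tool itself):

```
1. On a worksheet named Lists, enter the values:  A2: IE   A3: Chrome   A4: Firefox
2. Select A2:A4 and name the range Browsers (Formulas > Define Name).
3. Select the cells of the Browser column on the TCR worksheet.
4. Choose Data > Data Validation > Allow: List, and set Source to:  =Browsers
```

Using a named range keeps the validation source in one place, so adding a browser later only means updating the list and the range definition, not every validated cell.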
There are some big advantages to the above:
- Additional TCRs can easily be added just by inserting additional rows, and specifying a new combination of environment variables for those TCRs.
- As soon as new TCRs are added, the relevant statistics (calculated in a separate worksheet and based on the TCR data) are immediately updated. Eg, you may want to report on the percentage of TCRs that specify each of the browsers you’ve defined; if you decide to add a lot of TCRs for the Opera browser, then as soon as these are added, this is reflected in your reporting of things like the following (these are just examples and exactly what you want to report on may differ, but the main thing is that the TCR structure described should enable you to derive whatever test execution or coverage statistics you require; a formula sketch follows this list):
  - Percentages of TCRs for different browsers (this gives an indication of the spread of testing across browsers, and may alert you to, eg, a lack of coverage for a particular browser).
  - Overall test execution progress (in the Opera example, this figure will drop as soon as the new TCRs are added, since you’ve increased the total test effort but not yet executed any of it).
  - Execution progress against the different environment variables (in the Opera example, when the TCRs are first added, progress for that browser will show as 0%).
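To give a flavour of the formulas involved (the worksheet and range names here are illustrative assumptions – adjust them to your own layout), suppose the TCR worksheet is named TCRs, with the TCR IDs in column C and the Browser column in column D. The share of TCRs targeting Opera would then be something like:

```
=COUNTIF(TCRs!$D$2:$D$500, "Opera") / COUNTA(TCRs!$C$2:$C$500)
```

Format the cell as a percentage. Because the formula reads the TCR rows directly, inserting new Opera rows updates the figure immediately.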
There is also a Status column for each TCR, each cell of which has a listbox containing the values Pass, Fail or Blocked (this last is for TCRs that can’t be executed due to some issue). I’ve sometimes added a Deferred status as well (to indicate that the TCR has been deemed not to require execution before the next release of the application). If the Status is blank, that simply indicates that the TCR hasn’t been executed yet. Combining the environment variables with the Status values enables reporting on things like the percentage of TCRs that have failed for a particular browser. Obviously, if you have large numbers of Blocked TCRs, that’s also valuable information.
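Continuing the illustrative layout above, and assuming the Status column sits in column G of the TCRs worksheet, these per-browser statistics are simple COUNTIF/COUNTIFS ratios (the lines starting with ' are just annotations, not part of the formulas):

```
' Execution progress for Opera (Pass + Fail as a share of all Opera TCRs):
=(COUNTIFS(TCRs!$D$2:$D$500,"Opera",TCRs!$G$2:$G$500,"Pass")+COUNTIFS(TCRs!$D$2:$D$500,"Opera",TCRs!$G$2:$G$500,"Fail"))/COUNTIF(TCRs!$D$2:$D$500,"Opera")

' Percentage of Opera TCRs that have failed:
=COUNTIFS(TCRs!$D$2:$D$500,"Opera",TCRs!$G$2:$G$500,"Fail")/COUNTIF(TCRs!$D$2:$D$500,"Opera")

' Number of Blocked TCRs across the whole project:
=COUNTIF(TCRs!$G$2:$G$500,"Blocked")
```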
There is also a Tester column, containing listboxes with testers’ names or initials. So you can report on which TCRs have been executed by which testers, how many TCRs have been executed by each tester, etc. This can also be used for task allocation (eg, you can specify which testers are required to execute which TCRs just by putting their names against those TCRs in the Tester column).
You can also have a Feature column, whose cells contain a listbox of the relevant features. This then enables you to:
- Sort TCRs by feature
- Report on numbers of TCRs per feature, percentages of TCRs passed/failed per feature, etc.
Some other columns that can or should be included in the TCRs worksheet:
- Date run (ie, when the TCR was run; this enables statistics such as the total number of days spent on a particular test project – see the sketch after this list)
- Bug or issue ID (this can be entered for failed TCRs, so you’ve got a trace to the relevant bug report for the failed TCR)
- Build number (ie the build in which the TCR was run)
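As an example of what the Date run column enables, here’s a sketch of a distinct-count formula for the number of days on which testing actually took place, assuming the dates sit in column H of the TCRs worksheet (again, an illustrative layout):

```
=SUMPRODUCT((TCRs!$H$2:$H$500<>"") / COUNTIF(TCRs!$H$2:$H$500, TCRs!$H$2:$H$500 & ""))
```

Blank cells (TCRs not yet run) contribute zero, so only days on which at least one TCR was executed are counted.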
What this tool has enabled me to get (and I’ve not seen another tool that can do this, at least not easily):
- A single percentage figure that indicates test execution progress for an entire test project (ie, it simply calculates a percentage based on the total number of TCRs and the total number of TCRs passed and failed – see the sketch after this list).
- Execution progress per feature.
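A sketch of that single overall figure, using the same illustrative layout as earlier (TCR IDs in column C, Status in column G of a worksheet named TCRs):

```
=(COUNTIF(TCRs!$G$2:$G$500,"Pass") + COUNTIF(TCRs!$G$2:$G$500,"Fail")) / COUNTA(TCRs!$C$2:$C$500)
```

Per-feature progress is the same ratio with an extra COUNTIFS condition on the Feature column, exactly as in the per-browser formulas earlier.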
A tip: if you turn on sharing, multiple users can update the spreadsheet at once. And as it’s in Excel, you can easily copy the file, or copy and paste data from it into another spreadsheet – both of which can be handy for all sorts of reasons.
Some might argue that the tool’s a bit crude; I’d argue it does as well as anything in giving you a reasonable indication of execution progress and coverage.
I fully appreciate the argument made by many testers that counting test cases has little value. However, if, for example, you have a complicated new feature and only two TCRs defined for it, that’s a clear indication that you probably need to add more test cases. Also, if your test cases for a particular project are generally similar in structure, the number of TCRs per feature may be a reasonable indication of the level of test effort required for each feature.