Do we know what testing is?

Lately I’ve been watching a bit of James Bach on YouTube. As I think I’ve said before, I’m a big fan of James and the way he thinks and talks about testing. If I had my way, every tester would be required to acquire at least some understanding of James’ philosophy on testing.

But to get to the point: after watching a talk James gave at the 2014 CAST, I began to think that most organisations, in my experience, just don’t understand what testing actually is. Most people think that test cases are written from requirements documents, that those test cases are then ‘executed’, and that once all the test cases have been ‘executed’, testing is done. This is a shallow and simplistic view of testing – if your organisation ‘tests’ in this way, you will no doubt have huge room to improve the effectiveness and efficiency of your testing.

Some of the things James said (I’m paraphrasing) that particularly resonated with me:

• We need to place far less emphasis on test cases, and think more in terms of ‘test ideas’.
• We should talk about ‘performing’ testing rather than ‘executing’ testing – this may seem like hair-splitting, but it’s an important distinction to make if you really want to understand what skilled testers actually do.
• ‘Test automation’ can be a very misleading term, and if you have a say in running development projects, and think you can automate what human testers do, then you’re a danger to project success. Yes, you can automate binary (ie pass/fail) checks for a specific result, but you cannot automate the intellectual process of a skilled tester.

Here’s the video I’m talking about – if you’re a tester who’s really interested in your craft, I think this is absolute gold: https://www.youtube.com/watch?v=JLVP_Z5AoyM


The folly of ‘test planning’

It seems organisations have a rule that whenever there is a software project, a testing person must produce a document called a ‘test plan’. Very often, the person asked to write this ‘test plan’ doesn’t have much idea of what should be in it, so they take some sort of template ‘test plan’ document and fill it in. If the template has headings for ‘risks’ or ‘suspension and resumption criteria’, these get filled out for the sake of filling out the document, rather than for any useful purpose.

James Bach is a man who talks a great deal of sense about testing, and I very much like his definition of a ‘test plan’: the set of ideas which guide the testing. Because this is actually what happens, regardless of what might be written in a ‘test plan’ document. Typically, ‘test plans’ are written because managers think you should have a document called a ‘test plan’. Very often, those managers don’t know anything about testing, but they like a tester to write one because it gives them comfort that the tester is prepared for the testing to come.

In fact, writing ‘test plans’ for the sake of writing them is often worse than useless, because it takes time away from what the tester should be doing: learning absolutely everything they can about the software to be developed, from technical and business perspectives. They may make notes as they do this, and may locate other documentation to help them, but the main thing they need to do is understand in depth what it is they’re going to be testing. If you’ve written a ‘test plan’ but don’t understand what you’ll be testing from business and technical perspectives, you’ve wasted your time.

What you often find is that a ‘test plan’ is written at a relatively early stage in the project, and very quickly becomes out of date because:

  • requirements are very fluid, and
  • the person writing it can have no real idea about where the risks are, or where the most bugs are going to be found.

This is where Agile makes sense: it recognises that detailed upfront planning and documentation is typically wasted effort, because requirements change and many risks, obstacles and opportunities only become apparent once the software is being developed. There needs to be much more recognition that Agile principles should be extended to the testing process – ie, minimise documentation and maximise face-to-face interaction between testers, developers and business analysts, so that testers can develop a good understanding of what they’re going to be testing. That is, documentation is useful, but it must come second to working software.

A document that is much more useful than a project-specific ‘test plan’ is what I might call a ‘testing guide’. That is, a document that provides general guidance on how to approach testing, and testing methods. Such a document can be used on all testing projects, so you need not repeat a lot of things in ‘test plans’ for each project. For example, it can contain general rules for testing such as:

  • Editable fields should have reasonable limits on the length of values that can be entered in them
  • Controls should behave in ways that users generally expect them to behave, and should only behave differently if there are good reasons for doing so (this makes applications more usable, as users don’t have to relearn how to do the same things in different ways)
  • If a requirement is not specified, it is deemed not required from a testing perspective – with the exception, of course, of implied requirements. An obvious example: the specification does not say that when button X is clicked the application must not crash. That omission doesn’t mean the application is allowed to crash; that it must not crash is an implied requirement.
  • Just because something differs from what has been specified, it is not necessarily an issue. Always ask: is the behaviour in question a problem? If not, it’s not an issue, and doesn’t need to be logged. On the other hand, if the behaviour is a problem, it is an issue and should be logged, regardless of what might be specified.

The above kinds of general rules would be useful on just about any testing project, but perhaps even more useful is to document guidelines that are specific to the kind of projects your organisation does. As I have done quite a bit of website testing, I thought I’d document some general guidelines for that as an example:

  1. Identify each type of page (or each ‘template’ page) that your website will have
  2. Identify the main elements of these pages, and consider which are the most complicated. Eg, are there elements that rely on data feeds to display certain information on the page? Clearly, elements like that should get more testing focus than just static headings and text.
  3. You can have a checklist of things to go through on each type of page. Eg:
    1. Check that all links are working correctly on each page type
    2. Check that all buttons work correctly on all page types
    3. Check for spelling and grammatical errors on all page types (though this would generally apply only to static text, not text that is editable via a Content Management System – on the basis that issues with ‘CMS-managed’ text are generally not bugs, because that text is ‘configurable’ rather than hard-coded. However, if there’s a risk that textual issues may end up ‘live’, they should be reported as bugs regardless).

When you’ve identified each page type on the site, the main elements on each of those pages, and gotten a handle on the level of complexity of those elements, you then have a framework and structure upon which to base your testing. It also helps you prioritise your test preparation: ie, which page types are the most important? Which elements on pages are most complicated and will likely need the most test effort?
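To make that framework a little more concrete, here is a minimal sketch in Python of how the page types, their main elements and one of the checklist items above (broken links) could be captured and checked. The page types, URLs, element names and complexity ratings are hypothetical, and it assumes the requests and beautifulsoup4 packages are available – it’s an illustration of the structure, not a definitive implementation.

```python
# Sketch: page types as plain data, plus an automated pass over one checklist item.
import requests
from bs4 import BeautifulSoup

# Hypothetical page types for the site, with their main elements and a rough
# complexity rating to help prioritise test effort.
PAGE_TYPES = {
    "home": {
        "url": "https://example.com/",
        "elements": ["hero banner", "news feed"],
        "complexity": "high",
    },
    "product detail": {
        "url": "https://example.com/product/1",
        "elements": ["price data feed", "image gallery"],
        "complexity": "high",
    },
    "static info": {
        "url": "https://example.com/about",
        "elements": ["headings", "body text"],
        "complexity": "low",
    },
}

def broken_links(page_url):
    """Return absolute hrefs on the page that respond with a 4xx/5xx status."""
    soup = BeautifulSoup(requests.get(page_url, timeout=10).text, "html.parser")
    bad = []
    for anchor in soup.find_all("a", href=True):
        href = anchor["href"]
        if href.startswith("http"):  # only follow absolute links in this sketch
            if requests.head(href, timeout=10, allow_redirects=True).status_code >= 400:
                bad.append(href)
    return bad

for name, page in PAGE_TYPES.items():
    print(name, "- complexity:", page["complexity"], "- broken links:", broken_links(page["url"]))
```

The point of holding this as plain data is that the same structure can drive both prioritisation (via the complexity rating) and the per-page-type checklist.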

It’s also worth bearing in mind that:

  • You might identify a need to test the same page type with varying content. For example, you might have a dynamic ‘progress indicator’ on a particular page type that changes based on the current system date and some date set via the CMS. In that case, you’ll need to test the same page type with varying content (in this case, progress indicator elements set up with different dates) – see the sketch after this list.
  • There may be elements that are common across different page types, so most probably, you don’t need to repeat all testing for those elements on different page types.
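For the ‘varying content’ case in the first bullet above, a parameterised test is often the natural shape. Below is a minimal sketch using pytest; the render_progress_indicator() helper and the expected states are hypothetical stand-ins for however your application actually exposes this behaviour.

```python
# Sketch: the same page type exercised with varying CMS/system date combinations.
from datetime import date
import pytest

def render_progress_indicator(cms_date, today):
    # Hypothetical application logic: the indicator's state depends on how the
    # CMS-configured date compares with the current system date.
    if today < cms_date:
        return "upcoming"
    if today == cms_date:
        return "in progress"
    return "complete"

@pytest.mark.parametrize(
    "cms_date, today, expected",
    [
        (date(2015, 6, 1), date(2015, 5, 20), "upcoming"),
        (date(2015, 6, 1), date(2015, 6, 1), "in progress"),
        (date(2015, 6, 1), date(2015, 6, 10), "complete"),
    ],
)
def test_progress_indicator_varies_with_dates(cms_date, today, expected):
    assert render_progress_indicator(cms_date, today) == expected
```

Each parameter row is effectively the same test idea run against a different combination of CMS date and system date.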

A tool for measuring coverage and execution progress

If you’re managing a testing project, you’ll usually need to be able to answer the following questions:

  • How well do your test cases cover all the things you need to test, including testing with different environment variables? When I refer to ‘environment variables’ I basically mean running the same test case in different environments (eg different hosting configurations, different browsers, operating systems, RDBMSs, etc).
  • What is the overall state of progress with test execution? Ie, how much testing has been completed and how far is there to go? Also, how far progressed are we with testing the relevant features (or whatever way you’ve decided to logically break down the testing)? How much testing have we done for the different environment variables we need to test with?

A couple of years back I developed a tool in Excel that can be used for the above. It has proven extremely useful on different test projects I’ve been involved in, and I think many people out there would find it useful too – particularly if you don’t currently have a tool for these purposes, or find that the tool you do have is inadequate for whatever reason. I think the tool has the following advantages:

  • It requires minimal administrative overhead so testers can maximise the time they spend testing, rather than spending a lot of time entering data for test reporting purposes. I think testers are often forced to do this, which of course reduces the amount of time they can spend actually testing – this is something that should always be avoided. Test tools should, as far as possible, make a tester’s job easier, not harder.
  • It not only provides very useful reporting (as described above) but also acts as a checklist enabling testers to easily see which test cases they’ve run and which they haven’t.
  • You can use this tool regardless of where you keep your test cases – be it in the Windows file system, or some test tool, etc.
  • It provides a good balance between flexibility and adequate structure and organisation.

So I’ll describe how it works at a fairly high level (another great advantage of this tool is that it’s done in Excel, so you can very easily customise it any way you want – eg, you can easily introduce additional environment variables or calculate different statistics on the data gathered). First, there’s the concept of a Test Case Run (TCR). A TCR is a test case that is run with a unique combination of environment variables, and these can be whatever is relevant to you – typically, different browsers or OSs. So a single test case can have multiple TCRs.
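To illustrate the idea (outside Excel for a moment), here’s a tiny Python sketch of what a TCR amounts to: a test case crossed with a combination of environment variables. The test cases and environment values are made up, and in practice you’d list only the combinations worth running rather than the full cross product.

```python
# Sketch: expanding test cases into Test Case Runs (TCRs).
from itertools import product

test_cases = ["Attempt login with invalid password", "Reset password via email link"]
browsers = ["IE", "Chrome", "Firefox"]
operating_systems = ["Windows 7", "Windows 8"]

tcrs = []
tcr_id = 0
# Full cross product here purely for illustration; a real worksheet would only
# contain the combinations you actually decide to run.
for case, browser, os_name in product(test_cases, browsers, operating_systems):
    tcr_id += 1
    tcrs.append({"TCR": tcr_id, "Test case": case, "Browser": browser, "OS": os_name, "Status": ""})

print(len(tcrs), "TCRs")  # 2 test cases x 3 browsers x 2 OSs = 12 TCRs
```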

So the spreadsheet contains a worksheet with columns as follows:

  • One column contains test case descriptions (which should give at least a basic idea of what each test case covers).
  • Another column points to the location where the detailed test case is stored (this might be a URL, or simply the name of the repository where the test case is kept, etc).
  • A TCR column contains a unique ID number for each unique combination of environment variables you want to run the test case for. Eg:
Test case description                  Test case location         Test Case Run   Browser   OS
Attempt login with invalid password    http://TestCaseLocation1   1               IE        Windows 7
                                                                  2               Chrome    Windows 8
                                                                  3               Firefox   Windows XP

For each of your environment variables, you have a list of values that is kept in a separate worksheet in the spreadsheet. You can then easily create listboxes in the columns for your different environment variables that contain the relevant values. Eg, in the cells in the Browser column above, you could have a listbox from which the user can select the values IE, Chrome or Firefox.
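As an aside, those ‘listboxes’ are just Excel data-validation dropdowns, so the same structure can also be generated programmatically. Here’s a minimal sketch using Python and openpyxl; the column letters, row range, worksheet name and list values are assumptions based on the layout described above, not a prescription.

```python
# Sketch: a TCR worksheet with dropdown lists for the environment-variable columns.
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

wb = Workbook()
ws = wb.active
ws.title = "TCRs"
ws.append(["Test case description", "Test case location", "Test Case Run", "Browser", "OS"])

# Dropdowns (data validation lists) for the Browser and OS columns.
browser_list = DataValidation(type="list", formula1='"IE,Chrome,Firefox"', allow_blank=True)
os_list = DataValidation(type="list", formula1='"Windows 7,Windows 8,Windows XP"', allow_blank=True)
ws.add_data_validation(browser_list)
ws.add_data_validation(os_list)
browser_list.add("D2:D500")  # Browser column
os_list.add("E2:E500")       # OS column

wb.save("tcr_tracker.xlsx")
```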

There are some big advantages to the above:

  • Additional TCRs can easily be added just by inserting additional rows, and specifying a new combination of environment variables for those TCRs.
  • As soon as new TCRs are added, the relevant statistics (calculated in a separate worksheet and based on the TCR data) are immediately updated. For example, you may want to report on the percentage of TCRs that specify each of the browsers you’ve defined; if you decide to add a lot of TCRs for the Opera browser, then as soon as these are added, this is reflected in your reporting. The following are just examples – exactly what you want to report on may differ – but the main point is that the TCR structure described should enable you to derive whatever test execution or coverage statistics you require:

    • Percentages of TCRs for different browsers (this gives an indication of the spread of testing across different browsers, and may alert you, eg, to a lack of coverage for a particular browser)
    • Overall test execution progress (in the above example, this will be reduced to reflect the fact that you’ve added test effort to test against Opera)
    • Execution progress against different environment variables (in the above example, when TCRs are first added for Opera, progress for this browser will show as 0%)

There is also a Status column for each TCR, each cell of which will have a listbox containing the values Pass, Fail or Blocked (this last is for TCRs that can’t be executed due to some issue). I’ve sometimes added a Deferred status as well (to indicate that the TCR has been deemed not to require execution before the next release of the application). If the Status is blank, then that just indicates that the TCR hasn’t been executed. So the combination of each environment variable and the Statuses enables reporting on things like the percentage of TCRs that have failed for a particular browser. Obviously, if you have large numbers of TCRs that are Blocked, this is also valuable information.

There is also a Tester column, containing listboxes with testers’ names or initials. So you can report on which TCRs have been executed by which testers, how many TCRs have been executed by each tester, etc. This can also be used for task allocation (eg, you can specify which testers are required to execute which TCRs simply by putting their name against those TCRs in the Tester column).

You can also have a Feature column, the cells of which contain a listbox of the relevant features. This then enables you to:

  • Sort TCRs by feature
  • Report on numbers of TCRs per feature[1], percentages of TCRs passed/failed per feature, etc (a sketch of this kind of reporting follows below).
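Here’s a rough sketch of how the kinds of statistics described above could be pulled out of the TCR worksheet using Python and pandas. It assumes the column names used in this post (‘Status’, ‘Browser’, ‘Feature’), a worksheet named ‘TCRs’, a hypothetical file name, and a blank Status meaning ‘not yet executed’ – in the Excel tool itself the equivalent figures are simply worksheet formulas.

```python
# Sketch: deriving execution-progress and coverage statistics from the TCR worksheet.
import pandas as pd

tcrs = pd.read_excel("tcr_tracker.xlsx", sheet_name="TCRs")

# Overall execution progress: TCRs passed or failed as a percentage of all TCRs.
executed = tcrs["Status"].isin(["Pass", "Fail"]).sum()
print("Overall progress: {:.0f}%".format(100 * executed / len(tcrs)))

# Spread of TCRs across browsers (may highlight a lack of coverage for one browser).
print(tcrs["Browser"].value_counts(normalize=True) * 100)

# Execution progress per browser (newly added Opera TCRs would show as 0% here).
by_browser = tcrs.groupby("Browser")["Status"].apply(lambda s: 100 * s.isin(["Pass", "Fail"]).mean())
print(by_browser)

# Status breakdown per feature, as percentages of that feature's executed TCRs.
by_feature = tcrs.groupby("Feature")["Status"].value_counts(normalize=True) * 100
print(by_feature)
```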

Some other columns that can or should be included in the TCRs worksheet:

  • Date run (ie, when the TCR was run; this can enable calculation of statistics that give an indication of total number of days spent on a particular test project, etc)
  • Bug or issue ID (this can be entered for failed TCRs, so you’ve got a trace to the relevant bug report for the failed TCR)
  • Build number (ie the build in which the TCR was run)

What this tool has enabled me to get (and I’ve not seen another tool that can do this, at least not easily):

  • A single percentage figure that indicates test execution progress for an entire test project (ie, the number of TCRs passed or failed as a percentage of the total number of TCRs).
  • Execution progress per feature.

A tip: if you turn on sharing, multiple users can update the spreadsheet at once. And as it’s in Excel, you can easily copy and paste the file, or copy and paste data from the spreadsheet into another spreadsheet, both of which could be useful for many reasons.

Some might argue that the tool’s a bit crude; I’d argue it does as well as anything in giving you a reasonable indication of execution progress and coverage.

[1] I fully appreciate the argument made by many testers that counting test cases has little value. However, if, for example, you have a complicated new feature and only two TCRs defined for it, that’s a clear indication that you probably need to add more test cases. Also, if your test cases for a particular project are generally similar in structure, the number of TCRs per feature may be a reasonable indication of the level of test effort required for each feature.

Automation’s great but…

People without experience of software development will think that software test automation is, unquestionably, a great idea. Computerise your manual testing and testing will be so much faster! Why have humans do what a computer can? Here are a few issues with test automation:

  • A programmed test can only ever follow exactly the same sequence: it has no intelligence and cannot detect anything it has not been told to detect. Typically, for example, there’s more than one way a particular function can be used: a human can quickly and easily try these different ways; an automated test can’t. Additional automated tests can be added to cover more pathways, but that costs a lot more in time and money than the few minutes (or even seconds) it would take for a human to test the additional pathways.
  • An automated test expects one completely specific result, and if it does not get that result, it fails. So if the actual result differs from the result an automated test expects, that is not necessarily an indication of a bug.
  • Automated tests are typically fragile: again, they have no intelligence or common sense. What if, for example, when you do something in your web application, it displays a message saying that the browser pop-up blocker needs to be turned off? What if an automated test has been programmed in the expectation that the pop-up blocker has already been turned off? The test gets stuck. There’s actually no problem with the application under test (AUT); the automated test is just dumb. You can build ‘smarts’ into the test that allow for cases like this, but then the test becomes more complicated, which also tends to make it less robust.
  • The function being automated must work at the time the automated test is developed. How can you possibly program an expected result if the AUT is not currently producing the correct result? By and large, this precludes the use of automation for a major part of testing: ie, testing of new features that, to a greater or lesser extent, are a work in progress.
  • Investment in test automation is generally high: the tool itself may be costly, and test automation developers are also relatively costly.
  • Generally, something that worked the last time it was used has a much greater chance of working the next time than something being used for the first time. A good analogy I read somewhere is that of sweeping a minefield: if you’re looking for mines, you don’t go over ground you’ve already covered. An automated test goes over precisely the same ground each and every time it is run.
  • An actual test is one that targets a potential point of failure. If an automated test has run 100 times and never revealed a bug, do you really think it’s likely it’s going to reveal a bug the 101st time?
  • A human being must examine the results of automated tests and determine if failed tests were due to a problem with the test itself, or an actual bug in the AUT. If tests didn’t execute, you need to find out why, and potentially fix the tests. This can all be very time-consuming.
  • An automated test is only as good as the human who developed it. An automated test may be shallow, and a ‘pass’ may be of little value. For example, say an automated test selects some text and clicks the ‘bold’ button. So long as the text can be selected and the bold button can be clicked, the test passes – but what if the text was not actually bolded? This test misses that bug (the toy sketch after this list illustrates the difference between such a shallow check and one that verifies the outcome). Of course, an automated test may also simply expect a result that is wrong, in which case, if it gets the result it expects, it produces a false positive.
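To make that last point concrete, here’s a deliberately toy Python example – the FakeEditor class stands in for a real application and contains a planted bug; nothing here is a real automation API.

```python
# Toy illustration of a shallow automated check versus one that verifies the outcome.
class FakeEditor:
    def __init__(self, text):
        self.text = text
        self.selection = None
        self.bold_applied = False

    def select_all(self):
        self.selection = self.text

    def click_bold(self):
        # Planted bug: the click is accepted but the formatting is never applied.
        pass

def shallow_check(editor):
    """Passes as long as the steps can be performed -- misses the bug."""
    editor.select_all()
    editor.click_bold()
    return True

def deeper_check(editor):
    """Also verifies the outcome the user actually cares about."""
    editor.select_all()
    editor.click_bold()
    return editor.bold_applied

editor = FakeEditor("some text")
print("shallow check:", "pass" if shallow_check(editor) else "fail")  # pass (false confidence)
print("deeper check:", "pass" if deeper_check(editor) else "fail")    # fail (finds the bug)
```

The shallow check ‘passes’ simply because the steps could be performed; only the check that inspects the outcome finds the bug.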

Lest I give you the impression that I think automation has no value:

  •  Automation is capable of testing a very broad range of functionality relatively quickly and frequently. In fact, automation is capable of testing with a breadth and frequency that would otherwise be impossible. It may be costly in time and money to realise this capability, but as I said, automation can give you coverage that can’t effectively be obtained any other way.
  • Automation is good for picking up regression issues: if there’s no automation, regression issues are typically only picked up due to a happy accident.
  • Automation is typically costly to develop but if and when automated tests are running reliably, they can be executed with little cost.

Thoughts on test case design and structure

Apologies if this offends anyone, but in my experience testers are generally pretty mediocre at designing and structuring test cases. Part of the reason is that, as far as I know, there aren’t any tertiary courses that teach much, if anything, about how to design and structure test cases. I think I’ve learnt some good ways to approach this, so:

  • You don’t need test cases for things you’ll do anyway. Test cases are prompts to do things you may not have thought of, or may not remember (ie they can act as a checklist). For example, say a command button is going to be added to a window. You don’t need a test case that checks whether that button is there. Rather, what you need are test cases for what that button should do. Eg, if a value within a certain range should be returned when that button is selected, you need test cases for that. Whether the button is there or not will obviously be covered implicitly by those test cases, and if it isn’t there, the tester will immediately identify that problem anyway; you don’t need a test case for that!
  • It is not the purpose of test cases to provide detailed instructions on how to use the application – that is the purpose of ‘help’ documentation. Test cases should assume that the reader is familiar with the application they’re testing and the terminology it uses; if testers aren’t familiar with the application they’re testing, they won’t test it effectively anyway. If a tester needs to know more about the functionality they’re testing, there should be separate documentation for that. Test cases need to be as brief as possible whilst being understandable to someone who knows the application. Typically, test cases need to be developed quickly because functionality is complicated and schedules are tight, so testers should not effectively be required to write ‘help’ documentation as well as test cases. The job of testers is to analyse an application for potential points of failure and test accordingly; they’re not technical writers. But I’ve seen, more than once, testers asked to write instructions on how to use the system in a typical test case format of execution steps and ‘expected results’. You don’t write ‘help’ documentation in a test case format, and doing so also confuses things because you’re counting as test cases things that aren’t actually test cases at all. People who tell you that anyone should be able to execute your test cases whilst knowing nothing about the application are just plain wrong – they don’t understand testing. Testing can’t be done effectively just by following a script; you need to understand the purpose of the functionality you’re testing in order to know whether it’s working correctly or not.
  • What is the purpose of a test case? Its purpose is to target what, after analysis, is considered to be a potential point of failure in the application. A potential point of failure may, for example, be a point at which it’s considered a regression issue might arise due to some change in the application, or (of course) some newly added function may not work correctly. If you’re using an application but you don’t believe or don’t know if there is a potential point of failure in the functionality you’re using, you’re not testing and you are not actually executing a test case (even if the steps you’re following are documented in something called a ‘test case’). You might find an issue but it would only be by accident. Testing is intelligently targeting potential points of failure.
  • As a general rule, when structuring test cases, don’t worry about their order of execution. Just be concerned with identifying potential points of failure and ensuring those are covered by adequate test cases. Trying to organise your test cases into the order in which they will be executed is often difficult, and really just a distraction.
  • Give each of your test cases a unique ID number, and don’t change the ID number you give to a test case. This is so that if one test case needs to refer to another, you simply refer to it by the relevant test case ID, which won’t change. If you have say test case ID 2 and test case ID 3, and decide you want to add a test case between test cases 2 and 3, then you can insert the test case and give it ID 2.1. If you want to add another test case under that, it can be given ID 2.1.1 and so on. The important thing is that the test case ID numbers don’t change.
  • A useful way to structure execution steps in a test case is to separate ‘set-up’ steps from ‘test’ steps. If it’s thought that the tester needs to be told to do something before they can execute the actual test (which is what targets a potential point of failure), that can be covered in the set-up steps. For example, a test case might say:
    • Set-up: identify or set up a customer that owes a debt.
    • Test: process a payment from that customer that is less than the amount owed.
    • Expected result: the amount owed by the customer remains positive and is equal to the original debt less the amount paid.
  • Notice the following about the above:
    • The set-up step does not provide details about how to set up a customer with a debt; in fact, it allows that there may already be one in the system that you can just use. This includes only what’s necessary, and reduces the time needed to develop the test case.
    • The ‘Test’ step is the one that targets the potential point of failure, which in this case is that the application may not calculate the correct amount owed where a customer’s debt is partially paid.
  • Another benefit of giving test cases static unique IDs and separating out set-up steps (as above) is that if the needed set-up steps are contained in another test case, then all you need to put in your test case is ‘Set-up: test case ID 4 has been executed.’
  • Don’t feel you need to have expected results – or at least exact expected results – for every test case. Very often, if you’re developing test cases for functionality that hasn’t been built yet, you won’t know precisely what the result will be, and there may well be multiple results that could all be correct. For example, maybe the application should only allow digits to be entered in a field, but you don’t know precisely how the application will validate this. It might be that it simply won’t allow entry of non-digits in the first place, or it might return an error message after an attempt has been made to enter non-digits. Either could be correct. Further, it’s often the case that if development of the application hasn’t started, the main reason you want to test something is that you’re not sure what the result should be. If that’s the case, you can’t give a precise expected result in the test case. Sometimes, it may be appropriate to refer the tester to the relevant part of a requirements document for the correct result, rather than effectively just repeat what has already been documented.

This concludes my thoughts on test case design and structure, for now.

Be Agile, but…

To many, software testing may seem a boring and unimpressive subject. We speak of ‘software development’, an integral part of which is testing, but when most people think of who does ‘software development’ they think of programmers, not testers. In other words, in application development, testing can have a relatively low status.

It concerns me that people in software testing can get so focussed on testing methods that they forget that, ideally, there would be no need to test because software would be built without defects. Therefore, of course, we must always be thinking about how we can avoid defects in the first place. Somewhat comically, the introduction of defects is in the financial interest of testers and testing companies: think how much less effort would have to be devoted to testing if the number of defects dramatically decreased! If this were to occur generally, imagine the concern of vendors of user interface automation tools! Some might argue that I’m too idealistic, but it is illogical to aim for less than perfection. You will never get there, but that must be your goal.

To get to a more specific testing subject: I wanted to discuss testing within the now ubiquitous Agile methodology. Agile makes a lot of sense to me as it recognises realities such as:

• Attempting to document all the low-level requirements for a project upfront is largely a waste of time. A great many obstacles and opportunities only become apparent as the software is developed.
• Development is unpredictable, and really accurate estimates of time and effort needed can only be given for small pieces of work.
• Developers and testers should mainly manage themselves: they are doing the work; they generally know the best way to do it. And people want to be efficient; no-one likes feeling that they’re wasting their time.
• When people working on complex projects are physically close to each other and verbally communicate a lot, they’re far more efficient and effective than when they aren’t and don’t.

However, I think some of the ways that people think Agile should work are just plain wrong. For example, that you do not document detailed requirements, but rather these are agreed upon in collaborative meetings, and that testers’ acceptance test cases developed as a result of these meetings effectively become requirements. This may happen, but I think it’s a very bad way to go. Requirements and test cases are very different, and need to be documented in very different formats. Business analysis and testing are different roles, with different skill sets. Testers should not effectively become business analysts. A correct Agile process says that, yes, don’t develop detailed requirements upfront for the entire project, but you do develop detailed requirements for relatively small pieces of functionality. Of course, there is and must be a role for business analysts in development, be it Agile or otherwise.
