Tuesday, May 12, 2015

Improving Your Test Language - Automation

We think in language, and we communicate using language. Or rather, we perform a set of string translations that afford some meaning to someone else. I often look to Michael Bolton for clarity in the way we speak and write about testing, and his recent TestBash video is directly on the subject. As I hadn't updated my blog in months, I thought I'd post a little about the disjointed way some people in the industry talk and write about testing, and how you might avoid doing the same - or, if you'd rather carry on, at least understand the pitfalls of the language you're using and what you might be communicating to other people. A fair warning: this won't be news to many of you.

Automation

There is no automated testing. "Automated testing" and "manual testing" are long-standing and extremely adhesive terms, but they should still be treated with due care. Testing is a human performance and cannot (currently) be automated. "Manual testing" really just means "testing with your hands". In software testing it cannot mean "testing without tools", because tools are required to interact with the product at all. One might describe hands and eyes as tools, or the computer itself, the screen, the hardware, the operating system, peripherals, the browser, and so on. These are all tools that help you (enable you) to test the software.

I think that to most people automation means having the computer execute a check and report the findings (automatic checking). At some point someone needs to design the checks and the reports, and at some point someone has to read the results, interpret them, and assign them meaning and significance. These are the points where humans interact with the tools being used - an example of tool-assisted testing.
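To make that concrete, here's a minimal sketch of an automatic check in Python. The function under test, apply_discount, is entirely hypothetical - the point is where the humans sit: one designed the check, the machine executes and reports it, and one still has to read the result and decide what it means.

    def apply_discount(price, percent):
        """Hypothetical code under test: reduce a price by a percentage."""
        return round(price * (1 - percent / 100.0), 2)

    def check_discount():
        # A human decided these inputs and this expectation (check design)...
        expected = 90.00
        actual = apply_discount(100.00, 10)
        # ...the machine compares and reports (check execution)...
        if actual == expected:
            print("PASS: 10% off 100.00 -> {0}".format(actual))
        else:
            print("FAIL: expected {0}, got {1}".format(expected, actual))
        # ...and a human still has to decide whether PASS means "no problem
        # worth reporting" or FAIL means "a problem that matters".

    if __name__ == "__main__":
        check_discount()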

That point is important - tool-assisted testing. All software testing is tool-assisted to some degree, and once one realises this, testing no longer splits neatly into two boxes, one labelled manual testing (executing test cases and some of "The Exploratory Testing") and the other automated testing (a suite of automatic checks). Unfortunately it's more complex and complicated than that. We need to look at testing as humans trying to find information; from that starting point we introduce tools where their benefits outweigh their costs.

Well-written and maintainable automatic check tools can be really useful, but we must remember their costs - not just the up-front financial costs, time costs, opportunity costs, maintenance costs, training costs and so on, but the fact that a tool is an abstraction layer between the user and the software, and abstraction layers leak. Automation (automatic checking) does not "notice" things it hasn't been strictly programmed to notice. Nor does it use the same interface to the software as a human does (although hopefully it's similar in the ways that matter). Nor can it find a problem and dig deeper - it cannot recognise a false positive or a false negative for what it is. Nor can it assess risk and make meaningful decisions. Nor can it do much of anything else a tester does. Nor can it do much that a tester couldn't do with "manual tools" (tools that aren't automatic check execution systems), given enough time. An automatic check suite "passing" just means that the coded checks didn't report a failure, not that suitable coverage was achieved.
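To illustrate that first limitation, here's a small hypothetical sketch. The response dictionary stands in for whatever an API call or page scrape might return; the coded check inspects exactly the two fields it was programmed to inspect, "passes", and stays oblivious to a mangled heading any human would spot at a glance.

    # A hypothetical result from the product under test.
    response = {
        "status": 200,
        "total": "19.99",
        # A human glancing at the rendered page would notice this
        # immediately; the checks below never look here.
        "body": "<h1>Yoru Ordr</h1><p>Total: 19.99</p>",
    }

    assert response["status"] == 200     # programmed to notice this...
    assert response["total"] == "19.99"  # ...and this, and nothing else
    print("PASS - the garbled heading goes entirely unnoticed")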

All of this should empower you. It breaks a dichotomy of monolithic terms into a rainbow of possibilities. It should give you free rein to consider tools in your testing that make your testing more powerful, more fun, more efficient, even make testing possible... all at a cost that you've thought about. Some of these costs can be reduced in the same way that testing itself can be improved: through modularity and simplicity. Choosing simple, small or established tools - PerlClip, LICEcap, logs and log-alerting systems, and one of my favourites, Excel - can give you flexibility and power. You can pick the tools that suit your context: your favourite tools will differ greatly from mine, and from one testing session to the next. I might use Fiddler today, Wireshark tomorrow, and Excel the next.
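As a taste of how small such a tool can be: PerlClip's best-known trick is the "counterstring", a string that labels its own length at every asterisk, handy for probing input-field limits. Here's my own quick sketch of the idea in Python (not PerlClip's implementation):

    def counterstring(length, marker="*"):
        """Build a self-describing string: each marker sits at the 1-based
        position named by the digits immediately before it."""
        parts = []
        pos = length
        while pos > 0:
            chunk = str(pos) + marker
            if len(chunk) > pos:
                chunk = marker * pos  # no room left for digits; pad instead
            parts.append(chunk)
            pos -= len(chunk)
        return "".join(reversed(parts))

    # counterstring(25) -> "*3*5*7*10*13*16*19*22*25*"
    # Paste it into a field: the last visible number tells you roughly
    # how many characters the field actually accepted.
    print(counterstring(25))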

You don't have to choose between "Automated Testing" and "Manual Testing". They don't really exist. Put yourself in charge of your own testing and leverage the tools that get you what you want. Remember, though, that the tools you've chosen to help your processes will affect the processes you choose.