Effective test automation

So what does it mean to have effective test automation? Let's say I can automate 10 000 tests a day. Impressive number, isn't it? Does it mean that my test automation is effective? We can hardly tell without knowing more. It turns out that test automation effectiveness has little to do with how many tests you can automate in a day. To judge effectiveness, we need to think about what value test automation adds.

That leads me to the thought that "test automation" may really be the wrong term. Test automation is an activity, not a product. Even worse, the product that test automation creates (the value it produces) may be created by other means, too.

What value does test automation create? Faster execution of 70 000 UI test scenarios? I don't think so. A nice test result report? Well, maybe, but not necessarily. My favourite one: "elimination of human error in testing". I won't even bother to comment.

So what is the value it creates? My best guess nowadays would be that testing is an activity within the feature development process, and its goal is to ensure that a feature's implementation adds more value than risk. Test automation is one of the tools that makes the testing activity faster, so you can move quickly.

If we take more concrete examples, I think test automation is effective if:
  1. It is properly distributed between levels (unit, integration, UI), so
  2. it takes a reasonable amount of time to run, to
  3. provide reasonably accurate quality feedback about the product.

Properly distributed between levels (unit, integration, UI)

Well, pretty much everyone knows about the test pyramid. The nice thing that comes as part of the package is that when you add tests at all levels, you also unintentionally create better-designed, more maintainable software. And it is also probably the only scalable approach to keeping the overall test run time reasonable.
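
To make the distribution concrete, here is a minimal sketch in Java with JUnit 5. The PriceCalculator class is hypothetical, invented just for illustration; the point is that business rules like this are verified by unit tests that run in milliseconds, so only a thin slice of behaviour is left to the slow UI level.

  import static org.junit.jupiter.api.Assertions.assertEquals;

  import org.junit.jupiter.api.Test;

  // Hypothetical class under test, invented for this example.
  class PriceCalculator {
      double applyDiscount(double price, double percent) {
          return price - price * percent / 100.0;
      }
  }

  // Unit level: runs in milliseconds, no browser, no network.
  // The bulk of the suite should live at this level.
  class PriceCalculatorTest {
      @Test
      void appliesPercentageDiscount() {
          assertEquals(90.0, new PriceCalculator().applyDiscount(100.0, 10.0), 0.001);
      }

      @Test
      void zeroDiscountKeepsThePrice() {
          assertEquals(100.0, new PriceCalculator().applyDiscount(100.0, 0.0), 0.001);
      }
  }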

Takes a reasonable amount of time

Let's say less than 30 minutes. That makes it possible to run the suite after each commit or merge to master, so tests are run often. It is crucial to run them as often as possible, so that problems don't accumulate and at any moment we have an up-to-date picture of the product's quality.
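
One way to keep the per-commit run short is to split the suite by speed. Here is a minimal JUnit 5 sketch; the tag names "fast" and "slow" are my own convention, not a standard, and the Maven command assumes the Surefire plugin with JUnit 5 support.

  import org.junit.jupiter.api.Tag;
  import org.junit.jupiter.api.Test;

  class CheckoutTests {

      @Tag("fast")   // runs on every commit or merge to master
      @Test
      void totalIsTheSumOfLineItems() {
          // fast, in-memory check of the checkout logic
      }

      @Tag("slow")   // runs in a nightly or pre-release pipeline
      @Test
      void fullCheckoutFlowThroughTheUi() {
          // slow end-to-end scenario driving a real browser
      }
  }

  // Per-commit run, fast tests only:
  //   mvn test -Dgroups=fast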

Reasonably accurate quality feedback

Meaning that if tests fail, it indicates a real product problem; and if no tests fail, the product may still have bugs in some peculiar scenarios, but it is reasonably free of catastrophe-level bugs and can be deployed on the spot, at least to a Beta environment.
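
"Accurate" here mostly means that failures are real. Flaky UI tests are the usual way accuracy gets lost, and fixed sleeps are a common cause. Here is a sketch in Java with Selenium (the URL and the element id are made up for illustration): an explicit wait fails only when the element genuinely never appears, not whenever the page happens to be slow.

  import java.time.Duration;

  import org.openqa.selenium.By;
  import org.openqa.selenium.WebDriver;
  import org.openqa.selenium.chrome.ChromeDriver;
  import org.openqa.selenium.support.ui.ExpectedConditions;
  import org.openqa.selenium.support.ui.WebDriverWait;

  public class AccurateFeedbackExample {
      public static void main(String[] args) {
          WebDriver driver = new ChromeDriver();
          try {
              driver.get("https://example.com/orders"); // made-up URL

              // Explicit wait: the test fails only if the element truly
              // never shows up within 10 seconds, not because the page
              // was momentarily slow. Fewer false alarms means more
              // accurate quality feedback.
              new WebDriverWait(driver, Duration.ofSeconds(10))
                  .until(ExpectedConditions.visibilityOfElementLocated(
                      By.id("order-list"))); // made-up element id
          } finally {
              driver.quit();
          }
      }
  }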

This definition is probably overly biased by the environments I have worked and am working in, and it is also likely to change and evolve over time. I would be glad if somebody could offer alternative definitions.
