Effective test automation

So what does it mean to have effective test automation? Let's say I can automate 10,000 tests a day. Pretty big figure, ain't it? Does it mean my test automation is effective? We can hardly tell without knowing other things. It turns out that test automation effectiveness has little to do with how many tests you can automate in a day. To judge effectiveness, we need to think about what value test automation adds.

That leads me to the thought that test automation may really be the wrong term. Test automation is an activity, not a product. Even worse, the product that test automation creates (the value it produces) may be created by other means, too.

What value does test automation create? Faster execution of 70,000 UI test scenarios? I don't think so. A nice test result report? Well, maybe, but not necessarily. My favorite one: "elimination of human error in testing". I will not even bother to comment.

So what value does it create? My best guess nowadays would be that testing is an activity within the feature development process, and its goal is to ensure that a feature implementation adds more value than risk. Test automation is one of the tools that makes the testing activity faster, so you can move quickly.

If we take a more concrete example, I think test automation is effective if:
  1. It is properly distributed between levels (unit, integration, UI), so
  2. It takes a reasonable amount of time to run, to
  3. Provide reasonably accurate quality feedback about the product

Properly distributed between levels (unit, integration, UI)

Well, pretty much everyone knows about the test pyramid. The nice thing that comes as part of the parcel is that when you do add tests at all levels, you also unintentionally create better-designed and more maintainable software. And it is probably the only scalable approach to keep test runs fast as the product grows.
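To illustrate the pyramid's lowest level, here is a minimal unit test sketch in Python, assuming a hypothetical `apply_discount` domain function (the name and logic are made up for illustration). Tests like this need no browser and no network, which is exactly why a unit-heavy distribution scales.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical domain function: price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Unit tests exercise one function in isolation, so they run in
    # milliseconds; thousands of them still finish in seconds.
    def test_regular_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

The same behavior could also be checked through the UI, but that would cost seconds per scenario instead of milliseconds, which is the trade-off the pyramid is about.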

Takes reasonable amount of time

Let's say less than 30 minutes, so a run is possible after each commit or merge to master, and tests are run often. It is crucial to be able to run as often as possible so problems don't accumulate and at any moment we have a current picture of product quality.
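One way to keep that constraint honest is to have the pipeline itself enforce the budget. A minimal sketch, assuming the suite's wall-clock duration is available in seconds (the function name and the 30-minute figure are just the threshold discussed above, not any real CI API):

```python
SUITE_BUDGET_SECONDS = 30 * 60  # the "reasonable" per-commit threshold

def within_budget(duration_seconds: float,
                  budget_seconds: int = SUITE_BUDGET_SECONDS) -> bool:
    """Return True if a test run fits the per-commit time budget."""
    return duration_seconds <= budget_seconds

# A CI step could fail the build when the suite outgrows the budget,
# prompting the team to push tests down the pyramid rather than
# quietly accepting a slower feedback loop.
assert within_budget(25 * 60)        # 25-minute run: fine for per-commit
assert not within_budget(45 * 60)    # 45-minute run: too slow to run every commit
```

The point of failing loudly is that a creeping run time is itself a defect of the test suite, not an inevitability.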

Reasonably accurate quality feedback

Meaning that if we have failed tests, it indicates a product problem; and if we have no failed tests, it is still possible that the product has bugs in some peculiar scenarios, but it is reasonably free of catastrophe-level bugs and can be deployed on the spot, at least to a Beta environment.
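That interpretation of test results can be written down as a tiny decision rule. A sketch with hypothetical names, where any failure blocks the deploy and a fully green run is trusted enough for Beta:

```python
def deployment_target(failed: int, passed: int) -> str:
    """Map a test run's outcome to a deployment decision.

    A single failure is treated as a real product problem (no flaky-test
    triage in this sketch), and a green run is trusted enough for Beta,
    without claiming the product is bug-free.
    """
    if failed > 0:
        return "blocked"   # failed tests indicate a product problem
    if passed == 0:
        return "blocked"   # an empty run proves nothing
    return "beta"          # reasonably free of catastrophe-level bugs

assert deployment_target(failed=2, passed=500) == "blocked"
assert deployment_target(failed=0, passed=500) == "beta"
```

If the rule above cannot be trusted in practice (because failures are usually flaky tests, not product problems), that is a sign the feedback is not "reasonably accurate" yet.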

This definition is probably overly biased by the environments I have worked and am working in, and it is also likely to change and evolve over time. I would be glad if somebody could give me alternative definitions.

