Posts from 2019

An alternative to ubiquitous UI-level checking - Subcutaneous tests

Let's assume a hypothetical situation: you were assigned to a project to help with the "test automation" initiative. You have a huge "test plan" as an input, containing hundreds (if not thousands) of "test cases", and you need to do something about it quickly.

Problem statement

Your first urge (if it is a Web application) may be to write some UI-level automated checks using tools like, you know, Selenium. In fact, there is huge demand for such "Selenium test automators" in the industry these days. But please, please, don't do this. There are several things that make UI-level automated checking the least desirable approach:

- UI-level automated checks will be slow. There's just no way to avoid it. You can parallelize them or apply other tweaks to speed them up somewhat, but they will still be slow.
- UI-level automated checks will be flaky. Partly because they're slow. Partly because the Web browser and the UI …
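By way of contrast, here is a minimal sketch of what a subcutaneous check could look like: it exercises the application just below the UI, at the HTTP API layer, instead of driving a browser. This is an illustration only, assuming Python with requests and pytest; the base URL, endpoints and payload fields are hypothetical.

# A subcutaneous check: the same business rule a UI test would cover, minus the browser.
# All endpoints and fields below are hypothetical placeholders.
import requests

BASE_URL = "https://app.example.com/api"  # assumed API root

def test_user_can_create_order():
    session = requests.Session()

    # Authenticate against the service layer directly: no login form, no Selenium
    response = session.post(f"{BASE_URL}/login",
                            json={"user": "demo", "password": "demo"})
    assert response.status_code == 200

    # Exercise the business rule itself
    response = session.post(f"{BASE_URL}/orders",
                            json={"item": "book", "quantity": 1})
    assert response.status_code == 201
    assert response.json()["status"] == "created"

A check like this tends to be much faster and far less flaky than its browser-driven equivalent, which is the core of the argument above.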

There's no such thing as "Best Practice"

I don't usually write replies to other people's articles or blog posts, but recently I stumbled upon an article which I would like to reply to. The article, named "10 Best Practices and Strategies for Test Automation" (original can be found here: https://www.softwaretestinghelp.com/automation-testing-tutorial-7/ ), states: "These strategies are taken from my own experience plus from the literature of testing gurus like Michael Bolton, James Bach and Cem Kaner. These practices should be followed in every automation project." Ironically, James Bach has a great post named "No Best Practice" ( http://www.satisfice.com/blog/archives/27 ), in which he lays out his view on the "Best Practice" idea, which can be briefly summarized as: there's no such thing as "Best Practice". I wholeheartedly support this viewpoint. A similar idea is stated clearly on the very home page of Michael Bolton's website ( https://www.developsense.com/ ). …

A problem with Agile, automated testing and frequent releases

Intro

I turn on my TV set. I start my favourite TV application to watch a TV show. It says there's a new version and insists on updating. Will I have access to new TV shows or movies after this update? Not at all. Will the application work faster? Hardly. Will it be more stable? Hopefully, but there are no guarantees. What does this update give me? A new UI (I was OK with the old one) and the ability to choose which trailer I would like to watch (as if I need more than one). It eats my internet traffic and my time and gives me nothing of value in return.

I need to sort out my finances. I take my cell phone and start an accounting application that works with my bank. It won't start. A connectivity issue, it says. In reality, what I need to do is go to Google Play and update the application. After the update, it looks slightly different, has some new feature I don't need and would hardly use, and obscures the previously learnt path to the features I do need.

The problem

Both …

Some tips to fix flaky Web UI tests

Test automation at the Web UI level (you know, Selenium stuff) is usually brittle and painful. It is usually also the only test-automation approach people are aware of or interested in (for whatever reasons). Below are several suggestions on how to make test automation at the Web UI level less painful and less flaky.

1. Retry failed steps

Each test consists of several steps. Unfortunately, those steps tend to fail from time to time for no apparent reason. Retrying a failed test step may save the whole test from failing (see the sketch below).

Helps with:
- Unresponsive, slow UI
- Elements being shown with a delay
- Wrong moon phase [1]

Drawbacks:
- Sometimes it is just a waste of time, especially when a real issue is being "retried"
- Even worse: sometimes it may hide a real issue that would go away after some time or a page refresh

2. Retry failed tests

From time to time, the odds are just against you on a specific test run. Everyone who has had the misery of working with Web UI …
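To make the first suggestion concrete, here is a minimal sketch of a step-retry helper, assuming Python with Selenium; the locator, attempt count and delays are illustrative, not prescriptive.

# Retry a single flaky UI step a few times before letting the whole test fail.
import time
from functools import wraps

from selenium.webdriver.common.by import By

def retry_step(attempts=3, delay=2.0):
    """Re-run the wrapped step up to `attempts` times, pausing between tries."""
    def decorator(step):
        @wraps(step)
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(attempts):
                try:
                    return step(*args, **kwargs)
                except Exception as error:  # in real code, catch narrower exception types
                    last_error = error
                    time.sleep(delay)
            raise last_error  # retries exhausted: surface the original failure
        return wrapper
    return decorator

@retry_step(attempts=3, delay=2.0)
def open_user_profile(driver):
    # A typical flaky step: the link sometimes renders with a delay
    driver.find_element(By.ID, "profile-link").click()

For the second suggestion (retrying whole tests), it is usually better to lean on the test runner than to hand-roll it; pytest, for instance, has the pytest-rerunfailures plugin with its --reruns option.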