Posts

Showing posts with the label Test Automation

Test automation framework - do I need it at all?

While there's plenty of information published on the internet about how one should develop or buy a Test Automation solution, there's an elephant in the room - the fundamental question "Do I need to do it at all?". That is a good question. While I think there are some legitimate contexts where a product/project/team can benefit from having a sophisticated test automation solution/framework, my general rule of thumb is "If you need a framework - you're doing it wrong". There are at least several decent papers showing the benefit of unit testing ([1], [2], [3]), while I could not find any research proving that system-level tests bring better results. Please let me know if you can prove me wrong. There's at least one half-scientific paper showing the benefits of unit tests in comparison to integration/system-level tests [4]. There's a good paper bringing in the notion of the cost of writing, running and maintaining a test vs. the benefit of having/running it at

An alternative to ubiquitous UI-level checking - Subcutaneous tests

Let's assume a hypothetical situation - you were assigned to a project to help with the "test automation" initiative. You have a huge "test plan" as an input, containing hundreds (if not thousands) of "test cases", and you need to do something about it, quickly. Problem statement: your first urge (if it is a Web application) may be to write some UI-level automated checks using tools like, you know, Selenium. In fact, there's a huge demand for such "Selenium test automators" in the industry these days. But please, please, don't do this. There are lots of things which make UI-level automated checking the least desirable approach: UI-level automated checks will be slow. There's just no way to avoid it. You can parallelize them or apply some other tweaks to speed them up somewhat, but they will still be slow. UI-level automated checks will be flaky. Partly because they're slow. Partly because the Web browser and UI interface w
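To make the title's alternative concrete: a subcutaneous test exercises the application just below the UI, for example by calling the HTTP/service layer directly instead of driving a browser. Below is a minimal sketch of that idea using only the JDK's HTTP client; the /api/todos endpoint, the payload and the expected status code are assumptions for illustration, not details from the post.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateTodoSubcutaneousTest {

    // A subcutaneous check: create a todo item through the service layer,
    // skipping the browser entirely. It runs in milliseconds and does not
    // depend on rendering, locators or browser timing.
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest createItem = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/todos")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"title\":\"Buy milk\"}"))
                .build();

        HttpResponse<String> response =
                client.send(createItem, HttpResponse.BodyHandlers.ofString());

        // Assert on the service contract instead of the rendered page.
        if (response.statusCode() != 201) {
            throw new AssertionError("Expected 201 Created, got " + response.statusCode());
        }
    }
}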

A problem with Agile, automated testing and frequent releases

Intro: I turn on my TV set. I start my favourite TV application to watch a TV show. It says there's a new version and insists on updating. Would I have access to new TV shows or movies after this update? Not at all! Would this application work faster after that? Hardly. Would it be more stable? Hopefully, but no guarantees. What would this update give me? A new UI (I was OK with the old one). The ability to choose which trailer I would like to watch (as if I need more than one). It eats my internet traffic and time and gives me nothing of value in return. I need to sort out my finances. I take my cell phone. I start an accounting application that works with my bank. It won't start. Connectivity issue, it says. In reality, what I need is to go to Google Play and update the application. After the update, it looks slightly different, has some new feature I don't need and would hardly use, and obscures the previously learnt path to the features I need. The problem: Both

Some tips to fix flaky Web UI tests

Test automation at the Web UI level (you know, Selenium stuff) is usually brittle and painful. It is usually also the only test automation approach people are aware of or interested in (for whatever reasons). Below are several suggestions on how one can make her/his test automation at the Web UI level less painful and less flaky.
1. Retry failed steps
Each test consists of several steps. Unfortunately, those steps tend to fail from time to time for no apparent reason. Retrying a failed test step may save your whole test from failing.
Helps with:
- Unresponsive, slow UI
- Elements being shown with a delay
- Wrong moon phase [1]
Drawbacks:
- Sometimes it is just a waste of time, especially when a real issue is being "retried"
- Even worse: sometimes it may hide a real issue that would go away after some time or a page refresh
2. Retry failed tests
From time to time, the odds are just against you with a specific test run. Everyone who has had the misery of working with Web UI
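As a rough illustration of tip 1 (retry failed steps), here is a sketch of a small retry helper wrapped around a single Selenium step. The class and method names are made up for this example, and three attempts with a half-second pause is an arbitrary choice, not a recommendation from the post.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Hypothetical helper: retries a single flaky UI step a few times
// before letting the whole test fail.
public final class RetryingStep {

    @FunctionalInterface
    public interface Step<T> {
        T run() throws Exception;
    }

    public static <T> T withRetries(Step<T> step, int attempts, long pauseMillis) {
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return step.run();
            } catch (Exception e) {
                last = e;                       // remember the failure
                try {
                    Thread.sleep(pauseMillis);  // give the UI a moment to settle
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw new RuntimeException("Step failed after " + attempts + " attempts", last);
    }

    // Example usage: click a button that is sometimes rendered with a delay.
    public static void clickWhenReady(WebDriver driver, By locator) {
        withRetries(() -> {
            WebElement element = driver.findElement(locator);
            element.click();
            return null;
        }, 3, 500);
    }
}

Note the drawback mentioned above still applies: if the element never appears because of a real bug, the helper only delays the failure.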

Test automation framework architecture. Part 2.2 - Two-layered architecture

As we've learned in previous posts of the series, Layered (also known as Tiered) architecture is the de-facto standard of the industry. There are two major variations of this architectural pattern that one should be aware of: 3-layered architecture (described here) and 2-layered architecture. This post is going to concentrate on 2-layered architecture. 2-layered architecture is typically seen in UI-specific test automation frameworks. The key feature of this pattern is the absence of a Business layer, so most of the business-related logic is actually put into Page Objects. In order to avoid "The God class" anti-pattern and still keep business-related logic in Page Objects, one usually uses libraries or frameworks that simplify locating DOM elements, thus letting Page Objects concentrate less on locating elements and more on the business logic itself. The good thing about this architectural pattern is that it actually allows you
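A minimal sketch of what a Page Object might look like in such a 2-layered setup: the test layer talks to the Page Object directly, and the business-level action lives inside it. The page, its locators and the action name here are assumptions for illustration only, not taken from the post.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical Page Object for a 2-layered framework: with no separate
// business layer, the business-level action ("log in as ...") sits directly
// in the Page Object, next to the element locators it needs.
public class LoginPage {

    private final WebDriver driver;

    private final By userField = By.id("username");
    private final By passwordField = By.id("password");
    private final By submitButton = By.cssSelector("button[type='submit']");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Called straight from the test layer; the test never sees locators.
    public void loginAs(String user, String password) {
        driver.findElement(userField).sendKeys(user);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(submitButton).click();
    }
}

A test in the layer above would then read roughly as new LoginPage(driver).loginAs("alex", "secret"), with all the locating and clicking hidden inside the Page Object.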

Agile, testing and more frequent deliveries.

Yet another time I see people using the word Agile wrong. Yet another guy claimed "we need to have more predictability and stability so I think we would go for Agile methodologies, Scrum in particular". Nonsense. I don't think that "predictability, stability, whatevershitability" has anything to do with Agile. If you start an Agile transformation with such a goal, you've already failed. Agile is about maximizing the value delivery rate in a state of uncertainty; that's pretty much what it is. If your business is uncertain as hell, why do you think that "predictability" within engineering teams is going to fix things? It is not. Worse, if you make engineering teams predictable while operating in an unpredictable environment, you will miss the whole point and make things worse while still thinking you're doing fine. Nonsense. An agile team is about removing management layers between engineering and the customer, not about making management happier.

Test automation framework architecture. Part 2.1 - Layered architecture example

Let's consider an example of what Layered (Tiered) architecture may look like for a test automation framework. As a system under test, we're going to use this simple and neat "Todo list" application. First, let's do a brief analysis of the application. From my brief exploration I came up with the following checklist (Area/Group, thing to check - comments):
Todos CRUD
1. Create item - Basic scenario, UI-level case
1.1. Create an item with the same name - The case is not valid. There's no validation for that, so nothing to check really.
1.2. Create an item with an empty/invalid/malicious name - While it would be a nice thing to test manually, from a test automation standpoint I would expect this to be filtered at the service or repository level, so either an integration or a unit test.
2. Mark item complete - Basic scenario, UI-level case
2.1. Un-mark item complete - Basic scenario, UI-level case
3. Delete item - Basic scenario, UI-level case
3.1. Delete item

Why I would prefer writing 100 unit tests to one UI test

Choosing between writing one UI test and 100 unit tests, I would always prefer the second option. Why? Well, I am actually lame at UI testing, but that's not the main reason. The only thing I worry about is speed: how fast can we change the software? Quality? I don't care about it much, because quality standards are contextual and can change. The only thing that is certain is that there are going to be changes, and we most likely don't know what those changes will be about. So there's only one thing that matters: how fast we can adapt to this change. Internal quality is what actually makes us able to change fast. UI tests do not contribute to internal quality. UI tests may even make things worse. That is the reason. And I invite you to Testcon Moscow to talk about this more - I am going to present some more thoughts on this topic there.

Manual and automated testing confusion

I am getting tired of posts about how "automated testing" is going to replace "manual testing". Let me offer you a simple analogy. I have a car that has lots of useful self-check lights, like "check engine", "low gas", "low battery", etc. Automated tests are similar to those lights: if a light is on, most likely something is wrong and I need a mechanic to look at my car. Does having those self-check lights mean I don't need to visit a mechanic for a human-driven check yearly? It doesn't. The fact that a self-check light is not on does not mean my car is OK, does it? Areas well covered with self-checks may require less time to check (as the risk of them being broken is lower). However, there are lots of areas that are not feasible to cover with self-checks. There are some car parts I am not even aware of, while the mechanic knows their weak points and can find an issue in a couple of minutes. It is possible to have only automated checks and not have

Two different views on Test Automation

I have published several posts with the aim of delivering one message: sometimes it is more efficient (fast and convenient) to change the application under test (make it more testable or eliminate the need for testing at all) than to invent or employ complicated test automation techniques to check the same functionality. Even though there was a lot of misunderstanding caused by how badly those posts were worded, I still think I struck something deeper. For instance, this twitter post made me think that we speak two different languages: That's like asking a pharma company to self-certify that their drugs are safe without any independent approval! #softwaretesting #CIO - Ayush Trivedi (@ayushtrivedi) 4 September 2017. Now it started to seem to me that there are two different views on what test automation is. The first, probably prevailing, point of view is that test automation is part of the ages-old traditional QA process, where a test automation specialist is just a test specialist using

The broken concept of a Page object, or Why Developers Should Be Responsible For Test Automation

Preface: I am in the middle of writing a series of posts about test automation framework architecture. I am still going to continue that series, even though this post kind of devalues the whole test automation framework concept a bit. Sorry for that, I just can't stop ranting. Looking through the internet I spotted a couple of posts where some test automation specialists were talking about the "page object" "pattern"/"model" as if it were something special they had invented. Well, I also have something to say about "page object". "Java test automation engineers were told that there are different patterns than a PageObject" ( http://classicprogrammerpaintings.com/post/153817288474/java-test-automation-engineers-were-told-that ) "Page object" may be described as a pattern that allows us to decouple the things you can do with the web page (an external interface describing test/business logic) from the real implementation code you will h
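One way to read that decoupling idea in code: the "things you can do with the page" become an interface, and the Selenium-specific details stay in an implementation class behind it. This is only an illustrative sketch under that assumption; the interface and class names, the TodoMVC-style selectors and the xpath are invented here, not taken from the post.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// External interface: what a test is allowed to do with the page,
// expressed in business terms only.
interface TodoPage {
    void addItem(String title);
    boolean isItemShown(String title);
}

// Real implementation: the only place that knows about WebDriver,
// locators and other mechanics of the page.
class SeleniumTodoPage implements TodoPage {

    private final WebDriver driver;

    SeleniumTodoPage(WebDriver driver) {
        this.driver = driver;
    }

    @Override
    public void addItem(String title) {
        driver.findElement(By.cssSelector(".new-todo")).sendKeys(title + "\n");
    }

    @Override
    public boolean isItemShown(String title) {
        return !driver.findElements(By.xpath("//label[text()='" + title + "']")).isEmpty();
    }
}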

Test automation framework architecture. Part 2 - Layered architecture

Probably the most popular architecture pattern used for test automation frameworks (TAF) is layered architecture. This pattern is so well known that in job interviews at some companies, when they ask you about TAF architecture, you are expected to describe this one. If you don't, they think you know nothing about architecture altogether. I suggest you first read a brilliant description of the pattern on the O'Reilly web page, because in this post I am going to describe the pattern the way it is usually applied to build a test automation solution. Usually, there are three distinct layers, which may have different names but mostly follow the same logic. Sometimes those layers are called the test layer, the business layer and the core layer, but there are no standard names really. The key rules of layered architecture are the dependency direction (each layer depends on the layer below) and the call direction (no layer can call/reference code defined in the layer above). The rough
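A very condensed sketch of how those three layers might line up in code, following the dependency rule described above (the test layer calls the business layer, the business layer calls the core layer, and nothing calls upwards). The class names, locators and SUT URL are illustrative assumptions only.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Core layer: a thin wrapper around the tool (Selenium here); it knows
// nothing about the business domain or about the tests.
class BrowserCore {
    private final WebDriver driver;
    BrowserCore(WebDriver driver) { this.driver = driver; }
    void open(String url) { driver.get(url); }
    void type(By locator, String text) { driver.findElement(locator).sendKeys(text); }
    String textOf(By locator) { return driver.findElement(locator).getText(); }
}

// Business layer: domain-level steps; depends only on the core layer.
class TodoSteps {
    private final BrowserCore browser;
    TodoSteps(BrowserCore browser) { this.browser = browser; }
    void openApp() { browser.open("http://localhost:8080/"); } // hypothetical SUT URL
    void addTodo(String title) { browser.type(By.cssSelector(".new-todo"), title + "\n"); }
    String itemsCounter() { return browser.textOf(By.cssSelector(".todo-count")); }
}

// Test layer: depends only on the business layer and never touches the tool.
class AddTodoTest {
    void userCanAddTodo(TodoSteps steps) {          // would be a @Test method in JUnit
        steps.openApp();
        steps.addTodo("Buy milk");
        assert steps.itemsCounter().contains("1");  // a real test would use an assertion library
    }
}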

Test automation framework architecture. Part 1 - No architecture

Most of the "UI test automation" tutorials I have seen describe the Test Automation Solution where Selenium Web Driver is used directly from test methods and no additional abstraction layer exists. This "architectural pattern" is so ubiquities, that I decided to describe this as well. I think we can call this "architectural pattern" as No architecture . The structure of a test automation solution created with No architecture pattern presented (in a very rough way) on the picture below: Such test automation solution usually consists of some amount of test classes, each containing some number of test methods. All orchestration of interaction with System Under Test (SUT) is done either (using JUnit terms) in setUp and tearDown methods, or test methods themselves. Tool, whatever it is - Selenium, RestAssured, Selenide - called directly in test methods. Such an approach is the industry well-known anti-pattern. However, in some cases, it may be ok

Test automation framework architecture. Preface.

Some time ago I wrote this post describing my understanding (at that time) of the architectures used to create a test automation solution (framework). While there's some information I still agree with, my understanding has evolved and I want to share it with others. First, let's coin the terminology I use (which is not necessarily the right one - feel free to suggest something different).
Test automation framework - a framework that allows one to write automated tests. Usually one means a specialized framework, i.e. a framework that is specialized for one or several related applications under test. A framework does not include the tests themselves.
Test automation solution - a complete solution used for test automation. It includes everything needed to perform the automated tests, including the tests themselves. A solution may be based on a framework; however, that is not mandatory.
Architecture - may be described as an imaginary model t

An effective test automation

So what does it mean to have effective test automation? Let's say I can automate 10 000 tests a day. An impressive number, isn't it? Does it mean that my test automation is effective? We can hardly tell without knowing other things. It turns out that test automation effectiveness has little to do with how many tests you can automate a day. To judge effectiveness we need to think about what value test automation adds. That leads me to the thought that test automation may really be the wrong term. Test automation is an activity, not a product. Even worse, the product that test automation creates (the value it produces) may be created by other means, too. What value does test automation create? Faster execution of 70 000 UI test scenarios? I don't think so. A nice test result report? Well, maybe, but not necessarily. My favourite one: "elimination of human error in testing". I will not even bother to comment. So what is the value it creates? My best guess nowadays would be that testing is a

Test needs model - when test and test automation meet

So, you have decided that your project needs test automation because manual testing is too slow or too costly to address your needs. How would you approach that? In essence, there are two opposite approaches to test automation. Let's call them "test-first" and "automate-first". Test-first approach: in this realm, you have the luxury of having somebody actually design a test case for you so you can automate it. Quite often, this is the place where many "test automation" engineers would like to be. This means they are actually developers, because they have requirements (which is a test case) and a product (which is an automated test case). They also have a clear goal (automate that) and they are not responsible for the quality of the product under test; they are responsible only for the quality of their automated tests. All things considered, this approach is actually quite sustainable from a quality control perspective. Provided automated tests are

Say Hello

Hey everybody. My name is Alexander Pushkarev (feel free to call me Alex). I've been working in IT roughly since 2006, and most of my work experience is about testing, automated testing, and test automation. Even though I do enjoy programming, I find the task of testing whether a thing was programmed well more challenging and exciting. Besides, I would rather do a little thing well than a big thing badly, if you know what I mean. Currently, I mostly use Java for my work, but I have also worked with the .NET Framework (and have warm feelings about it), and I adore languages like Python and Kotlin, so chances are I will post most examples in one of these languages or platforms. My ultimate goal is to achieve an effective and feasible quality control approach for projects, and I am going to share my ideas with a wide audience. I do this both to be heard and to be criticized, so I can see and learn from my mistakes. That being said, I enjoy constructive conflicts, and if you see I am writing