
Showing posts from 2017

Manual and automated testing confusion

I am getting tired of posts about how "automated testing" is going to replace "manual testing". Let me offer you a simple analogy. I have a car with lots of useful self-check lights, like "check engine", "low gas", "low battery", etc. Automated tests are similar to those lights: if a light is on, most likely something is wrong and I need a mechanic to look at my car. Does having those self-check lights mean I can skip the yearly human-driven check at the mechanic? It doesn't. The fact that a self-check light is not on does not mean my car is OK, does it? Areas well covered by self-checks may require less time to inspect (as the risk of them being broken is lower). However, there are lots of areas that are not feasible to cover with self-checks. There are some car parts I am not even aware of, while the mechanic knows their weak points and can find an issue in a couple of minutes. It is possible to have only automated checks and not have ...

Perceived quality of software may have dropped, but testing is not the answer.

"Modern software is of a lower quality that it was in a past". Maybe. Perceived quality of software may have decreased, but I don't think that "more testing" is a proper solution. More testing does not mean more quality More testing may find more issues, but not necessarily. Simply spending more time on the activity does not mean results would be better. And somebody needs to fix bugs, test bug fixes. So we can't tell that more testing means more quality but certainly means bigger costs. Software quality != No bugs Bugs matter, but that's hardly the only factor to measure quality level. There are a whole bunch of other things that matter: UX, number of features, documentation, price, delivery model, cute logos... Spending more time on testing may mean spending less time on these things. Adequate quality level So it is obvious that one can't spend 100 years testing every possible case. I think that each product has some Adequate le...

Agile is not the goal, but the means

So we sit in a room and discuss the transformation plan for a company that has been working in a waterfall manner for nearly a decade. The final goal is to switch to iterative development with somewhat small iterations (let's say, 3 weeks). The "only" problem we have is that we have 3 weeks of manual regression, and it does not really fit the plan. We have options: a long waterfall-like sprint, where in the first half of the sprint we develop features and in the second half we test them; or a series of short sprints plus one "hardening"/"release" sprint dedicated to regression when we finally decide to release things. There's a third obvious option, which is to decrease the regression time, but no one seems to know how that could be done in the next 5 years. So this gets dropped. "We need to have a potentially shippable increment each sprint, 'hardening' sprints are not Agile!" says one man in the room. Well, that may be true. Maybe ...

Two different views on Test Automation

I have published several posts with the aim of delivering one message: sometimes it is more efficient (fast and convenient) to change the application under test (make it more testable or eliminate the need for testing at all) than to invent or employ complicated test automation techniques to check the same functionality. Even though there was a lot of misunderstanding caused by how poorly I phrased those posts, I still think I struck something deeper. For instance, this Twitter post made me think that we speak two different languages: "That's like asking a pharma company to self-certify that their drugs are safe without any independent approval! #softwaretesting #CIO" — Ayush Trivedi (@ayushtrivedi) 4 September 2017. Now it starts to seem to me that there are two different views on what test automation is. The first, probably prevailing, point of view is that test automation is part of an ages-old traditional QA process, where a test automation specialist is just a test specialist usin...

The broken concept of a Page object, or Why Developers Should Be Responsible For Test Automation

Preface: I am in the middle of writing a series of posts about test automation framework architecture. I am still going to continue that series, even though this post kind of devalues the whole test automation framework concept a bit. Sorry for that, I just can't stop ranting. Looking through the internet I spotted a couple of posts where some test automation specialists were talking about the "page object" "pattern"/"model" as if it were something special they had invented. Well, I also have something to say about the "page object". "Java test automation engineers were told that there are different patterns than a PageObject" ( http://classicprogrammerpaintings.com/post/153817288474/java-test-automation-engineers-were-told-that ) "Page object" may be described as a pattern that allows us to decouple the things you can do with the web page (an external interface describing test/business logic) from the real implementation code you will h...
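As a minimal sketch of that decoupling (all class names, element ids and locators below are hypothetical, not taken from the post), a page object in Java with Selenium might look like this:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical page object: exposes what a user can do on the login page
// and hides the Selenium locators and calls behind a business-level method.
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Test code calls this method instead of touching By.id(...) itself.
    public void loginAs(String username, String password) {
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("loginButton")).click();
    }
}
```

The test then reads as "log in as this user", and only the page object knows which elements are involved.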

Test automation framework architecture. Part 2 - Layered architecture

Probably the most popular architecture pattern used for test automation frameworks (TAF) is layered architecture. This pattern is so well known that in job interviews at some companies, when they ask you about TAF architecture, you are supposed to describe this one. If you don't, they think you know nothing about architecture at all. I suggest you first read the brilliant description of the pattern on the O'Reilly web page, because in this post I am going to describe the pattern the way it is usually applied to build a test automation solution. Usually, there are three distinct layers, which may have different names but mostly follow the same logic. Sometimes those layers are called the test layer, business layer and core layer, but there are no standard names really. The key rules of layered architecture are the dependency direction (each level depends on the level below) and the call direction (no level can call/reference code defined in the level above). The rough...
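To make the dependency and call directions concrete, here is a minimal sketch of the three layers in Java (class names, locators and the checkout scenario are made up for illustration, not taken from the post):

```java
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Core layer: a thin wrapper around the tool (Selenium here).
class BrowserActions {
    private final WebDriver driver;
    BrowserActions(WebDriver driver) { this.driver = driver; }
    void click(By locator) { driver.findElement(locator).click(); }
    void type(By locator, String text) { driver.findElement(locator).sendKeys(text); }
}

// Business layer: domain-level steps; depends only on the core layer below it.
class CheckoutSteps {
    private final BrowserActions browser;
    CheckoutSteps(BrowserActions browser) { this.browser = browser; }
    void payWithCard(String cardNumber) {
        browser.type(By.id("card"), cardNumber);
        browser.click(By.id("pay"));
    }
}

// Test layer: calls only the business layer, never Selenium directly.
public class CheckoutTest {
    @Test
    public void paymentSucceeds() {
        CheckoutSteps checkout = new CheckoutSteps(new BrowserActions(new ChromeDriver()));
        checkout.payWithCard("4111111111111111");
        // assertions on the resulting state would go here
    }
}
```

Each class references only the layer directly below it, which is exactly the dependency and call direction rule described above.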

Test automation framework architecture. Part 1 - No architecture

Most of the "UI test automation" tutorials I have seen describe a test automation solution where Selenium WebDriver is used directly from test methods and no additional abstraction layer exists. This "architectural pattern" is so ubiquitous that I decided to describe it as well. I think we can call this "architectural pattern" No architecture. The structure of a test automation solution created with the No architecture pattern is presented (in a very rough way) in the picture below. Such a test automation solution usually consists of some number of test classes, each containing some number of test methods. All orchestration of interaction with the System Under Test (SUT) is done either (using JUnit terms) in setUp and tearDown methods, or in the test methods themselves. The tool, whatever it is (Selenium, RestAssured, Selenide), is called directly in the test methods. Such an approach is a well-known industry anti-pattern. However, in some cases, it may be ok...
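A minimal sketch of what such a test tends to look like (the URL, element ids and assertion are hypothetical, not from the post): WebDriver is created in setUp, driven straight from the test method, and quit in tearDown, with no layer in between.

```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.Assert.assertTrue;

// "No architecture": the tool is called directly from the test class.
public class LoginTest {
    private WebDriver driver;

    @Before
    public void setUp() {
        driver = new ChromeDriver();
        driver.get("https://example.com/login"); // hypothetical URL
    }

    @Test
    public void userCanLogIn() {
        // All interaction with the SUT happens right here in the test method.
        driver.findElement(By.id("username")).sendKeys("user");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("login")).click();
        assertTrue(driver.getTitle().contains("Dashboard"));
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}
```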

Test automation framework architecture. Preface.

Some time ago I wrote this post describing my understanding (at that time) of the architectures used to create a test automation solution (framework). While there's some information I still agree with, my understanding has evolved and I want to share it with others. First, let's coin the terminology I use (which is not necessarily the right one; feel free to suggest something different). Test automation framework: a framework that allows one to write automated tests. Usually one means a specialized framework, i.e. a framework that is specialized for one or several related applications under test. A framework does not include the tests themselves. Test automation solution: a complete solution used for test automation. It includes everything needed to perform the automated tests, including the tests themselves. A solution may be based on a framework; however, that is not mandatory. Architecture: may be described as an imaginary model t...

Agile team - rock band metaphor

I've been thinking about a proper metaphor for traditional (aka waterfall) and adaptive (aka agile) processes and came to this: let's think about a rock band. Waterfall: you can create a neat studio album using random people. First, you record the drums, then the bass. After that, you can try different guitars... If the guitar sounds bad you try to fix it or use something else. Then you do mastering and mixing, adding effects and creating a nice cover. In a couple of months, you will have an album with a clean sound, neat content and a great cover. The key is that here you should pay more attention to the process. You need to have a good plan (lyrics and music). This is going to be lengthy, but you will be fine even if the bass guitarist decides to leave in the middle of the process - just arrange a replacement. In music it works fine, but in business, time plays against you and the plan may not stay relevant. Agile: you and your band are on a stage. You did have time f...

Industry Carelessness

Looking through my LinkedIn feed I see a horrible thing. I think we can call it "Industry Carelessness". I see lots of posts of this kind: Automating testing using XXXXium; Digital transformation using microservices architecture (please download our brochure); We will transform your business, here's how. And so on. This is very disturbing. It is like "we don't know what problem you have (if any), but we have a solution already!" Moreover, I worked for a company whose whole business model was like that: delivering what it had in place instead of what was necessary. I interviewed employees who did not know where we were but already knew where we should go and how we would get there. "Ok, we shall do this and that." "Hold on a second, don't you want to know what we already do and what results we get?" "Not necessary, the industry is going in that direction, so let's do things 'right'." And even worse, I wa...

Effective test automation

So what does it mean to have effective test automation? Let's say I can automate 10 000 tests a day. Impressive number, isn't it? Does it mean my test automation is effective? We can hardly tell without knowing other things. It turns out that test automation effectiveness has little to do with how many tests you can automate a day. To judge effectiveness we need to think about what value test automation adds. That leads me to the thought that "test automation" may really be the wrong term. Test automation is an activity, not a product. Even worse, the product that test automation creates (the value it produces) may be created by other means, too. What value does test automation create? Faster execution of 70 000 UI test scenarios? I don't think so. A nice test result report? Well, maybe, but not necessarily. My favourite one: "elimination of human error in testing". I will not even bother to comment. So what is the value it creates? My best guess nowadays would be that testing is a...

Musing about ethics and software development

There's something that I have never seen in the curriculum of IT degrees: professional ethics, and I think it is a huge miss. Ethics is taught to lawyers, MDs, teachers and students of lots of other degrees. Ethics tells us that there is something beyond our job responsibilities. Ethics reminds us that the one who is footing the bill may not be the final decision maker on everything. That if something is legal, it does not yet mean it is the right thing to do. Let me share a story about one of my previous projects. I was working for an IT services company and we were helping our client deliver a new version of their software. The peculiar thing was that if there were a bug in the Product, something horrible could happen (in the worst case, somebody could die). And I was the quality guy on the project. Our sponsor (the one who was footing the bill, and the ultimate decision maker) had his deadlines. He had already made a demo for the marketing people and wanted to ship the softwar...

Exploratory testing and bug hunting fun

Being a Developer In Test specialist, I do lots of coding, automating, meetings, refactoring, code reviewing and other related things these days. I do enjoy most of my daily activities, and I didn't think that being forced to do some manual testing could be anything interesting to me. It turned out, though, that I was the only QA-focused specialist on the team, and we had to provide quality feedback on a product that had had little (close to none) unit test coverage. Inspired by James Bach's and Michael Bolton's Rapid Software Testing methodology, I devised an exploratory test session plan for the application, distributed activities between team members and did my best to find any possible issues in the application we worked on. Overwhelmed as I was by regular work activities, I couldn't even imagine how much fun this bug hunt could be! I found quite a few issues, and I felt like House, M.D., Detective Columbo, or somebody of a similar sort. I was chasing a...