The perceived quality of software may have dropped, but testing is not the answer.
"Modern software is of a lower quality than it was in the past."
Maybe. The perceived quality of software may have decreased, but I don't think that "more testing" is the proper solution.
More testing does not mean more quality
More testing may find more issues, but not necessarily. Simply spending more time on the activity does not mean the results will be better. And somebody needs to fix the bugs and test the bug fixes.
So we can't say that more testing means more quality, but it certainly means bigger costs.
Software quality != No bugs
Bugs matter, but they are hardly the only factor in measuring quality. A whole bunch of other things matter too: UX, number of features, documentation, price, delivery model, cute logos...
Spending more time on testing may mean spending less time on these things.
Adequate quality level
Obviously, one can't spend 100 years testing every possible case. I think that each product has some adequate level of quality.
The adequate quality level is different for open-source, free, and expensive-as-hell products. Test resources and budgets are (and should be) heavily influenced by the context. If bugs that drive you crazy are not getting fixed, that may be a conscious decision, not a lack of testing.
Software is getting cheaper and cheaper; prices have dropped. Most of the software we use now is either cheap or free. It would be naive to expect the average quality level to stay the same.
Instead of a conclusion
I would love to know your answers to these questions:
- Would we prefer software with no bugs, which doesn't have the features we need?
- Or reasonably buggy and cheap/free software which has the features we need?
- Or maybe software with no bugs, but which we cannot afford?
At the end of the day - it is all about money.