Failure and guilt
The engineering method failure mode and effects analysis (FMEA) can be usefully applied to software development, and helps take a risk-based approach to software testing.
Over the past few years I have been working on projects alongside manufacturing engineers. This has involved collaborative work with component suppliers and with internal engineering processes.
One of the things from manufacturing engineering that I have found most valuable is the concept of failure mode and effects analysis (FMEA). This is a way of analysing what might go wrong before it happens so that you can do something about it, like change the design.
There is a lot written about FMEA, including its application to software. If you are not familiar with it, it is worth looking into.
I am certainly not an expert in FMEA, but from my simplistic understanding it analyses potential problems by looking at three factors: how probable the problem is; how severe the consequences of the problem would be; and how likely it is that the problem would be detected before things get out of hand.
I really like this because it gives me a risk-based approach to understanding what testing I need to apply to different parts of a solution.
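FMEA practitioners commonly combine those three factors into a single Risk Priority Number (RPN) by rating each on a 1 to 10 scale and multiplying them together. A minimal sketch of that scoring (the failure modes and ratings below are illustrative, not from any real project):

```python
def rpn(occurrence: int, severity: int, detection: int) -> int:
    """Risk Priority Number: occurrence x severity x detection.

    Each rating runs 1 (best) to 10 (worst). For detection, 10 means
    the failure is very unlikely to be caught before it causes harm.
    """
    for rating in (occurrence, severity, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must be between 1 and 10")
    return occurrence * severity * detection


# Illustrative failure modes: (description, occurrence, severity, detection)
failure_modes = [
    ("engine mis-parses a data definition", 4, 8, 6),
    ("script builds a wrong display",       5, 2, 1),
    ("error in overall solution structure", 3, 2, 1),
]

# Rank by RPN, highest risk first: these are the areas to test hardest.
for name, occ, sev, det in sorted(failure_modes,
                                  key=lambda m: -rpn(*m[1:])):
    print(f"RPN {rpn(occ, sev, det):3d}  {name}")
```

The ranking makes the risk-based argument concrete: a failure mode that is severe and hard to detect scores far higher than one that is visible on screen the moment it happens, even if the latter occurs more often.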
Because of our extensible data-driven development (XDDD) approach, our solutions have two parts: a set of data definitions, including scripts, that define a solution; and an engine that interprets the definition to run the solution.
The XDDD engine is like any other conventional software. We build automated regression tests for everything, and retest after every change, which is of course what we "should" do.
It is harder to know how to approach testing of the data definitions, including the scripts. These make use of code that has been fully tested as part of the engine. They run with the same restrictions as online users and cannot break data integrity and security rules.
We can and do apply automated regression testing to scripts that apply complex logic, and that works well.
But there is a lot that we don't test that way. This is particularly true for the overall solution structure and for scripts that build customised displays. Although I carefully check these and make sure they work, I have always felt a bit guilty about not testing this "properly".
FMEA has helped me understand this. There is a chance of errors in the structure or scripts. However, because these are limited in what they can do, the severity of errors is low. And because errors will be shown on the screen or will obviously stop the solution working, the probability of detection is very high. If it looks like it is working, it is working, and not much else can go wrong.
This is similar to other data-driven systems. For example, if you were creating a spreadsheet in online Excel, would you create an automated regression test to make sure it was correct? Of course not; you would just check it, and you would then be confident it would work.
FMEA has helped me feel less guilty about our approach to testing. XDDD insulates us from severe and hidden errors, and it makes sense to take a less formal and less time-consuming approach to much of the testing.