We all know the feeling: there is this one specific customer you really want to win, even if nobody admits it. Sometimes, to convince such a customer, we offer a free trial test case to demonstrate the power of TestResults.io. That is exactly what we did in this case, but the results were more than unexpected…
For a provider of software solutions for public administration, we automated a short test case to verify different versions of their webpage. In total, three different versions were online.
The most interesting part for the customer (or so he thought) was to find out how TestResults.io would cope with a complete UI redesign (bigger symbols, different colors, different positions, …). Thanks to our model-based approach: no problem at all. The customer could select the target version of his webpage when starting an execution, and the test case ran correctly using the appropriate model for the selected version.
Nothing unexpected up to this point. Let's get to the unexpected part: after the automated test case was executed for the first time against the new, redesigned version of the webpage, everybody was surprised that it returned a failed result, despite being executed with the new model. Thanks to the detailed logging and the screenshots of the execution, the problem was discovered instantly: the decimal separator in the application had inadvertently changed from comma (e.g. 19,30) to dot (e.g. 19.30). This unintentional change came with the update and was not yet known to the customer. It would have broken every system that uses this output data as input. Ok, check: easily discovered with TestResults.io.
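To see why such a small change is so dangerous, consider a hypothetical downstream consumer that parses the old German-style format, where the dot is the thousands separator and the comma the decimal separator. This is a minimal sketch, not the customer's actual code:

```typescript
// Hypothetical downstream parser for German-formatted numbers:
// "." is the thousands separator, "," the decimal separator.
function parseGermanNumber(raw: string): number {
  // Strip thousands separators, then turn the decimal comma into a dot.
  return Number(raw.replace(/\./g, "").replace(",", "."));
}

console.log(parseGermanNumber("19,30"));    // 19.3    -- old output, parsed correctly
console.log(parseGermanNumber("1.234,56")); // 1234.56 -- old output, parsed correctly
console.log(parseGermanNumber("19.30"));    // 1930    -- new output, silently wrong
```

Note that nothing throws here: the new dot is swallowed as a supposed thousands separator, so the value is silently off by a factor of 100, exactly the kind of error that propagates unnoticed into every connected system.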
But it gets even better…
For the same test case, TestResults.io found an even stranger problem, one that was not even the fault of the webpage's developers. After the first executions of the test case were already forgotten, a Chrome update was released. Naturally, the tests are also executed against new browser versions as soon as they become available. Suddenly, the test case failed. The screenshots and the logging both looked fine, so we tried to exercise the failing functionality manually, which did not work either. It turned out that, due to the Chrome update, their custom implementation of a filtered textbox no longer received focus on a click, so it was impossible to enter any filter text, not even manually.
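For illustration, here is what such a focus regression check can look like in a scripted browser test. This sketch uses Playwright rather than TestResults.io's own engine, and the URL and the `#filter-input` selector are placeholders, not the customer's real page:

```typescript
// A minimal sketch of a focus regression check, written with Playwright.
// The URL and the '#filter-input' selector are hypothetical placeholders.
import { test, expect } from '@playwright/test';

test('filter textbox receives focus on click', async ({ page }) => {
  await page.goto('https://example.org/records'); // placeholder URL

  const filter = page.locator('#filter-input');
  await filter.click();

  // After a browser update like the one described above, an assertion
  // like this fails: the click no longer moves focus into the textbox.
  await expect(filter).toBeFocused();

  // Only type once we know the textbox actually holds focus.
  await filter.pressSequentially('Meier');
  await expect(filter).toHaveValue('Meier');
});
```

Asserting focus explicitly, instead of just typing and hoping, is what separates "the test failed somewhere" from "the click no longer moves focus".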
Fancy, isn’t it? And all of that was found with a test case we did for free.
With today's dependency of software on third-party services and integrations in browsers or other systems, these kinds of problems are becoming more and more common. This requires a shift in our testing activities: away from mere verification of a release and towards monitoring the application's behavior in the environment where it will actually be used. Or, even better: do both with the same set of test cases. Which other solution, do you think, would have found these bugs?
TestResults.io. Automated Testing. Done.