Dynamic loading of data complicates test automation
You have probably experienced a similar situation many times, especially on a slow internet connection. You search your travel guide app for top restaurants near you. Once the search is applied, you see the first entries. But the list is not finished: it is still being updated, and new elements keep appearing. Then the layout is rearranged again while new data is loaded from the backend. What is annoying for users is even more challenging for automated test cases.
Imagine your automation verifies that an entry is displayed in this list and then interacts with it (e.g., opens the details view). No matter which technology we use to identify the objects on the system under test (SUT), we need to wait until the element is loaded and displayed on the screen before we can interact with it. How long should we wait? How do we know the search is complete?
It gets even worse: there are situations where a UI element is not yet fully active even though it is already visible on the UI and enabled in the DOM. So we often see automation engineers resort to a hardcoded, explicit wait or sleep statement, even though they are fully aware that this is bad practice. These statements wait for a defined period, no matter what. This makes the test case slow or, even worse, flaky: depending on changing circumstances, the wait may not be long enough, and the test case aborts in some runs but not in others.
Because applications are constantly changing
Virtually every test automation framework or tool has a way to implicitly wait for an element before interacting with it. This is typically done with a retry mechanism that regularly checks whether the element can be found and continues with the interaction once it is. Only if the element cannot be found after a configurable timeout is an exception raised. This ensures that a test case can be executed even when certain elements load slowly, but still does not wait longer than needed between interactions.
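This implicit-wait mechanism can be sketched in a few lines of Python. The following is a minimal, framework-agnostic illustration, not any specific tool’s implementation; `find_element` is a hypothetical callable standing in for whatever lookup mechanism your framework provides:

```python
import time

def wait_for_element(find_element, timeout=10.0, poll_interval=0.25):
    """Poll for an element until it is found or the timeout expires.

    `find_element` is a hypothetical callable that returns the element
    if it is currently present on the screen, or None otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        element = find_element()
        if element is not None:
            return element  # found: continue with the interaction immediately
        time.sleep(poll_interval)  # not found yet: retry after a short pause
    raise TimeoutError(f"element not found within {timeout} seconds")
```

The key property is that the wait ends as soon as the element appears; the timeout is only an upper bound, so fast-loading screens are not slowed down.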
As I described in the introduction – and I am sure you have already experienced it yourself if you have worked on a more complex automation project – this simple mechanism does not solve all problems. Particularly when identifying elements visually, this approach often causes problems when elements are animated, i.e., when they are “sliding” into their final position. It can happen that the element is found on the screen at the very beginning of the animation, but by the time the framework clicks where the element was found, it has already moved away, and the click goes nowhere.
First requirement – You have to verify and retry everything
An effective way to fix this kind of problem is to retry an interaction if it was not successful. After all, this is what a human user would do when they notice that nothing happened after a click. As simple as it sounds, this is not easy to implement.
First, you need a way to decide whether an interaction was successful or not. You also need a generic way to automate these retries; you certainly do not want to write out the full retry logic repeatedly. With TestResults.io, verification and retry logic is built directly into the structure of our Base Model. Read more about this in our blog about “Codified Expertise”.
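Such a generic verify-and-retry wrapper could look like the sketch below. The `action` and `verify` callables are hypothetical placeholders (a click and a success check, for example), not TestResults.io APIs:

```python
import time

def interact_with_retry(action, verify, attempts=3, delay=0.5):
    """Perform `action`, then check `verify`; retry if verification fails.

    `action` and `verify` are hypothetical callables: `action` performs
    the interaction (e.g. a click), `verify` returns True once the SUT
    shows the expected reaction (e.g. a details view opened).
    """
    for attempt in range(1, attempts + 1):
        action()
        if verify():
            return  # interaction confirmed, no further retries needed
        if attempt < attempts:
            time.sleep(delay)  # give the SUT time before retrying
    raise RuntimeError(f"interaction not verified after {attempts} attempts")
```

Because the retry logic lives in one place, every interaction in the test suite can be wrapped the same way instead of duplicating the loop in each test case.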
Advanced Technique – Implement intelligent screen monitoring
Certainly, it would be easier to simply wait until a loading animation (or something similar) has completed before interacting with the element, rather than retrying the interaction after a failure. TestResults.io applies intelligent screen monitoring to make exactly this possible.
The test case recognizes when an animation or layout change is complete. With this technique, test case execution waits until the screen is stable, i.e., there are no more changes to the screen, before interacting with an element. This ensures that any loading animation or layout change has finished before the interaction.
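As an illustration of the idea (not the actual TestResults.io implementation), screen stability can be approximated by polling screenshots until consecutive captures stay identical for a while. Here `capture_screen` is a hypothetical function returning a comparable snapshot, e.g. raw pixel bytes:

```python
import time

def wait_for_stable_screen(capture_screen, stable_for=1.0, timeout=15.0,
                           poll_interval=0.2):
    """Wait until the screen has not changed for `stable_for` seconds.

    `capture_screen` is a hypothetical callable returning a comparable
    snapshot of the screen. The screen is considered stable once
    consecutive captures stay identical for `stable_for` seconds.
    """
    deadline = time.monotonic() + timeout
    last = capture_screen()
    stable_since = time.monotonic()
    while time.monotonic() < deadline:
        time.sleep(poll_interval)
        current = capture_screen()
        if current != last:
            last = current
            stable_since = time.monotonic()  # change detected: reset the timer
        elif time.monotonic() - stable_since >= stable_for:
            return  # no changes for `stable_for` seconds: screen is stable
    raise TimeoutError("screen did not become stable within the timeout")
```

A production system would compare screen regions more cleverly (e.g. ignoring a blinking cursor), but the principle is the same: detect when changes stop, then interact.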
In other words, the same technology that detects changes can also actively wait for a change. This is extremely useful to verify that an interaction has actually been processed by the SUT before continuing with the next one – even when there is no specific element we could wait for to confirm the success of the first interaction.
An example is the selection of a text entry field, where we need to wait until the keyboard focus has changed to the selected field before entering the text. We do this by waiting for an update of the screen, which is triggered when the highlight of the text field changes.
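This focus-change wait can be sketched in the same style as the stability check above. Again, `click_field`, `type_text`, and `capture_screen` are hypothetical placeholders for your framework’s primitives, not real APIs:

```python
import time

def wait_for_screen_change(capture_screen, baseline, timeout=5.0,
                           poll_interval=0.1):
    """Wait until the screen differs from `baseline` (e.g. a focus highlight).

    `capture_screen` is a hypothetical snapshot callable; `baseline` is a
    snapshot taken just before the interaction.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if capture_screen() != baseline:
            return  # the screen updated, e.g. the text field got focus
        time.sleep(poll_interval)
    raise TimeoutError("no screen change detected within the timeout")

def enter_text(click_field, type_text, capture_screen, text):
    """Click a text field, wait for the focus highlight, then type."""
    baseline = capture_screen()                        # snapshot before the click
    click_field()                                      # select the text entry field
    wait_for_screen_change(capture_screen, baseline)   # wait for the highlight
    type_text(text)                                    # field has focus: safe to type
```

The point is the ordering: the snapshot is taken before the click, so any visual reaction to the click – such as the focus highlight appearing – ends the wait.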
What you are looking for is a compromise between execution speed and stability
In test automation there is often a balance to strike between execution speed and stability. Let’s say we simply waited 20 seconds between interactions. We would not have these problems of interacting with elements before they are ready, and the test cases would run much more stably.
But of course, this would make execution times impractically long. At the other extreme, we could execute the test cases as fast as possible, with essentially no delay between interactions. But this leads to test failures and flaky test cases – in other words, they would be useless. So there is a compromise between these two properties that we need to find.
… that is automatically optimized by the test system you use
TestResults.io combines intelligent screen monitoring with implicit verification of every interaction. This significantly increases the stability of automated test cases while having only a minimal impact on execution speed.
For Interactive Visual Testing, we need a way to deal with modern applications that dynamically display data and elements to the user and often include animations. The same applies to Selenium and other ID-based testing systems.
The classical “WaitForElement” is not enough in many situations. Instead, we need a well-structured test automation solution: one that allows for generic retries in case of an interaction failure, combined with modern algorithms such as intelligent screen monitoring. With these, we can overcome this hurdle and achieve stable and fast automation.
With this blog we are coming to the end of our series on Interactive Visual Testing. Our upcoming blog will give you a short overview of the topics we have covered within Interactive Visual Testing.