This section describes how to maintain a test when it fails and the cause is unknown.
Table of contents
- Re-execute the failed test cases
- Fix a test case
- Tips: Recommended number of steps for easy maintenance
1. Re-execute the failed test cases
1-1. When a single run fails
If a test case fails in the test edit screen, the test case itself needs to be fixed. Go to 2. Fix a test case.
For local execution, check the version of MagicPodDesktop or magicpod-command-line. (For more details, click here.)
1-2. When a batch run fails
If a batch run is executed and some test cases fail, re-execute only the failed test cases, either as single runs or as another batch run.
For detailed instructions on how to batch run specific test cases, please click here.
1-2-1. If the re-run fails
- The test case may have failed because it depends on another test case (one that succeeded during the batch run but was not executed this time). Re-execute the tests, taking the dependencies into account.
- The test case may have failed due to changes in the product under test or the way the locator was specified. Fix the test case.
1-2-2. If the re-run succeeds
- The test case may be unstable, so it is recommended to fix it so that it runs stably. Note that even if the test result status is a success, another unintended UI element may have been manipulated.
2. Fix a test case
2-1. Check the failure message
The cause is explained in the error message, so fix it accordingly.
If the error message is not clear, a solution may be found by searching for that failure message on the help page.
Even if the status of the test result is a success, another unintended UI element may have been manipulated.
2-2. Check the test result capture
Check whether the screen in the result capture of the failed step is the same screen as when the test was created.
2-2-1. When it is a different screen
- The screen differs from when the test was created because you are already logged in, even though the test expects a logged-out state.
- Change the browser/mobile app launch options to remove the login information.
- Browser: Open browser / Mobile app: Launch app
- A dialog that was not present when the test was created, such as a campaign announcement, is displayed on the screen.
- Use the conditional branch command to add a step that closes the dialog's UI element if it exists.
- The screen has been scrolled to a different position.
- When the scroll position differs from when the test was created, a UI element may not be found, or another UI element may be manipulated.
- When a UI element is not found, add a Continue scroll / Continue swipe command to bring the element into view before operating on it.
- When another UI element is manipulated, use a locator value that is unique to the intended UI element, such as xpath=//li[text()='list1'] instead of xpath=//li[1]. For example, if a screen contains multiple li elements and xpath=//li[1] is specified, a shifted scroll position can make a different li element become the first one currently on screen, so an unintended element is manipulated. This happens because elements that are off-screen may not be included in the UI tree.
- The screen shown is from the step before the failed step.
- The previous step may actually be the one that failed, so work backwards, checking step by step that each screen matches the one from when the test was created. Then compare the UI tree at creation and at failure and check the details.
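The scroll-position pitfall above can be reproduced outside MagicPod. The sketch below uses Python's standard xml.etree.ElementTree (the element names and texts are invented for illustration) to show how a positional locator matches a different element when part of the tree is missing, while a text-based locator either finds the intended element or nothing:

```python
import xml.etree.ElementTree as ET

# Full UI tree as it looked when the test was created (illustrative).
full = ET.fromstring("<ul><li>list1</li><li>list2</li><li>list3</li></ul>")
# UI tree after scrolling: 'list1' is off-screen and missing from the tree.
scrolled = ET.fromstring("<ul><li>list2</li><li>list3</li></ul>")

# Positional locator //li[1]: which element it matches depends on the tree.
print(full.find("li[1]").text)      # list1 (intended)
print(scrolled.find("li[1]").text)  # list2 (unintended element)

# Text-based locator //li[text()='list1'] (ElementTree syntax: li[.='list1']):
print(full.find("li[.='list1']").text)  # list1 (intended)
print(scrolled.find("li[.='list1']"))   # None - not found, but no wrong match
```

A text-based locator trades a silent wrong match for an explicit "not found" failure, which is much easier to diagnose.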
2-2-2. When it is the same screen
- An ai locator is specified
- Change to another suggested locator, add a new locator, or temporarily hide that UI element and switch to another UI element. ai locators are much less stable than other locators and should not be used unless there is no alternative or you test only on specific devices. If the only suggested locator is an ai locator and you do not know how to specify one, you can ask via Inquiry about this UI.
- The operation was attempted before the UI was ready, causing an intermittent failure.
- Insert a command such as Wait until the UI element is displayed/exists/matches, or Wait for a fixed number of seconds, before the failed step. Normally the test automatically waits until the screen has finished loading, but a UI element can occasionally be missed because loading was incomplete.
- Reference: Wait command
- Otherwise, compare the UI tree between creation and failure for more information.
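The wait commands above boil down to polling a condition until it holds or a timeout expires. As a rough sketch of that idea in plain Python (wait_until and the simulated condition are illustrative helpers, not MagicPod APIs):

```python
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns a truthy value, or raise on timeout.

    This mirrors the idea behind 'wait until the UI element is
    displayed/exists/matches': keep checking instead of failing on the
    first miss.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(interval)

# Simulate a UI element that only appears after a short loading delay.
appears_at = time.monotonic() + 0.2
element_found = wait_until(lambda: time.monotonic() >= appears_at,
                           timeout=2.0, interval=0.05)
print(element_found)  # True
```

A fixed-seconds wait is the degenerate case where the condition is simply "enough time has passed"; the conditional waits are preferable because they finish as soon as the element is ready.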
2-2-3. When a 400/500 error code is displayed on the screen
If an HTTP response status error code is displayed, the server under test may have been stopped for maintenance or other reasons at the time of test execution, or there may be a problem with the application under test. Please check with the developer.
- 400-series error codes: 403 Forbidden, 404 Not Found
- 500-series error codes: 500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable
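As a reminder of how the two status classes split for triage, the snippet below (a generic illustration using Python's standard http module, not part of MagicPod) classifies a response code:

```python
from http import HTTPStatus

def describe_status(code: int) -> str:
    """Return a triage hint for an HTTP response status code."""
    phrase = HTTPStatus(code).phrase  # e.g. 'Forbidden', 'Service Unavailable'
    if 400 <= code < 500:
        return f"{code} {phrase}: client-side error - check the request or test data"
    if 500 <= code < 600:
        return f"{code} {phrase}: server-side error - check the server under test"
    return f"{code} {phrase}: not an error status"

print(describe_status(403))  # 403 Forbidden: client-side error ...
print(describe_status(503))  # 503 Service Unavailable: server-side error ...
```

4xx codes usually point at the request the test is making (wrong URL, expired session, missing permissions); 5xx codes usually point at the server under test itself.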
2-3. Check a UI tree
Check using one of the following methods.
2-3-1. Check using "Show failure UI tree"
Click Show failure UI tree under the failure message to display the UI tree at failure in a dialog box.
Click on the UI element you want to operate on the capture on the left side of the dialog screen, and the corresponding element will be highlighted in blue in the UI tree on the right side. Check that there are no differences from the currently specified locator.
2-3-2. Register the result capture of the failed step as UI and check it
Move the cursor over the result capture of the failed step and click Register this as UI.
Then go to the test edit screen and select the UI named "Failure screenshot of test case run #**" from the UI list.
Once you have selected a UI, click on the relevant UI element on the UI and check that it does not differ from the locator you have specified.
2-3-3. Download and check the XML file
Open the inquiry screen from Inquiry about the test failure and download the failure_ui_tree.xml file.
Open the file in Visual Studio Code or similar, format the XML if necessary, and check that there are no differences with the locator you have specified.
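If the downloaded file is minified, pretty-printing it first makes the comparison with your locator much easier. A minimal sketch with Python's standard library (the sample string stands in for the contents of the downloaded failure_ui_tree.xml):

```python
import xml.dom.minidom

# Stand-in for the contents of the downloaded failure_ui_tree.xml.
raw = "<hierarchy><node class='Button' text='OK'/><node class='EditText'/></hierarchy>"

# Re-indent the XML so each node sits on its own line.
pretty = xml.dom.minidom.parseString(raw).toprettyxml(indent="  ")
print(pretty)
```

Visual Studio Code's built-in Format Document command achieves the same result without leaving the editor.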
2-3-4. (Browser test only) Check using the Developer Tools
Open the target page in a normal browser and start the Developer Tools from the Inspect item in the right-click menu. Click the element-picker arrow in the top-left corner of the Developer Tools panel, then click the relevant UI element. The element will be highlighted in the HTML, so check that there are no differences from the locator you have specified.
Right-click the highlighted element and select Copy > Copy XPath to copy the XPath of that element so you can refer to it.
2-4. Fix the locator
If the actual UI element no longer matches the specified locator, fix the locator or change the UI to a new one.
2-4-1. Include values that change with each execution in the locator
For example, if the value in the locator changes with each execution, as shown below, the locator differs from the one captured when the test was created, so the element cannot be found and the test fails. In that case, change to another suggested locator, add a new locator, or temporarily hide that UI element and switch to another UI element.
- Based on Amount/DateTime
  e.g.) xpath=//output[text()='$70.00']
- Based on value, id, etc. that may change
  e.g.) xpath=//input[@value='mp62130']
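The effect of a per-run value baked into a locator can be sketched with Python's standard xml.etree.ElementTree (element names, classes, and values are invented for illustration): a locator that captured a run-specific value stops matching, while one based on a stable attribute still works.

```python
import xml.etree.ElementTree as ET

# UI tree on a later run: the amount and generated id differ from creation time.
tree = ET.fromstring(
    "<form>"
    "<output class='total'>$82.50</output>"
    "<input id='user-name' value='mp99417'/>"
    "</form>"
)

# Locators that baked in run-specific values no longer match anything:
print(tree.find("output[.='$70.00']"))       # None
print(tree.find("input[@value='mp62130']"))  # None

# Locators based on stable attributes keep matching:
print(tree.find("output[@class='total']").text)       # $82.50
print(tree.find("input[@id='user-name']") is not None)  # True
```

When choosing a replacement locator, prefer attributes that the developers control deliberately (id, class, accessibility label) over anything derived from data.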
2-4-2. The specified locator matches multiple UI elements
- The number of elements on the screen increases with each execution
- If previously registered data remains, the number of elements on the screen increases and an unintended UI element is manipulated. For example, in a test that registers data and then checks that it appears in a list, leftover data from earlier runs can prevent the locator from selecting the data registered this run.
- If the data on the screen can be reordered, sort by newest first and select the first element, as in xpath=//div[1].
- Alternatively, use the Store unique value generated based on the current time command to create a different value for each run and register the data with that string. Then use that variable name in the locator of the element to be checked.
- Changes have been made to the screen.
- For example, when the test was created there was only one button element on the screen, so xpath=//button was sufficient as the locator; if the screen later changes to contain two button elements, the first one is always manipulated. In that case, use a locator value that is unique to the intended UI element, such as xpath=//button[@class='back_top'].
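The "Store unique value generated based on the current time" approach can be sketched as follows (unique_value is an illustrative helper, not the MagicPod command itself): register data under a run-unique name, then use the same name in the locator, so leftover rows from earlier runs cannot match.

```python
import time
import xml.etree.ElementTree as ET

def unique_value(prefix="mp"):
    """Build a run-unique string from the current time in milliseconds."""
    return f"{prefix}-{int(time.time() * 1000)}"

name = unique_value()

# Simulated list screen: a leftover row from a previous run plus this run's row.
tree = ET.fromstring(f"<ul><li>mp-1700000000000</li><li>{name}</li></ul>")

# The text-based locator selects exactly the row registered during this run.
print(tree.find(f"li[.='{name}']").text)  # prints the run-unique name
```

Because the locator is built from the same variable used at registration time, it is immune to both leftover data and reordering.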
2-5. Change the UI to a new one
If major changes have been made to the screen, rather than modifying the locator, change the UI and UI elements to new ones by either of the following methods.
2-5-1. Add the failed UI as a new UI to use
Hover over the result capture of the failed step and click Register this as UI.
Now that the UI has been added, go to the test edit screen and select the UI named "Failure screenshot of test case run #**" from the list of UIs.
After selecting the UI, re-select the relevant UI element.
2-5-2. Re-upload and overwrite the UI
Open the test edit screen, launch a device, and display the relevant screen.
Move the cursor over the UI currently in use from the UI list and click Reupload.
When you click OK in the confirmation dialog, the UI is overwritten and the UI elements selected for the step are automatically overwritten with the new UI elements.
Reference: In case there is a correction to the tested screen
If you do not know the cause of the test failure, please inquire by clicking Inquiry about the test failure or Inquiry about the test result.
Reference: Inquire about test failures
Tips: Recommended number of steps for easy maintenance
The recommended number of steps per test case is fewer than 200. (In browser testing, if the website under test has many UI elements, we recommend fewer than 300.)
Too many steps make it difficult to isolate errors and can also prevent subsequent steps from being executed after a failure.