This guide introduces implementation strategies that improve the stability of automated tests for your application and simplify test automation.
Table of Contents
- Use Stable Locators
- Aim for Tests That Succeed Anytime, by Anyone
- Use "Wait" Commands
- Use "Conditional" Commands
- Use "Retry on Failure" Feature
- Create Tests Based on User-Centric Scenarios
1. Use Stable Locators
Locators are strings used during test execution to identify target UI elements, such as id, XPath, or CSS selectors. MagicPod automatically assigns locators to UI elements, and users can add/edit locators from the suggested options.
When the application under test is updated, changes to the UI may cause test failures if elements can no longer be located using the existing locators. To avoid this, it is important to use stable locators that are less affected by changes in the screen layout.
1-1. Unique IDs
MagicPod strongly recommends using unique IDs as locators. Assigning a unique ID to each UI element during development ensures accurate identification of elements during testing, significantly improving test stability.
See this page for more on how to assign unique IDs.
Although adding unique IDs to an existing application may involve some initial cost, doing so contributes greatly to long-term test reliability and should be seriously considered.
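MagicPod itself is no-code, but the value of unique IDs is easiest to see in driver terms. As a hedged illustration only (the URL and id value below are hypothetical), a Selenium-style lookup by ID keeps working even when the surrounding layout changes:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page

# By.ID ignores the element's position in the DOM, so redesigning
# the layout around the button does not break this lookup.
login_button = driver.find_element(By.ID, "login-button")
login_button.click()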
1-2. Locators Except for Unique IDs
If assigning unique IDs is difficult, choose locators that are less likely to change. For example, an <a> tag can often be uniquely identified by an attribute value or its inner text. A stable XPath expression like the following is recommended:
//xxx[@yyy='zzz']
This format combines the tag name (xxx), a specific attribute (yyy), and its value (zzz) to construct a stable locator. For example, //a[@title='Cart'] matches an <a> tag whose title attribute is 'Cart'.
1-3. Avoid Using AI Locators
MagicPod may display AI-generated locators as suggestions, but using them is not recommended. AI locators are unsupported in some commands and may compromise test stability and reproducibility.
Whenever possible, use explicit and stable locators like unique IDs or attribute-based XPath expressions.
2. Aim for Tests That Succeed Anytime, by Anyone
To be stable, tests must be independent of execution timing and environment. Here are common cases where tests fail because of leftover state in the environment:
- In an e-commerce site, purchasing the same product in each test run causes stock to run out, resulting in failure.
- Registering the same email address multiple times causes a "This email is already registered" error, leading to failure.
In such scenarios, running automated tests regularly becomes difficult. To achieve tests that succeed anytime, by anyone, the following measures are effective:
2-1. Start Tests in a Clean Environment
Resetting the database to a clean state before each test run is critical to ensure stable test results. For example:
- Reset stock levels before starting tests on an e-commerce site.
- Delete all users before running user registration tests.
Running tests under the same conditions each time improves reproducibility and avoids inconsistent results. To initialize the database, we recommend using dedicated web APIs that can be called from test commands.
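For example, if your backend exposed a dedicated reset endpoint (the URL and token below are hypothetical, not a MagicPod API), a pre-test step could call it like this:

import requests

# Hypothetical maintenance endpoint that restores the seed data.
RESET_URL = "https://api.example.com/test-support/reset"

def reset_test_database():
    response = requests.post(
        RESET_URL,
        headers={"Authorization": "Bearer <test-api-token>"},
        timeout=30,
    )
    response.raise_for_status()  # fail fast if the reset did not succeed

reset_test_database()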
2-2. Eliminate Dependencies Between Tests
Each test should be independent to ensure stability. If one test depends on another's results or data, a single failure can cause cascading failures, making it difficult to identify the root cause. Common examples include:
- Performing login operations in a separate test case
- Using data created in a previous test
- Relying on a fixed test execution order
To solve this, ensure each test case sets up its own prerequisites. MagicPod's Shared Step feature lets you easily insert common procedures (e.g., login or data reset) into each test with a single step. This allows tests to be run independently in any order, simplifying failure analysis.
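Outside MagicPod, the same idea corresponds to per-test setup. A minimal pytest sketch, assuming a hypothetical web application at BASE_URL, where each test logs in on its own instead of depending on a previous test:

import pytest
import requests

BASE_URL = "https://example.com"  # hypothetical application under test

@pytest.fixture
def logged_in_session():
    # Each test performs its own login rather than reusing state
    # from another test, so tests can run alone or in any order.
    session = requests.Session()
    session.post(f"{BASE_URL}/login",
                 data={"user": "test-user", "password": "password"})
    yield session
    session.post(f"{BASE_URL}/logout")  # clean up after the test

def test_view_order_history(logged_in_session):
    response = logged_in_session.get(f"{BASE_URL}/orders")
    assert response.status_code == 200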
2-3. Use Unique Value Generation
The Generate unique value command is useful when you need to create a different account for each test run. For example, if you use the same account repeatedly in tests that involve user registration, you may encounter issues like the following:
- The account has already been registered, resulting in an error such as “This email address is already in use.”
- Repeated logins using the same account within a short period trigger rate limits, causing the automated test to be blocked.
To avoid these issues, use the "Store unique value generated based on the current time" command.
This command automatically generates a different value for each test run, allowing you to use it as a user ID, email address, or other unique data—enabling the test to run with a different account each time.
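Conceptually, the command behaves like the following sketch, which derives a value from the current time (the email domain is only an example):

from datetime import datetime

def generate_unique_value(prefix="user"):
    # A timestamp down to microseconds is effectively unique per run.
    stamp = datetime.now().strftime("%Y%m%d%H%M%S%f")
    return f"{prefix}_{stamp}"

email = f"{generate_unique_value()}@example.com"
print(email)  # e.g. user_20240501123045123456@example.com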
2-4. Adjust Application Implementation for Tests
In tests involving connections with external devices such as IoT appliances, the connection step can often be a barrier to automation, and in some cases, manual intervention may be unavoidable. However, by partially adjusting the implementation of the application for testing purposes, it is possible to enable automation even in such scenarios.
For example, suppose a smartphone app for IoT appliances includes a step to “turn on the air conditioner.” Under normal circumstances, this would require communication with the actual appliance to verify its operation. However, if a test mode is implemented in the app that skips real communication and instead returns a flag indicating a successful connection, the test can be completed without the external device.
In this way, even without fully replicating the real environment, you can expand the scope of test automation by modifying the application to retain only the processing necessary for confirming behavior.
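As a rough sketch of this pattern (all names here are hypothetical, not part of any real app), the connection code might branch on a test-mode flag:

import os

# Hypothetical flag, e.g. set by a test build or a launch argument.
TEST_MODE = os.environ.get("APP_TEST_MODE") == "1"

def send_command_to_device(command):
    # The real implementation would talk to the appliance over the network.
    raise NotImplementedError("requires the physical device")

def turn_on_air_conditioner():
    if TEST_MODE:
        # Skip real device communication and report success so the
        # UI flow can be exercised without the appliance present.
        return {"connected": True, "power": "on"}
    return send_command_to_device("power_on")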
3. Use "Wait" Commands
MagicPod provides various Wait Commands to help improve test stability.
With the Wait command, you can explicitly specify conditions such as “Wait until screen is rendered” or “Wait until UI element is enabled” before proceeding to the next action. You can also use the "Wait for fixed seconds" command to pause for a specified number of seconds. For example, here are some common use cases:
- Stabilizing the "Assert there is no visual diff" command
  Issue: The page loads slowly, and the "Assert there is no visual diff" command is executed before rendering is complete, causing false positives.
  Solution: By inserting the “Wait until screen is rendered” command beforehand, the comparison is performed only after the page has fully rendered, improving test stability.
- Waiting for UI Element Activation
  Issue: Right after launching the app, the "Tap" command is executed while the button is still inactive, resulting in a failed test.
  Solution: Adding the “Wait until UI element is enabled” command ensures that the tap is only executed when the button becomes interactive, improving the test’s success rate.
Additionally, you can set the wait time for the entire test case through the Wait Policy setting. This allows you to adjust the wait time when timing issues occur. For more details, please refer to this page.
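The same idea in driver terms: an explicit wait polls for a condition instead of sleeping for a fixed time. A hedged Selenium sketch (the page and locator are hypothetical):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")  # hypothetical page

# Wait up to 10 seconds for the button to become clickable,
# rather than tapping it while it is still disabled.
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "submit-button"))
)
button.click()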
4. Use "Conditional" Commands
MagicPod provides various Conditional Commands that branch the flow of a test based on specific conditions. By using these commands, you can create flexible tests tailored to different situations.
For example, the Conditional Command is effective in the following cases:
- Handling the login dialog that appears only on the first access to Google
  → Perform “Close” or “Skip” actions only when the login dialog is displayed.
- Handling permission dialogs shown at app launch
  → If the dialog appears, tap “Allow”; if not, skip the step.
- Closing pop-up ads that appear irregularly
  → Tap the “Close” button only when an ad is displayed.
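In code form, a conditional branch is an "act only if present" check. A minimal Selenium sketch for the pop-up ad case (the locator is hypothetical):

from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def close_ad_if_present(driver):
    # Tap the close button only when the ad actually appeared;
    # otherwise continue with the main scenario unchanged.
    try:
        driver.find_element(By.ID, "ad-close-button").click()
    except NoSuchElementException:
        pass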
5. Use "Retry on Failure" Feature
Temporary test failures caused by environmental factors—such as brief network disconnections or server issues—can be difficult to completely eliminate. To address these cases, we recommend using the "Retry on Failure" feature in MagicPod.
You can configure this feature in "Batch run settings" > "Advanced settings" > "Retry for failed tests". When enabled, tests that fail due to temporary issues will be automatically re-executed.
We recommend setting the number of retries to 1 or 2. Setting a high retry count is unlikely to significantly improve success rates and may cause persistent issues to go unnoticed.
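The behavior amounts to re-running a failed test a bounded number of times. A minimal sketch of such a loop (test_func stands in for any test entry point):

def run_with_retry(test_func, max_retries=2):
    # Retry only a small, fixed number of times; a test that keeps
    # failing is signaling a real problem, not transient flakiness.
    for attempt in range(max_retries + 1):
        try:
            test_func()
            return
        except Exception:
            if attempt == max_retries:
                raise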
6. Create Tests Based on User-Centric Scenarios
E2E testing with MagicPod is designed to verify whether the entire system behaves as expected by simulating the actual user flow. Therefore, its purpose is different from that of unit or integration testing.
We recommend focusing your E2E tests on “Happy Path” user stories—that is, scenarios where the user proceeds through the application under normal conditions. Including too many detailed UI checks or boundary value tests in your E2E cases can lead to overly complex test cases, which may reduce both maintainability and stability. These finer-grained validations are better handled by unit tests or integration tests.
Similarly, negative test cases (e.g., verifying error messages when invalid input is entered) are ideally covered in unit or integration testing. Running such negative cases frequently in E2E tests can increase failure points and compromise the overall stability of your test suite.