Understanding Parallel Testing in Test Automation (Part 1)

Parallel testing is an automated testing approach in which developers and testers launch multiple tests against different real-device and browser configurations simultaneously. The goal of parallel testing is to overcome the constraint of time by distributing tests across available resources.

For example, if 20 test cases take a total of 100 minutes to complete, then 10 parallel executions running 2 test cases each would bring the total testing time down to 10 minutes. Ideally, with sufficient resources, say 20 real mobile devices and real browsers executing all 20 test cases simultaneously, you can shrink the runtime to 5 minutes.
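
To make the arithmetic concrete, here is a minimal, purely illustrative sketch in Python that distributes 20 dummy “tests” across a worker pool and reports the wall-clock time for 1, 10 and 20 workers. The timings are scaled down from minutes to seconds, and every name in it is hypothetical.

    import time
    from concurrent.futures import ThreadPoolExecutor

    TEST_COUNT = 20
    TEST_DURATION = 0.5  # stands in for a 5-minute test case

    def run_test(test_id: int) -> int:
        time.sleep(TEST_DURATION)  # pretend to execute one test case
        return test_id

    for workers in (1, 10, 20):  # sequential, 10-way and 20-way parallelism
        start = time.time()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(run_test, range(TEST_COUNT)))
        print(f"{workers:>2} worker(s): {time.time() - start:.1f}s")

Run as a plain script, it shows the wall-clock time dropping roughly in proportion to the number of workers, which is the whole promise of parallel testing.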


Benefits of Parallel Testing:

Speed: Sequential testing is time-consuming, while parallel testing lets you divide the invested time by the number of environments. To test your application against ten devices, you only need to write ONE script and run it against all your target devices at once, cutting your testing time roughly tenfold.
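
As a hedged sketch of the “one script, many devices” idea, the pytest example below parameterizes a single test over a list of target devices; a parallel runner such as pytest-xdist (for example, pytest -n 10) can then execute the resulting test instances concurrently. The device names and the test body are placeholders, not a real device farm integration.

    import pytest

    DEVICES = [
        "iPhone 14", "Pixel 7", "Galaxy S23", "iPad Air", "OnePlus 11",
        "Moto G", "iPhone SE", "Pixel Tablet", "Galaxy A54", "Xperia 10",
    ]

    @pytest.mark.parametrize("device", DEVICES)
    def test_login_flow(device):
        # A real suite would drive the app on `device` through a device cloud;
        # here we only show that one script covers every target device.
        assert device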

Cost-Efficiency: Building, maintaining, and keeping your own test environment up to date can burn a hole in your pocket. With parallel testing in the cloud, maintenance is no longer a headache: you lease the testing environment you need, always up to date. Plus, cloud-based testing grids allow you to run tests at high concurrency, making the cost per test significantly lower.

Better Coverage: It’s always a good idea to run your application through as many platform-device-browser combinations as possible so that no bug slips through. Parallel testing takes your test coverage to the next level, giving you a significant ROI boost.

Optimization of Your CI/CD Processes: Parallel testing is the best friend of continuous integration and delivery. By testing in parallel, you can run tests as soon as developers submit new code updates throughout the entire SDLC. Timely reporting and quick feedback in parallel testing will also facilitate better communication between various departments.

Improvement of Testing Practices: Parallel testing improves the QA routine in your company. The reason is crystal clear: by testing at high speed, you can test more. This gives your QA team a chance to improve their testing practices and pinpoint bugs faster.

Read Part 2: Tips for Successful Parallel Testing

Source

Tips for Successful Parallel Testing (Part 2)

Read Part 1: What is Parallel Testing in Automation?

Here are some tips for successful parallel testing:

Tip 1: Test in a Cloud-Based Environment

Implementing a parallel testing strategy with in-house resources alone is one of the most common mistakes. Building and maintaining your own infrastructure is not efficient: it is not just time- and cost-consuming, it also keeps you from making the most of parallel testing, where the ability to test at high concurrency is the major advantage. Keeping your testing environment up to date requires a lot of resources as well. To this end, it’s a good idea to turn to cloud-based services that allow you to access the necessary devices at any time.
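
For illustration, here is a minimal sketch of pointing a Selenium test at a remote, cloud-hosted grid instead of local infrastructure. The hub URL, environment variable and capability values are placeholders; every vendor documents its own endpoint and capability names.

    import os
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.set_capability("browserVersion", "latest")
    options.set_capability("platformName", "Windows 11")  # vendor-specific value

    # GRID_URL would point at your provider's hub; the default below is fictional.
    driver = webdriver.Remote(
        command_executor=os.environ.get("GRID_URL", "https://hub.example.com/wd/hub"),
        options=options,
    )
    try:
        driver.get("https://example.com")
        assert "Example" in driver.title
    finally:
        driver.quit()

Because the browser runs on the provider’s side, scaling to dozens of concurrent sessions is a matter of launching more such drivers, not of buying more machines.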

Tip 2: Avoid Dependencies Between Tests

Dependencies between different test cases are a primary reason why transitioning to parallel testing is so challenging for many teams. Simply put, when test cases depend on each other, you have to run them in a particular order, which can destroy any parallel testing strategy. So, it is critical to create your test cases to be atomic and independent of each other. Only then will you be able to run them at any time and in any order, making your testing processes free of constraints.
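
The following pytest sketch contrasts the two styles; all names are illustrative and the API calls are stand-ins, not a real service.

    import uuid
    import pytest

    # Anti-pattern: a test_cancel_order that only passes if test_create_order
    # ran first and stored ORDER_ID in shared state forces a fixed order and
    # breaks parallel scheduling.

    @pytest.fixture
    def order():
        """Every test creates, and afterwards cleans up, its own order."""
        order_id = f"order-{uuid.uuid4()}"  # stand-in for api.create_order()
        yield order_id
        # stand-in for api.delete_order(order_id)

    def test_cancel_order(order):
        # The test owns its data, so it can run at any time, in any order.
        assert order.startswith("order-")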

Tip 3: Avoid Hard-Coding

Hard-coding is embedding data directly into the source code instead of generating it at runtime. This practice is an enemy of efficient parallelization, since it creates the kind of dependencies between test cases mentioned above. It is strongly recommended to avoid hard-coding values when scripting your test cases, so that each of your tests is self-sufficient and can be run whenever necessary. Use a data-driven approach to configure your test cases when they run in parallel.
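
As a hedged example of the data-driven alternative, the test below reads its base URL from the environment and its inputs from a parameter table rather than from literals buried in the test body. The variable name, URL and cases are hypothetical.

    import os
    import pytest

    BASE_URL = os.environ.get("APP_BASE_URL", "https://staging.example.com")

    SEARCH_CASES = [
        ("laptop", 200),
        ("phone", 200),
        ("nonexistent-item", 404),
    ]

    @pytest.mark.parametrize("query,expected_status", SEARCH_CASES)
    def test_search(query, expected_status):
        # A real test would call the application here; the point is that both
        # the environment and the inputs come from data, not hard-coded values.
        url = f"{BASE_URL}/search?q={query}"
        assert url.startswith(BASE_URL)
        assert expected_status in (200, 404)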

Tip 4: Manage Test Data Efficiently

Efficient test data management is the key to successful parallel test execution. But first, you need a strategy. There are two components you might want to include: a creational strategy (which creates the test data a test needs) and a cleanup strategy (which cleans it up afterwards). Still, the only thing that matters is that your strategy is adapted to your particular case. The following ones are quite basic, and perhaps that’s what makes these data management strategies so effective:

  • elementary approach, which has neither a creational nor a cleanup strategy
  • refresh-your-data approach, which resets your source data between test executions but has no creational strategy
  • selfish data generation approach, which has a creational strategy but no cleanup strategy.

These are the most basic strategies. You can mix and match them to serve your own case or explore alternatives, such as generating other data or refreshing specific data.
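
As a small sketch of pairing a creational strategy with a cleanup strategy, the pytest fixture below creates a fresh record for each test and removes it afterwards. The in-memory dictionary is only a stand-in for whatever store your tests actually use.

    import uuid
    import pytest

    FAKE_DB = {}  # stand-in for a real test database

    @pytest.fixture
    def test_user():
        # Creational strategy: each test gets its own freshly created record.
        user_id = str(uuid.uuid4())
        FAKE_DB[user_id] = {"name": "parallel-test-user"}
        yield user_id
        # Cleanup strategy: remove the record so no state leaks between tests.
        FAKE_DB.pop(user_id, None)

    def test_user_exists(test_user):
        assert test_user in FAKE_DB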

Tip 5: Build Tests to be Run in Parallel

You don’t want to end up with a bunch of tests that can’t be run in parallel right before the release date. Planning correctly from the get-go will save you from nasty surprises when a deadline looms. Creating your test cases with parallelization in mind should be your way to go. And believe us, it sounds harder than it actually is: test cases created for running in parallel are simpler, shorter and faster to build.
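
One habit that keeps tests parallel-safe from day one is never sharing mutable resources: give every test its own uniquely named artifact. The file name and contents below are purely illustrative.

    import tempfile
    import uuid
    from pathlib import Path

    def test_report_is_written():
        # Unique file per test, so two parallel workers never clobber each other.
        report = Path(tempfile.gettempdir()) / f"report-{uuid.uuid4()}.txt"
        report.write_text("ok")
        try:
            assert report.read_text() == "ok"
        finally:
            report.unlink()  # clean up the artifact this test created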

Source

[ebook] Mobile App Testing Strategy Combining Virtual and Real Devices

Mobile Apps Require Rigorous Testing & Smart Strategies

But with such a fragmented landscape and so many device/OS combinations to test, as well as specific device capabilities, how can a team make sure they test effectively?

The way to ensure this is through a well-defined testing strategy in each step of the app development lifecycle. By combining real devices with virtual ones, teams can create a risk-based testing strategy to mitigate the risk of escaped defects.

Download this eBook to learn:

  • The main differences between virtual iOS/Android simulators and emulators and real devices.
    • OS version differences.
    • Platform capabilities and feature support.
    • Varying environment conditions.
    • Using each platform within the right SDLC phase.
  • The differences between iOS and Android virtual platforms.
  • How to build a testing strategy with both real and virtual platforms.
  • How to get started with Appium on real and virtual devices.

Click Here to Download this eBook.

Job Alert: Test Engineer (Automation) – Google – Bengaluru, India

Minimum qualifications:

  • Bachelor’s degree in Computer Science or equivalent practical experience.
  • 6 years of software development and testing experience.
  • Experience with the following coding languages: C, C++, Java or Python.
  • Experience with test methodologies, writing test plans, creating test cases and debugging.

Preferred qualifications:

  • Master’s Degree in Computer Science.
  • Experience with the following coding languages: JavaScript or Shell.
  • Strong development management or testing management experience with a proven track record in scaling highly technical teams.

Responsibilities:

  • Develop test strategies.
  • Automate tests using test frameworks.
  • Take responsibility for monitoring product development and usage at all levels with an eye toward improving product quality.
  • Create test harnesses and infrastructure.

Click here to Apply.

Meeting CI/CD Requirements: Key Factors in Test Automation We Must Consider [Video]

Automated tests are a key component of CI (continuous integration) pipelines. They provide confidence that with newly added check-ins, the build will still work as expected. In some cases, the automated tests have the additional role of gating deployments upon failure.

With such a critical responsibility, it’s important that automated tests are developed to meet the needs of continuous integration.

In this video, Angie Jones (Applitools) and Jessica Deen (Microsoft) discussed key factors that should be considered when writing tests that will run as part of CI.

Angie and Jessica also showed a live demo of integrating frontend tests into your pipeline within an Azure DevOps build, so we can see all those theories come to life.