Q1. What are containers?

Containers are used to provide a consistent computing environment as an application moves from a developer’s laptop to a test environment, and from staging into production.

More formally, a container consists of an entire runtime environment: an application plus all of its dependencies, libraries, other binaries, and the configuration files needed to run it, bundled into one package. Containerizing the application platform and its dependencies abstracts away differences in OS distributions and underlying infrastructure.

Understanding Parallel Testing in Test Automation (Part 1)

Parallel testing is an automated testing process in which developers and testers launch multiple tests against different real-device combinations and browser configurations simultaneously. The goal of parallel testing is to ease the constraint of time by distributing tests across available resources.

For example, if 20 test cases take a total of 100 minutes to complete, then 10 parallel executions running 2 test cases each could bring the total testing time down to about 10 minutes. Ideally, if you have sufficient resources, say 20 real mobile devices and real browsers for simultaneous execution of all 20 test cases, you can shrink the runtime to roughly 5 minutes.
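To make the arithmetic concrete, here is a minimal sketch assuming Python with pytest and the pytest-xdist plugin (the article itself does not prescribe any particular tool): three one-second tests finish in roughly one second on three workers instead of three seconds sequentially, the same proportion as the 20-tests-on-10-workers example above.

```python
# test_math_parallel.py
# Minimal sketch: independent test cases that pytest-xdist can distribute
# across workers (assumes: pip install pytest pytest-xdist).

import time

def test_addition():
    time.sleep(1)          # stand-in for real test work
    assert 2 + 2 == 4

def test_subtraction():
    time.sleep(1)
    assert 5 - 3 == 2

def test_multiplication():
    time.sleep(1)
    assert 3 * 4 == 12

# Run sequentially:   pytest test_math_parallel.py        (~3 seconds)
# Run on 3 workers:   pytest -n 3 test_math_parallel.py   (~1 second)
```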


Benefits of Parallel Testing:

Speed: Sequential testing is time-consuming, while parallel testing lets you divide the invested time by the number of environments. To test your application against ten devices, you only need to write ONE script and run it against all of your target devices, cutting your testing time roughly tenfold (see the sketch after this list).

Cost-Efficiency: Building, maintaining, and keeping your own test environment up to date can burn a hole in your pocket. With parallel testing in the cloud, maintenance is no longer a headache: you simply lease the testing environment you need, always kept up to date. Plus, cloud-based testing grids allow you to run tests at high concurrency, making the cost per test significantly lower.

Better Coverage: It’s always a good idea to run your application through as many platform-device-browser combinations as possible so that no bug sneaks in. Parallel testing will take your test coverage to the next level, giving you a significant ROI boost.

Optimization of Your CI/CD Processes: Parallel testing is the best friend of continuous integration and delivery. By testing in parallel, you can run tests as soon as developers submit new code updates throughout the entire SDLC. Timely reporting and quick feedback in parallel testing will also facilitate better communication between various departments.

Improvement of Testing Practices: Parallel testing improves the QA routine in your company. The reason is crystal clear: by testing at high speed, you can test more. This gives your QA team a chance to improve their testing practices and pinpoint bugs faster.
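As a sketch of the “one script, many configurations” idea from the Speed and Coverage points above, the snippet below parametrizes a single Selenium test over several browser/platform combinations and points it at a remote grid. The grid URL and the platform values are placeholders, not any specific vendor’s API; check your provider’s documentation for the exact capabilities it expects.

```python
# one_script_many_targets.py - a sketch only; GRID_URL and the platform
# names are placeholders for whichever cloud grid you actually use.

import pytest
from selenium import webdriver

GRID_URL = "https://hub.example-cloud.com/wd/hub"   # placeholder endpoint

CONFIGS = [
    ("chrome", "Windows 11"),
    ("firefox", "macOS 14"),
    ("MicrosoftEdge", "Windows 10"),
]

OPTIONS = {
    "chrome": webdriver.ChromeOptions,
    "firefox": webdriver.FirefoxOptions,
    "MicrosoftEdge": webdriver.EdgeOptions,
}

@pytest.mark.parametrize("browser,platform", CONFIGS)
def test_homepage_title(browser, platform):
    options = OPTIONS[browser]()                      # one script...
    options.set_capability("platformName", platform)  # ...many targets
    driver = webdriver.Remote(command_executor=GRID_URL, options=options)
    try:
        driver.get("https://example.com")
        assert "Example" in driver.title
    finally:
        driver.quit()

# "pytest -n 3 one_script_many_targets.py" starts all three configurations
# at once, one grid session per xdist worker.
```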

Read Part 2: Tips for Successful Parallel Testing


Tips for Successful Parallel Testing (Part 2)

Read Part 1: What is Parallel Testing in Automation?

Here are some tips for successful parallel testing:

Tip 1: Test in a Cloud-Based Environment

Implementing a parallel testing strategy using in-house resources is one of the most common mistakes. Building and maintaining your own infrastructure is inefficient: it is not only time- and cost-consuming, it also keeps you from making the most of parallel testing, where the ability to test at high concurrency is a major advantage. Keeping your testing environment up to date also requires a lot of resources. To this end, it is a good idea to turn to cloud-based services that let you access the necessary devices at any time.

Tip 2: Avoid Dependencies Between Tests

Dependencies between different test cases are a primary reason why transitioning to parallel testing is so challenging for many teams. Simply put, when test cases depend on each other, you have to run them in a particular order, which can undermine any parallel testing strategy. So it is critical to create your test cases to be atomic and independent of one another. Only then will you be able to run them at any time and in any order, making your testing processes free of constraints.
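A minimal sketch of the difference, assuming pytest; the user data here is an in-memory stand-in for whatever your application actually manages:

```python
import uuid

# --- Dependent (anti-pattern): the second test only passes if the first
#     one ran before it, so the pair cannot be distributed across workers.
_shared = {}

def test_create_user_dependent():
    _shared["user"] = {"name": "Alice"}

def test_rename_user_dependent():
    _shared["user"]["name"] = "Bob"        # KeyError if run alone or first
    assert _shared["user"]["name"] == "Bob"

# --- Atomic: every test builds and owns the state it needs, so it can run
#     at any time, in any order, on any worker.
def make_user(name):
    return {"id": str(uuid.uuid4()), "name": name}

def test_rename_user_atomic():
    user = make_user("Alice")              # no ordering assumptions
    user["name"] = "Bob"
    assert user["name"] == "Bob"
```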

Tip 3: Avoid Hard-Coding

Hard-coding is embedding data directly into the source code instead of generating or supplying it at runtime. This practice is an enemy of efficient parallelization, since it creates exactly the kind of dependencies between test cases mentioned above. Avoid hard-coding values when scripting your test cases so that each test is self-sufficient and can be run whenever necessary, and use a data-driven approach to configure your test cases when they run in parallel.
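Here is a sketch of that data-driven alternative, again assuming pytest; the commented-out helpers stand in for your application’s own API and are not real functions:

```python
import uuid
import pytest

def unique_email():
    # Generated at runtime, so concurrent workers never collide on one
    # hard-coded account such as "testuser@example.com".
    return f"user-{uuid.uuid4().hex[:8]}@example.test"

# Test data lives outside the test logic and is fed in by the runner.
CHECKOUT_CASES = [
    {"items": 1, "coupon": None},
    {"items": 3, "coupon": "SAVE10"},
]

@pytest.mark.parametrize("case", CHECKOUT_CASES)
def test_checkout(case):
    email = unique_email()
    # register_user(email); checkout(email, **case)   # hypothetical helpers
    assert case["items"] > 0 and "@" in email
```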

Tip 4: Manage Test Data Efficiently

Efficient test data management is the key to successful parallel test execution. But first, you need a strategy. There are two components you might want to include: a creational strategy (which creates the test data a test needs) and a cleanup strategy (which removes that data afterwards). Still, the only thing that matters is that your strategy is adapted to your particular case. The following ones are quite basic, and perhaps that is what makes these data management strategies so effective:

  • the elementary approach, which has no creational or cleanup strategy
  • the refresh-your-data approach, which resets your data source in between test executions but has no creational strategy
  • the selfish data-generation approach, which has a creational strategy but no cleanup strategy.

These are the most basic strategies. You can mix and match them to serve your own case or explore alternatives, such as generating other data or refreshing specific data.
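As one possible shape for a creational-plus-cleanup strategy, here is a sketch using a pytest fixture; the in-memory dict stands in for a real database or service:

```python
import uuid
import pytest

FAKE_DB = {}   # stand-in for a real data store

@pytest.fixture
def fresh_account():
    # Creational strategy: each test gets its own record, so tests running
    # in parallel never share state.
    account_id = str(uuid.uuid4())
    FAKE_DB[account_id] = {"balance": 100}
    yield account_id
    # Cleanup strategy: the record is removed even if the test failed.
    FAKE_DB.pop(account_id, None)

def test_withdrawal(fresh_account):
    FAKE_DB[fresh_account]["balance"] -= 40
    assert FAKE_DB[fresh_account]["balance"] == 60
```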

Tip 5: Build Tests to be Run in Parallel

You don’t want to end up with a bunch of tests that can’t be run in parallel right before the release date. Planning correctly from the get-go will save you from nasty surprises when a deadline looms. Creating your test cases with parallelization in mind should be your default approach. And believe us, it sounds harder than it actually is: test cases designed to run in parallel are simpler, shorter, and faster to build.


Meeting CI/CD Requirements: Key Factors in Test Automation We Must Consider [Video]

Automated tests are a key component of CI (continuous integration) pipelines. They provide confidence that with newly added check-ins, the build will still work as expected. In some cases, the automated tests have the additional role of gating deployments upon failure.

With such a critical responsibility, it’s important that automated tests are developed to meet the needs of continuous integration.
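As a rough sketch of such a gate (assuming a Python/pytest suite; most CI systems already fail a job on a non-zero exit code, so this explicit script is just one way to make the gate visible):

```python
# gate_on_tests.py - run the suite and block the (placeholder) deploy step
# when any test fails.

import subprocess
import sys

def main() -> int:
    tests = subprocess.run(["pytest", "--maxfail=1"])
    if tests.returncode != 0:
        print("Tests failed: deployment is blocked.")
        return tests.returncode
    print("Tests passed: handing off to the deploy step.")
    # subprocess.run(["./deploy.sh"], check=True)   # placeholder deploy step
    return 0

if __name__ == "__main__":
    sys.exit(main())
```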

In this video, Angie Jones (Applitools) and Jessica Deen (Microsoft) discussed key factors that should be considered when writing tests that will run as part of CI.

Angie and Jessica also showed a live demo of integrating frontend tests into a pipeline within an Azure DevOps build, so we can see all those ideas come to life.

What is TestOps? TestOps – Chasing the White Whale by Ioana Serban

TestOps is the reclusive member of the DevOps family. Reports of sightings have been few and far between. Although we’ve heard stories, most of us don’t know anyone who’s actually seen it in the wild. I’ve turned to my personal experience as a tester to search for clues about what TestOps is. We’ll talk about what testers have to offer in a DevOps environment and look at the magical things that happen when the worlds of testing and operations collide.

There will be real-life stories, including:

  • simulating an environment migration under production-like traffic (plus screwing it up and learning from it)
  • choosing an AWS architecture with actual test data to compare performance for different instance types
  • busting the myth that you can mitigate a DoS threat by scaling horizontally.

Finally, we’ll dare to look into the future at what can happen when intentionally applying a TestOps mentality to our job and what regular teams can do to get there.

Key takeaways:

  • How getting testers involved in DevOps brings actual value.
  • Why applying a TestOps mentality to your work can open the door to a whole new type of testing.
  • Getting there is not as hard as you think!

Author: Ioana Serban

Taking Control Of Your Test Environment

“Taking Control Of Your Test Environment” by Ioana Serban

Most of us have had to deal with red builds blocking our testing or have been told to test on flaky environments where half the issues you find would ‘never happen in production’. As a tester, I used to think this wasn’t my problem.

What happens though when a thinking tester decides this is her problem and wants to be part of the solution?

This talk exposes some of the possible causes why builds stay red or an environment is “flaky”. For instance:

  1. There are bugs in your build.
  2. You are dependent on a third-party system that is not functioning correctly.
  3. Your deployment may have gone awry; something may be missing.
  4. Your environment is not set up in a consistent way.

We’ll look at some approaches that target each of these causes and show testers how they can acquire the skills necessary to take control of their test environment.

In case of bugs in existing functionality, you need to ask yourself: Are you running automated checks against the build? If yes, either you don’t have the right checks in place, are ignoring failed checks, or, even worse, the issue is intermittent. Testers that seek out a deep technical understanding of their product can be capable of chasing an issue down the whole technology stack without relying on a developer.

Stubbing out a third-party service removes the uncertainty it introduces and lets you verify the functionality of your own product. To deal with the real issue, though, a tester can communicate directly with the third-party team, providing information to and from both sides.
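A minimal sketch of such a stub, using Python’s unittest.mock; the gateway and the function under test are hypothetical names used only for illustration:

```python
from unittest import mock

def place_order(gateway, amount):
    # Code under test: it only cares that the gateway confirms the charge.
    response = gateway.charge(amount)
    return "confirmed" if response["status"] == "ok" else "rejected"

def test_place_order_with_stubbed_gateway():
    gateway = mock.Mock()
    # The stub answers instantly and deterministically, so a flaky or
    # unavailable third-party service cannot turn the build red.
    gateway.charge.return_value = {"status": "ok"}

    assert place_order(gateway, 42) == "confirmed"
    gateway.charge.assert_called_once_with(42)
```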

Testers can get involved with the automation of both the deployment and environment setup, which are traditionally Operations roles. This is also often the realm of “magic scripts” that are not considered part of the deliverables and are not properly tested. A tester’s input can be very valuable here.

Speaker Bio: Ioana Serban started working as a Software Engineer in Test in 2011. With a strong leaning towards the more technical side of testing, she’s interested in learning all aspects of the craft and is a big fan of challenging assumptions about what a tester is or isn’t “supposed” to do. Ioana previously worked for Adobe and is currently working for eBay as an embedded tester in an agile team. She blogs at https://medium.com/@ioanasaysgo