Automated tests are a key component of CI (continuous integration) pipelines. They provide confidence that the build still works as expected after new check-ins land. In some cases, automated tests also gate deployments: a failing run blocks the release from going out.
With such a critical responsibility, it’s important that automated tests are developed to meet the needs of continuous integration.
In this video, Angie Jones (Applitools) and Jessica Deen (Microsoft) discussed key factors that should be considered when writing tests that will run as part of CI.
Angie and Jessica also showed a live demo of integrating frontend tests into your pipeline within an Azure DevOps build, so we can see all those ideas come to life.
There are typically 3 types of automated tests that are run in a CI/CD pipeline:
1. Sprint Level Tests
These are new automated tests written to cover the functionality developed during the sprint. They should be kept in a separate test suite that runs once a day to ensure the newly implemented functionality works as expected.
They are later merged into the regression test suite. In the acceptance testing phase, teams usually have a high-level acceptance test plan that ensures the critical functionality of the system still works as expected after the new features have been merged into the main code branch. The smoke and regression tests are also run again in parallel.
2. High Level Smoke Tests
These high-level automated tests run on every code check-in to ensure the critical functionality of the system still works as expected. They can be a mixture of UI, API, and unit tests. The point of this suite is to get quick feedback about the system, so it should usually finish running within 5–10 minutes.
3. Daily Regression Tests
These tests are run to ensure that new code added to the system did not break existing functionality. They are more detailed than smoke tests, as they cover end-to-end flows through the system. They are usually run at least daily, and often multiple times before a release.
Throughout this process, teams may continue to do manual scripted testing, exploratory testing and/or risk-based testing depending on their level of continuous testing maturity, application complexity and risk tolerance.
Nowadays, most companies are moving towards more and more automation. It is essential to have a plan in place; otherwise, the automation effort may fail.
Here’s a guide (by Ray Claridge) to making automation a success.
Business buy-in – Before starting to automate, make sure you’ve got buy-in from line managers and developers. Remember, automating is time-consuming and will cost your company money to get off the ground.
Plan – Don’t just start automating random functionality. Have a plan and a document explaining the approach and how long each test will take to develop. Remember to get sign-off from all parties involved.
Identify high-risk areas – Automating a fully fledged system is going to take a long time, so do some analysis to identify the high-risk areas, such as the most used, high-volume, security-sensitive, or transactional sections, and focus on them first.
Identify areas less likely to change – Maintaining automated test scripts is not a five-minute job, so don’t start on areas that are likely to change. Equally, don’t assume that functionality less likely to change doesn’t need testing. Past experience has taught me never to assume.
Document your tests – You need to do this so that it’s clear to others exactly what the tests cover. It’s also handy if your automation tool is not available or your tests are falling over.
Keep track of your test runs – Keeping a chart of all your tests and tracking automated vs. manual effort gives visibility that you’re saving your company money. It’s also handy when trying to get buy-in.
Keep it simple – Remember, tests should be simple so they can be re-used again and again. This keeps maintenance costs down and allows others to pick them up in the future, especially if you’ve got a contractor in to write the tests.
And lastly, one for all the Product Managers, Development Managers, and Business Units – Don’t assume that because you’ve got someone writing automated tests, all your code quality issues are over. Remember: automation is only as good as the tests written!
BrowserStack Summer of Learning is a free 5-part webinar series designed to help QA and engineering teams of all sizes learn and scale test automation. Whether you are a beginner QA analyst dabbling with exploratory testing or a 50-releases-a-day veteran, Summer of Learning is a go-to for everybody.
The series begins with an introduction, first to Selenium and then to BrowserStack, and gradually takes you through the process of moving from manual to automated testing. It covers industry trends in testing and features testing stalwarts such as The Weather Channel, who release to millions of users each day.
BrowserStack Summer of Learning has the following episodes:
Episode 1 — The Basics: Getting started with Selenium: An introduction to Selenium, how to set up/write your first test scripts, and how to pick the right framework. This is a great introductory session for those looking to learn test automation in 60 minutes.
Episode 2 — Introduction to BrowserStack Automate: In this episode, you’ll learn how to set up and run your first test with Automate, how to test on various real devices and browsers on the BrowserStack Real Device cloud, how to test your local instance on the cloud, and how to collaborate and debug better (a minimal sketch of a first remote test appears after this list).
Episode 3 — Continuous testing at scale: You’ll learn how to build an efficient, well-integrated CI pipeline that helps release quality software at speed. You’ll also learn how to use BrowserStack to deploy faster and listen to stories from great companies like The Weather Channel, who release to millions of users every day.
Episode 4 — Selenium + BrowserStack at scale: In Episode 4, David Burns, a core contributor to Selenium, will explain how to plan parallelization more effectively to achieve faster build times, the best ways to maintain test hygiene while scaling your team or automation suite, and how to monitor test feedback effectively.
Episode 5 — Testing for a mobile-first market: There are 9,000 distinct mobile devices in the market—and you most definitely can’t test on them all. But with this episode, you’ll learn the best strategy to pick the right devices for testing your website or mobile app.
This video is technical and touches on topics such as test automation framework design, hermetic servers, Docker containers, architecture for testability, test environment provisioning, DevOps collaboration, testing with dependencies on internal and external services, and the joys and pitfalls of parallel execution.
Every Dev and QA team wants to release software at speed. Parallel testing helps teams run a massive number of tests in minutes, reducing build times and enabling faster releases. However, in order to prepare for parallelization, you need a well-structured test suite and a stable, scalable test infrastructure. In this webinar, David Burns will demonstrate how to optimize your test suites to make the most of parallel test execution (a simplified sketch of the core idea follows the list below).
From the webinar you’ll learn how to:
Effectively plan parallelization
Maintain test hygiene while scaling your parallels
Achieve faster build times by running tests in parallel
A simple retry and wait strategy goes a long way – no need to graduate from any test-automation university to understand the difference between “implicit waits”, “explicit waits” and “fluent waits” 🙂
Test automation is critical to the DevOps pipeline. But its rate of adoption varies. And many teams still struggle to achieve successful test automation.
A couple of years back, the focus of testing was “Shift Left”. It is still the focus for companies that have not yet been able to adopt Shift Left practices.
Now, beyond Shift Left and CI/CD, automation tool companies are looking at how to bring intelligence into test automation, which can help improve test coverage and deliver quality at speed.
In this post, we will go over some intelligent features of Functionize and how it uses AI to bring intelligence to automation and efficiency to quality and delivery.
Artificial intelligence (machine learning, computer vision, and natural language processing) can help speed up automation test scripting, analysis, and maintenance.
Below are some examples of how Functionize is using AI to help deliver Quality@Speed:
Faster test creation: Write tests in plain English (no, not Gherkin) and the framework will identify the objects by analyzing the DOM and perform the actions. Example below:
Open xyz.com
Enter Username “abc”
Enter password “asdf123”
Navigate to Contact module.
Click in person record “Braidy”
Self-Healing Scripts: The framework identifies the reason a script failed (such as a change in object properties) and either gives you suggestions for fixing the script or fixes it by itself.
Autonomous Testing: All you need to do is place their tracking widget in your web app. Functionize will automatically create new test cases based on how live users interact with your site.
I am sure this short post will give you some ideas about how AI/ML can make our lives easier and help us deliver fast with quality.