In the early days of Agile, methods such as Extreme Programming advocated for shipping without a QA phase. In fact, they often didn’t have dedicated software testers or even bug-tracking systems. And yet there are stories of these teams producing an order of magnitude fewer defects than normal. What did these teams do, and why did it work? And what role does that leave for QA? In an age where Agile is interpreted to mean “sprints” and “story points,” the technical side of Agile is often forgotten. This presentation discusses the technical underpinnings of Agile and how they lead to true business agility.
Ultimate Productivity Toolkit – 365+ Resources to be a Super Productive Tester
Tools have been a tester's friend from day one. Be it a tool to remember passwords or to make notes, a good tester has always made the best use of different tools. We have tools for many activities. A good tester not only understands the importance of tools but also knows the limitations of each one.
Just as there are no best practices, there are no best tools. There are tools suited to each activity, and a tool that fits one situation might not be appropriate in another.
It is therefore very important that a tester knows at least 2-3 tools from each category.
Rahul Parwal and Ajay Balamurugadas welcome you to enjoy their collection of tools, along with an inspiring quotation on each page.
Whether your app sends transactional emails or your company sends marketing ones, you must test them before they reach real users. The notion that only marketers are responsible for email testing is not quite accurate, since template debugging, email infrastructure setup, and analysis of other technical email aspects are duties of the dev and QA teams.
In this post, Dmytro explains why testers need to be aware of email testing as a part of mobile testing and which elements must be checked.
3 Email Testing Pillars to Remember:
HTML templates need inspecting and debugging
Ensuring email deliverability is a must
Email infrastructure requires configuring and monitoring
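One such check that a QA engineer can script is routing a template through a sandbox SMTP server instead of the production one, so nothing reaches real users. Here is a minimal sketch using the Jakarta Mail API (older stacks use javax.mail); the host, port, credentials, and addresses are placeholders, not a real service:

import java.util.Properties;
import jakarta.mail.*;
import jakarta.mail.internet.InternetAddress;
import jakarta.mail.internet.MimeMessage;

public class SandboxEmailCheck {
    public static void main(String[] args) throws MessagingException {
        // Hypothetical sandbox SMTP settings -- substitute your service's values.
        Properties props = new Properties();
        props.put("mail.smtp.host", "sandbox.smtp.example.com");
        props.put("mail.smtp.port", "2525");
        props.put("mail.smtp.auth", "true");
        props.put("mail.smtp.starttls.enable", "true");

        Session session = Session.getInstance(props, new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication("sandbox-user", "sandbox-pass");
            }
        });

        // The HTML template lands in the sandbox inbox, where it can be inspected.
        Message message = new MimeMessage(session);
        message.setFrom(new InternetAddress("app@example.com"));
        message.setRecipients(Message.RecipientType.TO, InternetAddress.parse("qa@example.com"));
        message.setSubject("Password reset (template check)");
        message.setContent("<h1>Reset your password</h1><p>The link renders here.</p>", "text/html; charset=utf-8");
        Transport.send(message);
    }
}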
Hey Guys, Lighthouse is an open-source, automated tool for improving the quality of web pages. You can run it against any web page, public or requiring authentication. It has audits for performance, accessibility, progressive web apps, SEO, and more. This tool can make everyone's life easier – Developers / SDETs / Product Managers.
You can run Lighthouse in Chrome DevTools, from the command line, or as a Node module, or you can use https://web.dev/measure/ to measure performance. You give Lighthouse a URL to audit, it runs a series of audits against the page, and then it generates a report on how well the page did. From there, use the failing audits as indicators of how to improve the page. Each audit has a reference doc explaining why the audit is important, as well as how to fix it.
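For example, a typical command-line run looks like the following (the URL is a placeholder; the CLI is installed via npm):

npm install -g lighthouse
lighthouse https://example.com --only-categories=performance --output html --output-path ./report.html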
As far as the webpage Performance Test is concerned, Lighthouse measures the following metrics:
First Contentful Paint: FCP measures how long it takes the browser to render the first piece of DOM content after a user navigates to your page. Images, non-white <canvas> elements, and SVGs on your page are considered DOM content; anything inside an iframe isn’t included.
Speed Index: Speed Index measures how quickly content is visually displayed during page load. Lighthouse first captures a video of the page loading in the browser and computes the visual progression between frames. Lighthouse then uses the Speedline Node.js module to generate the Speed Index score.
Largest Contentful Paint: Largest Contentful Paint (LCP) is an important, user-centric metric for measuring perceived load speed because it marks the point in the page load timeline when the page’s main content has likely loaded—a fast LCP helps reassure the user that the page is useful. The Largest Contentful Paint (LCP) metric reports the render time of the largest image or text block visible within the viewport, relative to when the page first started loading.
Time to Interactive: TTI measures how long it takes a page to become fully interactive. A page is considered fully interactive when:
The page displays useful content, which is measured by the First Contentful Paint,
Event handlers are registered for most visible page elements, and
The page responds to user interactions within 50 milliseconds.
Total Blocking Time: TBT measures the total amount of time that a page is blocked from responding to user input, such as mouse clicks, screen taps, or keyboard presses. The sum is calculated by adding the blocking portion of all long tasks between First Contentful Paint and Time to Interactive. Any task that executes for more than 50 ms is a long task. The amount of time after 50 ms is the blocking portion. For example, if Lighthouse detects a 70 ms long task, the blocking portion would be 20 ms.
Cumulative Layout Shift: Cumulative Layout Shift (CLS) is an important, user-centric metric for measuring visual stability because it helps quantify how often users experience unexpected layout shifts—a low CLS helps ensure that the page is delightful.
CLS measures the sum total of all individual layout shift scores for every unexpected layout shift that occurs during the entire lifespan of the page.
A layout shift occurs any time a visible element changes its position from one rendered frame to the next.
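For a rough sense of the math: each individual layout shift score is the impact fraction (the share of the viewport affected by the unstable element across the two frames) multiplied by the distance fraction (the distance the element moved, relative to the viewport's largest dimension). For example, an element that occupies half of the viewport and shifts down by a quarter of the viewport height affects 75% of the viewport and moves 25% of its height, so its layout shift score is 0.75 × 0.25 = 0.1875.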
Angie Jones and Applitools have recently launched a new initiative – Automation Cookbook. It offers free, bite-size recipes for JS (#Cypress) and Java (#Selenium).
Here is a tip from the cookbook: uploading files with Selenium Java.
Code:
package file_upload;

import base.BaseTests;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;

public class FileUploadTests extends BaseTests {

    @BeforeEach
    public void launchApp() {
        driver.get("https://kitchen.applitools.com/ingredients/file-picker");
    }

    @Test
    public void testFileUpload() {
        String filePath = "/Users/angie/workspace/recipes/resources/images/mac-n-cheese.jpg";
        // Sending the absolute file path to the file input uploads the file directly,
        // so the native OS file dialog (which WebDriver cannot drive) never opens.
        driver.findElement(By.id("photo-upload")).sendKeys(filePath);
    }
}
Navigating backward in browser history: driver.navigate().back();
Navigating forward in browser history: driver.navigate().forward();
Refreshing/reloading a web page: driver.navigate().refresh();
Closing the browser: driver.close();
Closing the browser and all other windows associated with the driver: driver.quit();
Moving between windows: driver.switchTo().window("windowName");
Moving between frames: driver.switchTo().frame("frameName");
Drag and drop:
WebElement element = driver.findElement(By.name("source"));
WebElement target = driver.findElement(By.name("target"));
(new Actions(driver)).dragAndDrop(element, target).perform();
You must have always craved more logs from WebDriver, so that you can debug your scripts or log more information about your tests.
Here is your answer: WebDriverEventListener and the TestNG listener (the ITestListener interface).
In TestNG, you register listeners with the @Listeners annotation.
WebDriverEventListener – this is an interface with predefined callback methods, which we implement to hook into driver events.
Difference: TestNG listeners are triggered at the test level, such as before a test starts, after it finishes, or when it fails; WebDriver listeners are triggered at the component level, such as before or after a click.
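Here is a minimal sketch of both listener types, assuming Selenium 3's EventFiringWebDriver (Selenium 4 replaced this mechanism with WebDriverListener and EventFiringDecorator) and TestNG 7+, where ITestListener methods have default implementations; the class names are illustrative:

package listeners;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.events.AbstractWebDriverEventListener;
import org.openqa.selenium.support.events.EventFiringWebDriver;
import org.testng.ITestListener;
import org.testng.ITestResult;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Listeners;
import org.testng.annotations.Test;

// Component-level: fires around individual WebDriver actions.
class LoggingWebDriverListener extends AbstractWebDriverEventListener {
    @Override
    public void beforeClickOn(WebElement element, WebDriver driver) {
        System.out.println("About to click: " + element);
    }

    @Override
    public void afterNavigateTo(String url, WebDriver driver) {
        System.out.println("Navigated to: " + url);
    }
}

// Test-level: fires around whole TestNG test methods.
class LoggingTestListener implements ITestListener {
    @Override
    public void onTestStart(ITestResult result) {
        System.out.println("Starting test: " + result.getName());
    }

    @Override
    public void onTestFailure(ITestResult result) {
        System.out.println("Test failed: " + result.getName());
    }
}

@Listeners(LoggingTestListener.class)
public class ListenerDemoTests {
    private EventFiringWebDriver driver;

    @BeforeMethod
    public void setUp() {
        // Wrap the real driver so registered listeners see every command.
        driver = new EventFiringWebDriver(new ChromeDriver());
        driver.register(new LoggingWebDriverListener());
    }

    @Test
    public void logsAroundActions() {
        driver.get("https://example.com");
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}

Because the EventFiringWebDriver wraps the real driver, every navigation, find, and click passes through the registered listener without any changes to the tests themselves.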