Cypress Testing Best Practices and Tips for Assertions Techniques

“Cypress Testing – Assertion Techniques: Best Practices and Tips” focuses on enhancing the efficiency and effectiveness of test assertions in Cypress, a popular JavaScript end-to-end testing framework.

Assertions play a crucial role in ensuring the reliability and correctness of web application tests. Developers and testers use them to validate expected outcomes by asserting conditions about the application’s state during test execution.

We can summarise the key features of assertions in Cypress testing as follows:

  • Rich Assertions: Comprehensive checks for element properties (existence, visibility, text content, attributes).  
  • Seamless Integration: Assertions smoothly blend into test syntax, improving readability and maintenance.  
  • Automatic Retry: Robust handling of asynchronous tasks, minimizing test flakiness. 
  • Expressive Tests: Empowers developers to create clear, comprehensive, and efficient tests, boosting confidence in the testing process. 

Assertions in Cypress Testing:

Verify that an element exists in the DOM:

Syntax: .should('exist')

Example:
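
For instance, a minimal sketch assuming a hypothetical #submit-button element:

// the element only needs to be present in the DOM, not necessarily visible
cy.get('#submit-button').should('exist')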

Verify that an element does not exist in the DOM:

Syntax: .should('not.exist')

Example:
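
A minimal sketch, assuming a hypothetical #loading-spinner element that should be gone:

// passes once no matching element remains in the DOM
cy.get('#loading-spinner').should('not.exist')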

Verify that an element is visible/Not Visible:

Syntax: .should('be.visible')  /  .should('not.be.visible')

Example:
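
A minimal sketch with hypothetical selectors:

cy.get('#welcome-banner').should('be.visible')      // rendered and visible
cy.get('#error-message').should('not.be.visible')   // present but not visible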

Verify that an element is hidden:

Syntax: .should('be.hidden')

Example:
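
A minimal sketch, assuming a hypothetical #promo-popup element that is hidden by CSS:

cy.get('#promo-popup').should('be.hidden')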

Verify that an element has the expected value that the user has entered in the textbox: 

Syntax: .should('have.value', 'expectedValue')

Example:
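
A minimal sketch, assuming a hypothetical #username textbox:

cy.get('#username').type('john.doe')
cy.get('#username').should('have.value', 'john.doe')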

Verify that a string includes the expected substring:

Syntax: .should('include', 'expectedSubstring')

Example:
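
A minimal sketch, assuming a hypothetical #welcome-message element:

// invoke('text') yields the element's text so the string assertion runs on it
cy.get('#welcome-message').invoke('text').should('include', 'Welcome')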

Verify that a string matches a regular expression pattern:

Syntax: .should('match', /regexPattern/)

Example:
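
A minimal sketch, assuming a hypothetical #order-id element whose text looks like ORD-123456:

cy.get('#order-id').invoke('text').should('match', /^ORD-\d{6}$/)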

Verify the length of an array or the number of elements matched:

Syntax: .should('have.length', expectedLength)

Example:
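
A minimal sketch, assuming a hypothetical todo list with three items:

cy.get('ul.todo-list li').should('have.length', 3)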

Verify if the element is focused: 

Syntax: .should('have.focus')
.should('be.focused')

Example:
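
A minimal sketch, assuming a hypothetical #search-input field:

cy.get('#search-input').focus()
cy.get('#search-input').should('have.focus')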

Verify the title of the page: 

Syntax: cy.title().should('include', 'Example Domain')

Example:
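
A minimal sketch against the example.com demo page:

cy.visit('https://example.com')
cy.title().should('include', 'Example Domain')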

Verify the URL:

Syntax: cy.url().should('eq', 'https://www.spurqlabs.com');

Example:
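
A minimal sketch (note that the resolved URL may include a trailing slash):

cy.visit('https://www.spurqlabs.com')
cy.url().should('eq', 'https://www.spurqlabs.com/')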

Verify multiple assertions at a time: 

Example:
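
A minimal sketch chaining several assertions with .and(), assuming a hypothetical #login-button:

cy.get('#login-button')
  .should('be.visible')
  .and('have.text', 'Log in')
  .and('not.be.disabled')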

Property Assertion in Cypress Testing

Verify that an element has the expected attribute value:

Syntax: .should('have.attr', 'attributeName', 'expectedValue')

Example:
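
A minimal sketch, assuming a hypothetical documentation link:

cy.get('a.docs-link').should('have.attr', 'href', 'https://docs.cypress.io')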

Verify that an element has a specific CSS property with the expected value:

Syntax: .should('have.css', 'propertyName', 'expectedValue')

Example:
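
A minimal sketch, assuming a hypothetical #submit-button styled green (computed CSS colors are reported as rgb values):

cy.get('#submit-button').should('have.css', 'background-color', 'rgb(0, 128, 0)')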

Verify that an element has the expected text content:

Syntax: .should('have.text', expectedText)

Example:
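
A minimal sketch, assuming a hypothetical page heading:

cy.get('h1.page-title').should('have.text', 'Dashboard')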

Verify that an input element has the expected value:

Syntax: .should('have.value', expectedValue)

Example:
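
A minimal sketch, assuming a hypothetical email input:

cy.get('input[name="email"]')
  .type('user@example.com')
  .should('have.value', 'user@example.com')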

Verify that a given value is NaN, or “not a number”:

Syntax: .should('be.a.NaN')

Example:
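
A minimal sketch, using cy.wrap() so the assertion runs on a plain value:

// Number('not-a-number') evaluates to NaN
cy.wrap(Number('not-a-number')).should('be.a.NaN')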

Verify an element or collection of elements is empty: 

Syntax: .should('be.empty')
   .should('not.be.empty')

Example:
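
A minimal sketch with hypothetical selectors:

cy.get('#notifications-list').should('be.empty')     // no child nodes or text
cy.get('#main-content').should('not.be.empty')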

Verify if a checkbox or radio button is checked:

Syntax: .should('be.checked')

Example:
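
A minimal sketch, assuming a hypothetical terms-and-conditions checkbox:

cy.get('#terms').check().should('be.checked')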

Verify if a checkbox or radio button is not checked:

Syntax: .should('not.be.checked')

Example:
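
A minimal sketch, assuming a hypothetical radio button that starts out unselected:

cy.get('#option-b').should('not.be.checked')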

Verify if it is an array:

Syntax: .should('be.an', 'array')

Example:
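
A minimal sketch, using cy.wrap() to assert on a plain JavaScript value:

cy.wrap([1, 2, 3]).should('be.an', 'array')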

Verify if an object has specific keys:

Syntax: .should('have.keys', ['id', 'name', 'email'])

Example:
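
A minimal sketch, assuming a hypothetical user object (for example, one returned by an API call):

cy.wrap({ id: 1, name: 'Asha', email: 'asha@example.com' })
  .should('have.keys', ['id', 'name', 'email'])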

Verify if a value is one of a specific set of values:

Syntax: .should('be.oneOf', ['value1', 'value2', 'value3'])

Example:
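
A minimal sketch with placeholder values:

cy.wrap('value2').should('be.oneOf', ['value1', 'value2', 'value3'])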

Verify that a numeric value is within a certain range of another value:

Syntax: .should('be.closeTo', expectedValue, delta)
      .should('be.within', startValue, endValue);

Example:
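
A minimal sketch with hypothetical values and selectors:

cy.wrap(10.2).should('be.closeTo', 10, 0.5)                         // within 0.5 of 10
cy.get('ul.cart-items li').its('length').should('be.within', 1, 5)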

Verify Object assertion: 

Syntax: .should('have.property', 'propertyName', 'expectedValue')

Example:
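
A minimal sketch, assuming a hypothetical user object:

cy.wrap({ role: 'admin', active: true }).should('have.property', 'role', 'admin')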

Check is() block Assertions:

In the context of Cypress testing, the jQuery .is() method is typically used with conditions that check various states or attributes of an element yielded by a Cypress command. Here are some examples of selectors and conditions you might use inside .is() (a short sketch follows this list):

  • Check if an element is visible:
  • Check if a button or input is enabled:
  • Check if an input field is readonly:
  • Check if an element contains specific text:
  • Check if an element has a specific attribute value:
  • Create custom conditions based on your specific requirements:
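
A minimal sketch of such checks, assuming hypothetical selectors; .is() here is the jQuery method available on the element yielded into .then():

cy.get('#save-button').then(($btn) => {
  // jQuery pseudo-selectors express the conditions from the list above
  if ($btn.is(':visible') && $btn.is(':enabled')) {
    cy.wrap($btn).click()
  }
})

cy.get('#username').then(($input) => {
  if ($input.is('[readonly]')) {
    cy.log('The username field is read-only')
  }
})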

Conclusion:

Cypress Testing with its rich set of functionalities and integration benefits, empowers developers to create expressive and comprehensive tests. The combination of these features fosters a more efficient and confident testing process, ultimately contributing to the overall reliability of web applications.

How to Automate Chrome Extension using selenium?

Introduction to Automate Chrome Extension:

Over the years, the landscape of software testing has gradually shifted from predominantly manual testing toward an increasing emphasis on automation. In your career as a test engineer, you will inevitably run into automation testing. In the current software industry, clients expect frequent and repetitive deployments. If you work in Quality Assurance, you are likely to test systems with frequently changing requirements or a rapid stream of new ones. Such a dynamic landscape calls for constant adaptation to frequent code changes within tight deadlines, a challenge that we can effectively address by adopting automation testing methodologies.

Why to Automate Chrome Extension:

We often use Chrome extensions in our daily activities, and they are crucial for enhancing productivity. The repetitive nature of certain tasks associated with these extensions can become monotonous over time. This blog aims to guide you through automating Chrome extensions and executing click actions using Selenium, a widely acclaimed open-source test automation framework introduced in 2004. If you need to use a particular extension regularly, the conventional approach is to add the extension to Chrome manually and perform the same task repeatedly. This manual repetition not only increases effort but also consumes valuable time. Therefore, to streamline this process and save both manual effort and time, we present a precise method to automate Chrome extensions and configure them seamlessly for efficient use.

How to Automate Chrome Extension:

In this article, we will learn how to automate Chrome extensions and perform click actions using Selenium WebDriver, and we will look at the Robot class in Selenium. We will work in the Chrome browser using Java. Here we go!!

Before moving on to the main topic, let’s quickly review the techniques we will use to automate a Chrome extension and perform actions on it.

Prerequisite:

Technologies and download links:

  • IntelliJ IDEA IDE: https://www.jetbrains.com/idea/download/#section=windows
  • Maven: https://maven.apache.org/download.cgi
  • Java JDK 11: https://www.oracle.com/in/java/technologies/javase/jdk11-archive-downloads.html
  • Cucumber-java 7.11.0: https://mvnrepository.com/artifact/io.cucumber/cucumber-java/7.11.0
  • Cucumber-core 7.11.0: https://mvnrepository.com/artifact/io.cucumber/cucumber-core/7.11.0
  • Selenium-java 4.8.0: https://mvnrepository.com/artifact/org.seleniumhq.selenium/selenium-java/4.8.0
  • WebDriverManager 5.3.0: https://mvnrepository.com/artifact/io.github.bonigarcia/webdrivermanager/5.3.1
  • Mofiki’s Coordinate Finder: https://www.softpedia.com/get/Desktop-Enhancements/Other-Desktop-Enhancements/Mofiki-s-Coordinate-Finder.shtm
  • Apache Maven 3.9.0: https://maven.apache.org/download.cgi

Implementation Steps to Automate Chrome Extension:

  1. Add Calculator extension to the local Chrome browser.
  2. Pack the extension and create a .crx file in File Explorer.
  3. Create a Maven project using IntelliJ IDE.
  4. Add dependencies in POM.xml file and Add .crx file in resources package.
  5. Create Packages and files in the project.
    • 5.1. Creating Features File.
    • 5.2. Creating Steps file.
    • 5.3. Creating Page Object Design Pattern.
    • 5.4. Creating TestContext File.
    • 5.5. Creating BaseStep File.
  6. Conclusion

I intend to use simplified language while articulating the concepts. So, let’s dive into the core of our topic, which is how to Automate Chrome extensions and perform click actions using Selenium.

To do this, we will follow a few rules, which I have depicted below as steps.

Step 1: Add Calculator extension to the local Chrome browser.

In this article, we are going to use the Calculator extension to automate a Chrome extension and perform an action on the extension icon, which lives outside the DOM.

To add the Calculator extension to the local Chrome browser, install it from the Chrome Web Store.

After adding the extension, visit the chrome://extensions/ URL from the address bar and enable Developer mode.

On this page, we can also see the Calculator extension we just added.

Each extension card shows an Extension ID. We have to note down this extension ID; in the next step, we will locate the folder named after it in File Explorer.

In this article, the Extension ID is hcpbdjanfepobbkbnhmalalmfdmikmbe

Congratulations, we have completed our first step of adding the Calculator extension to the local Chrome browser.

Automate Chrome Extension Img-1

Now let’s begin with the next step.

Step 2: Pack the extension and create a .crx file in File Explorer

Before continuing with the second step we will learn what a .crx file extension is.

What is a .crx file extension?

A Chrome extension is packaged in a file with the .crx extension. Such an extension adds functionality to the Google Chrome web browser by allowing third-party code to supplement its basic features.

Now, we will learn how to pack the calculator extension and generate a .crx file extension.

After the Calculator extension is added to the local Chrome browser, Chrome creates a folder in File Explorer named after the extension ID (hcpbdjanfepobbkbnhmalalmfdmikmbe).

Follow the provided path to locate the extension folder –

(to locate AppData we have to enable showing hidden folders)
C:\Users\{UserName}\AppData\Local\Google\Chrome\User Data\Default\Extensions\hcpbdjanfepobbkbnhmalalmfdmikmbe\1.8.2_0

In the Extensions folder, open the folder named after the Extension ID we noted down (hcpbdjanfepobbkbnhmalalmfdmikmbe for the Calculator extension).

Inside it, we can see a version folder for the extension. Open that folder ➜ 1.8.2_0

Now we have to copy the path as mentioned in below image –

Automate Chrome Extension Img-2

We will use this path to pack the extension in next steps.

Now, launch the Chrome browser and Visit chrome://extensions/ in the address bar

Automate Chrome Extension Img-3

Here we can see the pack extension option.

➜ Click on Pack Extension to automate chrome extension

After visiting the page we will be able to see the Pack Extension option as shown in the below image.

Automate Chrome Extension Img-4

Here we have to type or paste the path that we had copied.

➜Add copied path to the Extension root directory

In this step, we have to paste a copied path to the Extension root directory to pack our Extension and then we have to click on the Pack Extension Button.

Automate Chrome Extension Img-5

➜Copy the path of the .crx file

After clicking on the Pack Extension button a pop-up frame will appear. Here, we can see the path of the .crx file where it has been generated in File Explorer. Remember the path of the .crx file and click on the OK button.

Automate Chrome Extension Img-6

➜ Navigate to the .crx file in file explorer

Automate Chrome Extension Img-7

Now let’s navigate to the path of the .crx file as mentioned in the step above . Once we navigate to the path of the .crx file we can see the file has been generated. We have to use this .crx file in our maven project to display it in the selenium web driver and perform actions on it. 

Congratulations!! We have successfully generated a .crx file.

Step 3: Create a Maven project using Intellij IDE. 

Before creating a Maven project. Let’s understand what Maven is.

What is Maven?

Maven is a build automation and project management tool developed by the Apache Software Foundation. It is written in Java and can also build projects written in C#, Ruby, Scala, and other languages. It allows developers to manage builds, dependencies, and documentation using the Project Object Model (POM) and plugins.

Why do we use Maven?

  • Maven is a widely used build and project management tool.
  • It makes the build process very easy (No need to write long scripts).
  • It has a standard directory structure which is followed.
  • It follows Convention over Configuration.
  • It has a remote maven repository with all the dependencies in one place that can be easily downloaded.
  • It can be used with other programming languages too, not just Java.

Hope, this now gives a clear view of Maven. Now let’s create a new Maven project using Intellij Idea IDE.

Open your IntelliJ IDE and go to File ➜ New ➜ Project as shown in the below image.

Automate Chrome Extension Img-8

A new project pop-up will be displayed on the screen, and we must enter the project’s details here.

Automate Chrome Extension Img-9

Details required to create the Maven project are:

  1. Name: Provide a suitable name as per your requirement.
  2. Location: Choose the location where you want to store your project.
  3. Language: Choose the programming language as per your requirement.
  4. Build System: Here you have to choose Maven.
  5. JDK: Choose the JDK you want to use. (Note: Maven uses a set of identifiers, also called coordinates, to uniquely identify a project and specify how the project artifact should be packaged.)
  6. GroupId: a unique base name of the company or group that created the project
  7. ArtifactId: a unique name for the project.

Simply, click on the Create button and the Maven project will be created.

After successfully creating the project we can see the structure of the Maven project. Some default files have been created as given in the image below.

Automate Chrome Extension Img-10

Yes !! We have successfully created our Maven project. Let’s move ahead.

Step 4: Add dependencies in POM.xml file and Add .crx file in the resources package.

We will include the Maven dependencies in our project using IntelliJ IDEA. These dependencies need to be declared in the pom.xml file for the project to build.

Below are the dependencies that we need to add to the pom.xml file.

  • selenium-java: Selenium WebDriver library for Java language binding
  • cucumber-java: Cucumber JVM library for Java language binding.
  • webdrivermanager: library to automatically manage and set up all the drivers of all browsers which are in test scope.

After adding the dependencies to the pom.xml file, we have to add the .crx file (the file we generated in Step 2) to the resources directory.

To add the .crx file to the resources directory, copy the file from the file explorer and paste it into the resources directory. We can also rename the .crx file as we want. 

For renaming the file, right-click on the file ➜ select the refactor option ➜ then click on the rename option.

Chrome Extension Img-11

As shown in the above image, the rename pop-up will flash on the screen. Here we can give the file name as desired.

Here in this project, I am renaming the .crx file to CalculatorExtension.crx.

Step 5: Create Packages and files in the project to automate chrome extension.

After adding dependencies to the pom.xml file. We have to create a BDD framework that includes packages and files. Before moving ahead, let’s first get an overview of the Cucumber BDD framework.

What is the Cucumber Behavior Driven Development (BDD)Framework?

Cucumber is a Behavior Driven Development (BDD) framework for writing test cases. It offers a way to write tests that anybody can understand, regardless of their technical knowledge. In BDD, users (business analysts and product owners) first write scenarios or acceptance tests that describe the behavior of the system from the customer’s perspective. These scenarios and acceptance tests are then reviewed and approved by the product owners. Cucumber was originally written in Ruby; in this project we use Cucumber-JVM, which supports Java.

To manage our code files for the project we need to create packages that are as follows: 

  • Features Package – All feature files are contained in this package.
  • Steps Package – All step definition files are included in this package.
  • Pages Package – All page files are included in this package.
  • Utilities Package – All configuration files are included in this package.

Now, we have to create a feature file,

5.1: Creating Features File: 

A feature file contains a high-level description of a test scenario in simple language, written in Gherkin. Gherkin is a plain-English text language.

Cucumber Feature File consists of following components –

  • Feature: We use “Feature” to describe the current test script that needs execution.
  • Scenario: We use Scenario to describe the steps and expected outcome for a particular test case.
  • Given: We use “Given” to specify the context of the test to be executed. We can parameterize “Given” steps by using data tables.
  • When: “When” indicates the test action that we have to perform.
  • Then: We represent the expected outcome of the test with “Then”

We need to add the below code in the feature file for our project.

According to the above feature file, we are adding two numbers. We use the GIVEN step to open the Chrome WebDriver and add the calculator extension. With the WHEN and AND steps, we execute click actions on the calculator extension to add two numbers from the calculator. In the final step, we use the THEN step to verify the result (the addition of the two numbers).

5.2: Creating Steps file.

Steps Definition to automate chrome extension-

A step definition maps the test case steps in the feature file (introduced by Given/When/Then) to code. It executes the steps on the Application Under Test and checks the outcomes against expected results. For a step definition to execute, it must match a step written in the feature file.

Here in the step file, we map the steps from the feature file. In simple words, we make a connection between the steps of the feature file and the step file. While mapping the steps, we have to take care of the format used in the step definitions. We need to use the format below to map the steps for the feature we created in the features file.

5.3: Creating Page Object Design Pattern

So far we have created a feature file and a step file. Now, in this step, we will create a page file. A page file contains the logic of the test cases. Generally, in web automation, page files contain the locators and the actions to perform on web elements, but in this framework we are not using locators, because the extension icon is not part of the DOM (Document Object Model). So we will only create the methods, and for those methods we will use the Robot class with X and Y coordinates.

Here in this code, we perform the activities of hovering with mouse actions (move, press, release), clicking on the calculator extension, clicking on two numbers on the calculator, clicking the calculator’s “+” addition operator, and obtaining the result of adding those two numbers.

What is the Robot Class in Selenium?

The Robot class is part of the Java AWT API and generates input events in native systems for test automation, self-running demos, and other applications where users need control over the mouse and keyboard. Selenium WebDriver cannot handle OS-level pop-ups, external applications, or extension UIs on its own, so the Robot class, available since Java 1.3, is used to handle them.

The Robot class helps with activities like performing a task within a specified time, handling mouse and keyboard functions, and more.

While using the Robot class, we need the x and y coordinates of the point on the screen where we will perform the actions, i.e., hovering the cursor and then clicking. To find the coordinates we use Mofiki’s Coordinate Finder.

What is Mofiki’s Coordinate Finder?

Mofiki’s Coordinate Finder is a free application that reports the current x and y coordinates of the cursor as you hover the mouse anywhere on the screen.

Steps to download and use Mofiki’s Coordinate Finder:

To find the x and y coordinates, move the cursor to the desired point and press the space bar; the application displays the coordinates.

Chrome Extension Img-12

5.4: Creating TestContext File.

Now, in the Utilities package we have to create a TestContext file in which we declare a WebDriver. Declaring the WebDriver as public allows it to be initialized in every class file that inherits the TestContext class; the step file and the page file both inherit it. We have also declared the Robot class here.

5.5: Creating BaseStep File:

This step is very important because we will create an environment file (i.e., a hooks file), and we will use ChromeOptions to add the Calculator extension.

Before moving ahead, let’s understand Before and After hooks and ChromeOptions.

What are Before and After Hooks?

Hooks allow us to better manage the code workflow and help us reduce code redundancy. We can say that it is an unseen step, which allows us to perform our scenarios or tests.

@Before – Before hooks run before the first step of each scenario.

@After – Conversely After Hooks run after the last step of each scenario even when steps fail, are undefined, pending, or skipped.

What are Chrome Options?

For managing different Chrome driver properties, Selenium WebDriver provides the ChromeOptions class. For modifying Chrome driver sessions, the ChromeOptions class is typically combined with Desired Capabilities. It enables you to carry out numerous tasks, such as launching Chrome in maximized mode, turning off installed extensions, turning off pop-ups, and so on.

Now we have to create the Before and After hooks. Each hook contains a void method, as shown in the code below.

In the Before hook, we initialize the WebDriver and add a few lines of code to load the extension into it using ChromeOptions. Then, in the After hook, we close the WebDriver.

Now, we have to create a BaseStep class that holds the driver configuration and the hooks.


Below is the link to the GitHub repository where I have uploaded this project. The repository also contains a README.md file that explains the framework and the commands we have used in this project.

https://github.com/spurqlabs/ChromeExtensionAutomation.git

Conclusion:

Adding an extension to a WebDriver session and performing actions on the extension icon is not a trivial task. In this article, we found a solution for loading a Chrome extension into Selenium WebDriver and performing a click action on the extension icon, while learning how to automate a Chrome extension using Selenium WebDriver.

The software testing landscape has evolved towards automation to meet the demands for quick and frequent deployments, adapting efficiently to constant updates and tight deadlines in a dynamic development environment.

How to trigger a workflow from another workflow using GitHub Action

GitHub Actions has revolutionized the way developers and testers automate their workflows. With Actions, developers can easily define and customize their CI/CD processes, enhancing productivity and code quality. One of the powerful features of GitHub Actions is the ability to trigger one workflow from another. In this article, we will delve into the intricacies of GitHub Actions and explore how to trigger workflows from other workflows.

GitHub Actions is a powerful automation framework integrated into GitHub. It allows developers and testers to define custom workflows composed of one or more jobs, each consisting of various steps. These workflows can be triggered based on events such as push and pull requests, commits, or scheduled actions. The benefits of using GitHub Actions include faster development cycles, improved collaboration, and streamlined release processes.

Before we delve into triggering workflows, let’s define what a workflow is in GitHub Actions. A workflow is a configurable automated process that runs on GitHub repositories. It consists of one or more jobs, each defining a set of steps. These steps can perform tasks such as building, testing, and deploying code. 

It is important to understand workflow dependencies to trigger a workflow from another workflow. Workflow dependencies refer to the relationships between different workflows, where one workflow triggers the execution of another workflow. By leveraging workflow dependencies, developers and testers can create a seamless and interconnected automation pipeline. 

In complex development scenarios, there is often a need to trigger workflows based on the completion of other workflows. This can be particularly useful when different parts of the development process depend on each other and when different teams collaborate on a project. By triggering workflows from related workflows, developers and testers can automate the execution of dependent tasks, ensuring a smoother development workflow. 

The advantages of workflow interdependency are numerous. Firstly, it allows for a modular and reusable approach to workflow automation. Instead of duplicating steps across different workflows, developers, and testers can encapsulate common operations in one workflow and trigger it from others. This promotes code reusability, reduces maintenance efforts, and enhances overall development efficiency. Moreover, workflow interdependency enables better collaboration between teams working on different aspects of a project, ensuring a seamless integration between their workflows. 

To follow along, you will need:

  • A GitHub repository with a workflow defined in it (repository_01)
  • Another GitHub repository (repository_02) with a workflow that should be triggered after the repository_01 workflow completes
  • A GitHub personal access token

Now that we have everything required, let’s get it done. First, we will look at the GitHub personal access token.

Personal access tokens are an alternative to using passwords to authenticate to GitHub when using the GitHub API or the command line. Personal access tokens are intended to access GitHub resources on your behalf.

To learn more about GitHub personal access tokens visit the official website of GitHub https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens

1: First, Access your GitHub account by logging in.

2: Navigate to your profile, click on “Settings,” and proceed to “Developer settings.”

3: Click on “Personal access tokens” and then select “Tokens (classic).”

4: Choose “Generate new token,” then select “Generate new token (classic).”

5: Here, we include a note for the personal access token (PAT); this is optional. Choose the expiration date for your PAT, select the scopes, and finally click on “Generate token.” Copy the token and paste it into a notepad.

(Remember the selected scope will decide the permissions and authorization to access another repository and workflow)

Now we need to add the generated PAT to repository_01 as a secret. To do this, follow the steps below.

  • Navigate to your repository and click on Settings.
  • Go to “Secrets and variables” and select “Actions.”

Click on “New repository secret,” enter PAT_TOKEN as the name, and paste the copied personal access token as the value. Then click “Add secret.”

To create a workflow, head over to the Actions tab and click on “New workflow.” Then select “Set up workflow yourself.” Now customize your workflow and add the step below to trigger the repository_02 workflow.

Let’s understand the trigger-workflow02 stage. The secret we added is used here to provide the permissions and authorization needed to trigger workflow_02 of repository_02. Also, replace the username with your GitHub username and repository_02 with the name of your other repository.
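
Conceptually, that trigger step makes a repository_dispatch call against the GitHub REST API. Below is a minimal Node.js sketch of the same underlying call, not the workflow step itself; USERNAME, repository_02, and the trigger-workflow02 event type are placeholders to replace with your own values, and the PAT_TOKEN secret supplies the authorization:

const https = require('https')

// Body of the dispatch: the event_type must match what the second workflow listens for.
const body = JSON.stringify({ event_type: 'trigger-workflow02' })

const request = https.request({
  hostname: 'api.github.com',
  path: '/repos/USERNAME/repository_02/dispatches', // placeholder owner/repo
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.PAT_TOKEN}`, // the secret added above
    'Accept': 'application/vnd.github+json',
    'User-Agent': 'workflow01-trigger',
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body),
  },
}, (response) => {
  // GitHub returns 204 No Content when the dispatch event is accepted.
  console.log(`GitHub responded with status ${response.statusCode}`)
})

request.on('error', (error) => console.error('Dispatch failed:', error))
request.write(body)
request.end()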

Now that our first workflow is ready, let’s create the second workflow in repository_02. Follow the same steps described above to create it.

Now let’s understand what to consider here. First, the triggering event is set to repository_dispatch, which means this workflow is triggered when the other repository’s workflow completes. To specify which event to listen for, we set types to trigger-workflow02, the event type dispatched by the stage defined in workflow01.

We are done. This is how we can trigger workflow02 of repository_02 when the execution of workflow01 of repository_01 completes with a passing status. Below are the output screenshots; give them a check.

Everything we have seen so far applies to a personal GitHub account. If we want to implement this concept for an organization’s GitHub account, we need to introduce a small change in workflow01 of repository_01.

Let’s look at the trigger-workflow02 stage again. The secret we added is used here to provide the permissions and authorization to trigger workflow_02 of repository_02; also replace the organization with your organization’s GitHub name and repository_02 with the name of your other repository.

In this blog, we have explored the powerful feature of trigger workflow from another workflow using GitHub Actions. By understanding workflow dependencies, leveraging workflow events and triggers, implementing remote triggers, and building scalable workflow chains, developers can enhance their CI/CD processes and workflow automation. To summarize, triggering workflows from another workflow allows for increased reusability, collaboration, and customization of automation processes. By embracing these features, developers can optimize their development workflows and empower their teams to achieve greater productivity and efficiency.

Test Automation using Cucumber, JavaScript, and Webdriver.IO

Introduction:

In this blog, we create a WebdriverIO framework that helps run test cases against web applications in different browsers. WebdriverIO is a popular open-source test automation framework for Node.js. Creating a test automation framework using Cucumber, JavaScript, and WebdriverIO offers several benefits that can streamline your testing process and improve the efficiency and maintainability of your automated tests. Here’s why you might want to consider using this combination:

1. BDD Approach with Cucumber:

Cucumber enables Behavior-Driven Development (BDD), allowing you to write test scenarios in a human-readable format.

2. JavaScript Language:

JavaScript is a widely used programming language for web development, making it accessible to many developers.

3. WebdriverIO as Test Automation Framework:

WebdriverIO is a popular JavaScript-based WebDriver framework that simplifies interactions with browsers and elements on web pages. It also provides a variety of built-in commands for browser automation, making test script development more efficient. WebdriverIO supports multiple testing frameworks, including Mocha and Jasmine, which can be integrated with Cucumber for BDD.

4. Cross-Browser Testing:

With WebdriverIO, you can quickly run your tests across different browsers and browser versions. This ensures that your application functions correctly and consistently across various browser environments.

5. Reusability and Maintainability:

The combination of Cucumber and WebdriverIO promotes the creation of reusable step definitions. Moreover, this modularity makes it easier to maintain test scripts and reduces duplication of code.

6.  Parallel Execution:

WebdriverIO supports parallel test execution, which can significantly reduce the overall test execution time.

7. Community and Support:

Both Cucumber and WebdriverIO have active communities, which means you can find a wealth of resources, tutorials, and plugins to enhance your automation efforts.

Let’s see how WebdriverIO works and walk through the process:

Pre-requisites:

1. Make sure you have Node.js installed on your system. You can download and install it from the official website: https://nodejs.org/en/

2. Open your terminal or command prompt and create a new directory for your WebdriverIO project, or create a folder wherever you want and open it in VSCode.

3.  VSCode

Initialize a new WebdriverIO project by running the following command: “npm init wdio .”. This downloads the WebdriverIO starter packages and begins their installation.

Once you execute that command you will get the following message:

“Need to install the following packages:

  create-wdio@8.2.3

Ok to proceed? (y)”

If you proceed by pressing “y”, you will receive a list of instructions on how to generate the framework. You can follow these instructions to create the desired WebdriverIO framework.

Once you have completed the framework generation process, it will create a package.json file that will serve as a record of your project’s dependencies. This file will help you manage and keep track of the dependencies required for your project.

{
  "dependencies": {
    "@wdio/selenium-standalone-service": "^8.3.2"
  },
  "devDependencies": {
    "@wdio/cli": "^8.3.10",
    "@wdio/cucumber-framework": "^8.3.0",
    "@wdio/local-runner": "^8.3.10",
    "@wdio/spec-reporter": "^8.3.0",
    "chromedriver": "^110.0.0",
    "wdio-chromedriver-service": "^8.1.1"
  },
  "scripts": {
    "wdio": "wdio run ./wdio.conf.js"
  }
}

Install WebdriverIO and its CLI tool by running the following command:

npm install webdriverio @wdio/cli --save-dev

This will install WebdriverIO and its CLI tool as dev dependencies and save them in your package.json file.

Package-json:

package.json is a file used in Node.js projects that contains metadata and configuration information for the project, as well as the list of dependencies and devDependencies required for the project to run. It is located in the root directory of the project and is used by package managers such as npm (Node Package Manager) to install and manage dependencies.

Wdio.conf.js: 

This file contains the configuration settings that define how the test automation framework runs and interacts with the web application being tested. It has Capabilities, Specs, Framework, Reporter, Hooks, Services, etc.

exports.config = {
    runner: 'local',
    specs: [
      './features/**/*.feature'
    ],
    // Patterns to exclude.
    exclude: [
        // 'path/to/excluded/files'
    ],
    maxInstances: 10,
    capabilities: [{
        maxInstances: 5,
        browserName: 'chrome',
        acceptInsecureCerts: true
    }],
    logLevel: 'info',
    bail: 0,
    baseUrl: 'https://www.calculator.net/',
    seleniumLoginUrl: 'https://demo.guru99.com/test/login.html',
    waitforTimeout: 10000,
    connectionRetryTimeout: 120000,
    connectionRetryCount: 3,
    services: ['chromedriver'],
    framework: 'cucumber',
    reporters: ['spec'],
    seleniumAddress: 'http://localhost:4444/wd/hub',
    cucumberOpts: {
        require: ['./features/step-definitions/*.js'],
        backtrace: false,
        requireModule: [],
        dryRun: false,
        failFast: false,
        snippets: true,
        source: true,
        strict: false,
        tagExpression: '',
        timeout: 60000,
        ignoreUndefinedDefinitions: false
    },
}

So here we have selected the “cucumber” framework which will help create test cases in BDD format. Before we go into framework details, you all should know that all WebdriverIO commands are asynchronous and need to be properly handled using async/await.
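
For example, here is a minimal sketch of an async step definition; the step text is illustrative, and the URL matches the baseUrl used in the configuration above:

const { When } = require('@wdio/cucumber-framework')

// Every WebdriverIO command returns a promise, so each command is awaited.
When(/^User opens the calculator page$/, async () => {
    await browser.url('https://www.calculator.net/')
    const title = await browser.getTitle()
    console.log('Page title: ' + title)
})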

The Page Object Model (POM) is a popular design pattern used in software testing to represent web pages as objects and simplify the process of automated testing. The POM structure usually includes a “pageobjects” folder, which contains classes or files that represent individual pages on a website or application. These page object classes or files encapsulate the elements and actions related to a specific page, making writing and maintaining automated tests easier. By using the POM, testers can create a more organized and maintainable framework for their test automation efforts.

1)    Features

2)    Steps

3)    Pages

Features:


This part of the project contains the pageobjects folder, the step-definitions folder, and the feature files. The “features” folder is typically used in the context of behavior-driven development (BDD) frameworks such as Cucumber, which uses a natural language syntax to describe test scenarios. The “features” folder houses the files that define the scenarios or features to be tested.

To create a feature file in VSCode for implementing behavior-driven development (BDD) scenarios using Cucumber, you can follow these steps:

  • Open VSCode and navigate to the folder where you want to create the feature file.

  • Right-click on the folder, go to “New File”, and click on it to create a new file.

  • Give the file a name with the “.feature” extension, for example, “login.feature”.

Feature: Checking calculator functionality 

Scenario: Verify addition on calculator
Given User is on the calculator page
When User taps on "4"
And User taps on operator
And User taps on "5"
Then User verifies the answer as "9"

Scenario Outline: Verify user can perform multiple operation
Given User is on the calculator page
When User clicks on num1 "<number1>"
And User clicks on the "<operator>"
And User clicks on num2 "<number2>"
Then User verifies "<answer>"
Examples:
    | number1 |number2 | operator | answer |
    | 4       | 5      | +        | 9      |
    | 5       | 3      | -        | 2      |
    | 4       | 5      | *        | 20     |
    | 6       | 2      | /        | 3      |

In the above feature file, I have shown one simple scenario where I have performed a simple addition operation, and in the next scenario, I have created a scenario outline where different operations are performed, including addition, subtraction, multiplication, and division.

Step-definitions:

The “step-definitions” folder contains files or classes that define the behavior or actions associated with each step in the BDD scenario.

In WebDriverIO, you can generate step definitions for the given scenarios in a feature file using a tool called “cucumber-boilerplate”.

Following are the steps to generate steps in WebDriverIO using cucumber:

Install the “cucumber-boilerplate” package as a development dependency by running the following command in your project directory: “npm install cucumber-boilerplate --save-dev”

Once the installation is complete, you can generate the step definitions by running the following command: “npx cucumber-boilerplate generate”

This will prompt you to enter the path to the feature file for which you want to generate the steps.

Provide the path to the feature file (e.g., “./features/login.feature”) and press Enter.

The tool will generate the step definitions in JavaScript format, which you can then copy and paste into your WebDriverIO project’s step definition files.

const { Given, When, Then } = require('@wdio/cucumber-framework');
const addPage = require('../pageobjects/AddPage');
Given(/^User is on the calculator page$/, async () => {
    await addPage.visitWeb()
})

When(/^User taps on "(\d+)"$/, async (num) => {
    await addPage.tapNumber(num)
})

When(/^User taps on operator$/, async () => {
    await addPage.tapOperator()
})

Then(/^User verifies the answer as "(\d+)"$/, async (ans) => {
    await addPage.getAns(ans)
})

When(/^User clicks on num1 "([^"]*)"$/, async (num1) => {
    await addPage.clickNum1(num1)
})

When(/^User clicks on num2 "([^"]*)"$/, async (num2) => {
    await addPage.clickNum2(num2)
})

When(/^User clicks on the "([^"]*)"$/, async(opt) =>{
   await addPage.clickOperator(opt)
})

Then(/^User verifies "([^"]*)"$/, async(ans) =>{
   await addPage.verifyAnswer(ans);
})

In the above code, you can see we have integrated steps for each line of the feature file, so we can run code in BDD format.

Page objects:

These are classes that represent a web page, containing methods and properties that interact with the page’s elements, such as buttons, links, and input fields.

const { config } = require("../../wdio.conf");
const assert = require('assert');
const addPageLoc = require("../../Locators/AddPageLocators")
const scr = require('../pageobjects/ScreenshotPage')
class AddPage{
    constructor(){
        this.plusOpt = addPageLoc.plusOpt;
        this.answer = addPageLoc.answer;
    }
    // Since we parameterized the value for the locator, we kept it as is.
    getNumber(num){
        return $('[onclick="r('+num+')"]')
    }
    async tapNumber(num){
        await this.getNumber(num).click();
        scr.takeScreenshot('tapping_number');
    }
    async tapOperator(){
        await this.plusOpt.click()
        await browser.pause(3000);
        scr.takeScreenshot('tapping_operator');
    }
    async getAns(ans){
        let txt = await this.answer.getText()
        console.log("Answer of addition: " + txt);
        assert.equal(txt, parseInt(ans));
        scr.takeScreenshot('gettingTextOfElement');
    }

    async visitWeb(){
        await browser.url(config.baseUrl)
        scr.takeScreenshot('webUrl');
    }

    async clickNum1(num1){
        await this.getNumber(num1).click();
        scr.takeScreenshot('clicking_number1');
    }
    async clickNum2(num2){
        await this.getNumber(num2).click();
        scr.takeScreenshot('clicking_number2');
    }
    async clickOperator(opt){
        await $('[onclick="r(\''+opt+'\')"]').click();
        // await this.operator.replace('XXX', opt).click();
        scr.takeScreenshot('clicking_operator');
    }
    async verifyAnswer(ans){
        let result = await this.answer.getText()
        console.log("Retrieving text value from element: " +result)
        assert.equal(result,parseInt(ans));
        scr.takeScreenshot('verifyingResult');
    }
}
module.exports = new AddPage();

The browser.pause() method was used to pause it for the specified amount of time. It takes time in milliseconds.

We also rely on WebdriverIO element methods such as click() and setValue() to perform operations on web elements; setValue() is used to send values to input elements.
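
As a minimal sketch (inside an async page-object method; the #email selector is hypothetical and not part of this calculator example):

// setValue() types a value into an input; browser.pause() waits for the given number of milliseconds.
await $('#email').setValue('user@example.com')
await browser.pause(2000)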

Locators:
              This folder includes all the locators required to operate web elements

module.exports = {
    plusOpt: $('[onclick="r(\'+\')"]'),
    answer: $('[id="sciOutPut"]'),
    operator: $('[onclick="r(\'XXX\')"]')
}

In the above code, we list all the locators in one file and then import them into the page classes, keeping the page code clean.

Now that we have completed implementing the Page Object Model (POM) design pattern, we can consider incorporating additional functionalities to further enhance the framework’s suitability and reliability.

Screenshots:

To add screenshot functionality to your code, you need to incorporate the following code into your implementation:

class ScreenshotPage{
    takeScreenshot(filename) {
        const timestamp = new Date().getTime();
        const filepath = `./screenshots/${filename}_${timestamp}.png`;
        browser.saveScreenshot(filepath);
      }
}

module.exports = new ScreenshotPage();

Import this code into the page where you need to capture a screenshot by calling takeScreenshot(‘nameOfScreenshot’).

Screenshot
Calcuator.net webdriverio

The above image displays the screenshots it took. The sequence of screenshots offers an overview of the test case, illustrating actions taken at each step.

Cross-Browser Testing:

Cross-browser testing is a practice in software testing that involves testing a web application or website across multiple web browsers and browser versions to ensure its consistent functionality and appearance across different browser environments.

Capabilities:

In the wdio.conf.js file, make changes similar to what I have done in the ‘capabilities’ section. I have attached the following code for your reference. You can use it for assistance and make changes accordingly.

capabilities: [
    {
        browserName: 'chrome',
        acceptInsecureCerts: true
      },
      {
        browserName: 'firefox',
        acceptInsecureCerts: true
     }
],

Services:

In the ‘services’ section of the wdio.conf.js file, make changes similar to what I did in the following code snippets. You can make changes accordingly and run your test cases smoothly.

services: ['selenium-standalone'],
    seleniumInstallArgs: {
        drivers: {
          chrome: { version: 'latest' },
          firefox: { version: 'latest' },
          chromiumedge: { version: 'latest' },
        },
    },
    seleniumArgs: {
        drivers: {
          chrome: { version: 'latest' },
          firefox: { version: 'latest' },
          chromiumedge: { version: 'latest' },
        },
    },

The above code will assist you in implementing different browsers for testing, and you can also add others like Microsoft Edge, Safari, etc.

Allure_Report:

Allure Reports are often preferred over Cucumber Reports due to their visually appealing visualizations, comprehensive insights, step-by-step details, time tracking, integration capabilities, and historical trend analysis.

Once you have completed the automation process, testers need to generate reports to track the status of test cases, including pass or fail results and the exact location of failures. You can use the Allure report functionality for this purpose in your WebdriverIO project. Follow these steps to include the Allure report:

1.     Install the Allure Reporter plugin for WebdriverIO using the following command: “npm install @wdio/allure-reporter --save-dev”

2.     Add the Allure Reporter plugin to your wdio.conf.js file as a reporter. Following is an example configuration:

exports.config = {
    reporters: ['spec', ['allure', {
        outputDir: './allure-results',
        disableWebdriverStepsReporting: true,
        disableWebdriverScreenshotsReporting: false,
    }]],
}

In this example, we’re using the spec reporter for console output and the allure reporter for generating the allure report. The outputDir option specifies the directory where it will generate the report files.

3.     Add the Allure command line tool to your project by running the following command: “npm install allure-commandline --save-dev”

4.     After running your tests, generate the Allure report by running the following command: “npx allure generate allure-results --clean”

Allure Report webdriverio

Project Folder Structure: 

As we have completed the design of the folder structure for the framework, you can now see below the image of what the framework’s folder structure looks like.

allure Report webdriverio

The image above shows the integration of different folders in the WebdriverIO framework. I have provided explanations for each folder and its contents.

To run test cases on a browser, you can use the following commands:

  • npx wdio wdio.conf.js

  • npx wdio run wdio.conf.js --spec features\Addition.feature   // To run a specific feature file

  • npx wdio wdio.conf.js --spec ./path/to/your/test.js --browser chrome   // To run on a specific browser

Note: Please make sure to replace the path and file names with the appropriate ones for your specific setup

Conclusion:

WebdriverIO is a comprehensive and feature-rich framework that empowers developers and testers to create reliable and efficient automated tests for web applications. Its vast ecosystem of plugins, extensive documentation, and active community support make it a top choice for automation testing in the modern web development landscape. By adopting WebdriverIO, organizations can significantly improve their web application testing efforts and deliver high-quality software to their end users.

Read more Blogs here.

How to execute Playwright-Js tests on BrowserStack


Introduction:

At times, certain project specifications demand comprehensive testing of a web application across a diverse range of web browsers and their varying versions. Maintaining consistent functionality, visual appearance, and user engagement is essential.

To execute this type of cross-browser testing efficiently, there are two primary approaches:

  • One involves manual testing, where the website is examined across multiple browsers and devices. 
  • The other approach entails leveraging specialized tools and services that replicate distinct browser environments. 

Particularly, BrowserStack stands out as a tool that has garnered considerable attention in this field, thanks to its intuitive user interface.

Executing test cases on BrowserStack is crucial for QA teams to ensure the compatibility, functionality, and performance of websites and applications across a wide range of real browsers and devices. 

It helps identify bugs, responsive design issues, and user experience problems, while also supporting automated testing and real environment testing. 

BrowserStack eliminates the need for local testing infrastructure, reducing costs and enhancing overall testing efficiency.                                                   

So first let’s talk about the Basic Framework where we can automate test cases using the BDD approach to execute tests on a browser on the local machine. 

Instead of explaining the base framework from scratch here again, you can explore my earlier blog, where I explain how to create a BDD automation framework using Cucumber in JavaScript and Playwright.

That blog gives step-by-step guidance on how to create feature files, step files, page files, and various other utilities that make the framework more reusable and robust. Here is the link: https://shorturl.at/bjuvU

You can also use the below link to access the BDD automation framework using Cucumber in JavaScript: https://github.com/spurqlabs/Playwright_JS_BDD_Framework

Pre-requisite:

To get started with the BrowserStack implementation, first store your BrowserStack username and access key in the .env file.
To find them, navigate to your BrowserStack account, click the Access Key dropdown on the dashboard, and you will be able to see your username and access key.

Copy your username and access key and store them in the .env file as shown below. To execute tests on BrowserStack, set the BrowserStack variable to ‘true’.

BrowserStack=true
BROWSERSTACK_USERNAME='browserstack_username'
BROWSERSTACK_ACCESS_KEY='browserstack_accessKey'

Now, to set the BrowserStack configurations, create a file named “BrowserStack.js”. As you can see, I have created the BrowserStack.js file in the Utility folder.

Playwright Framework


Here are the essential BrowserStack configurations to integrate into this file.

const cp = require('child_process')
const playwrightClientVersion = cp
    .execSync('npx playwright --version')
    .toString()
    .trim()
    .split(' ')[1]
function caps() {
    return {
        'browser': 'playwright-chromium',
        'os': 'Windows',
        'os_version': '10',
        'browserstack.username': process.env.BROWSERSTACK_USERNAME,
        'browserstack.accessKey': process.env.BROWSERSTACK_ACCESS_KEY,
        'client.playwrightVersion': playwrightClientVersion
    }
}
module.exports = { caps }

.execSync:

The playwrightClientVersion variable is declared and assigned the output of the shell command executed using execSync from the child_process module.

.execSync('npx playwright --version'):

The command executed is npx playwright --version, which retrieves the version of the Playwright framework installed on the system. The output is converted to a string, trimmed to remove leading or trailing whitespace, and then split into an array on spaces. The second element of the array (split(' ')[1]) is the Playwright version.

caps():

This function is responsible for generating a configuration object used for browser testing with the BrowserStack platform and Playwright framework. It returns an object that contains configuration options for browser testing like browser, os, os_version, etc.

process.env.BROWSERSTACK_USERNAME, process.env.BROWSERSTACK_ACCESS_KEY:

These are the variables from the .env file. If you are not using a .env file you can directly pass the BrowserStack username and access key.

As we have all the configurations ready, we will set up the test environment to execute tests on BrowserStack. Use the file where you have your Before and After methods. In the Playwright-JS framework these methods live in the ‘TestHooks.js’ file, so we will update that file as below.

const { Before, AfterAll } = require('@cucumber/cucumber')
const page = require('@playwright/test')
const path = require('path');
const { chromium } = require('playwright')
require('dotenv').config({
  path: path.join(__dirname, '../.env'),
});
Before(async (scenario) => {
  if (process.env.BrowserStack === 'true') {
    try {
      const capabilitiesBr = require('./BrowserStack').caps(scenario)
      global.browser = await chromium.connect({
        wsEndpoint: `wss://cdp.browserstack.com/playwright?caps=${encodeURIComponent(JSON.stringify(capabilitiesBr))}`,
      })
      viewport = {
        width: 1920,
        height: 900,
      }
    } catch (e) {
      console.error('Error connecting to BrowserStack', e)
    }
  } else {
    global.browser = await page.chromium.launch({ headless: false })
  }
  const context = await browser.newContext()
  global.page = await context.newPage()
})
AfterAll(async () => {
  await global.browser.close()
})

The above code generates capabilities for BrowserStack using the caps() function from the ‘BrowserStack’ module and constructs the WebSocket endpoint URL for BrowserStack with the generated capabilities.

With that, the BrowserStack implementation in our framework is complete, and you can execute your tests on BrowserStack.

To execute tests on BrowserStack, make sure you have “BrowserStack=true” in the .env file.

Then you can execute your tests using the command ‘npm run test’ in your terminal.

After running the test cases on BrowserStack, you’ll find the report below. Access comprehensive execution videos, and test logs for both successful and unsuccessful cases, along with execution duration.

Conclusion:

Integrating BrowserStack with Playwright JavaScript tests gives access to a wide range of real browsers and devices, enabling comprehensive cross-browser and cross-platform testing. The scalability and cost-effectiveness of BrowserStack eliminate the need for maintaining an extensive device lab. Ultimately, this integration enhances test coverage, improves software quality, and ensures a seamless user experience across various browsers and platforms.

Read more blogs here