Desktop Automation Made Easy: A Winium, Java, and BDD Guide


Desktop application test automation can be a tedious task, as it is hard to locate elements and interact with them. There are plenty of tools available for automating desktop applications, and Winium is one of them: a Selenium-based tool. For those who don’t know Selenium, it is a web application test automation tool that supports almost all major programming languages. (Wish to learn more about Selenium? Check out the link here.) If you are familiar with Selenium, it will be easy to understand how Winium works, as most of the methods are common; and if you are not familiar with Selenium, no worries, I have got you covered.

Coming back to our topic: in this blog we will see how to create a robust test automation framework for desktop applications using Winium as the desktop automation tool, Java as the programming language, Maven as the dependency management and build tool, and Cucumber as the BDD (Behavior-Driven Development) tool. We are going to build the framework from scratch, so even if you have never created a framework before, no worries.

Before we start building the framework, let’s complete the environment setup. For this, we will have to install a few tools. Below are the URLs of the tools we are using, in case you want to learn more about them on their official web pages.

  • As we are using Java, it is a must to have the JDK installed on the system.
  • Download a JDK version greater than 8; for instance, I have Java 11.0.16.1 set up on my system.
  • Use this link to download the Java SE Development Kit – https://www.oracle.com/in/java/technologies/javase/jdk11-archive-downloads.html
  • Once the download is complete, the next step is to set up the path (JAVA_HOME and Path) in the system environment variables.
  • Once you are done with the above steps, you can verify the installation from the command prompt, as shown in the quick check after this list.
  • After the Java installation and setup, the next step is to install and set up Maven.
  • To download Maven, visit the official web page – https://maven.apache.org/download.cgi
  • Again, after installation we have to add Maven (MAVEN_HOME and Path) to the system environment variables.
  • We need an inspector to inspect desktop application elements, just as we use different approaches and tools to inspect and locate web page elements.
  • Here we will use the Inspect.exe tool to inspect and locate desktop application elements.
  • Use this link to download the tool – https://github.com/blackrosezy/gui-inspect-tool/blob/master/Inspect.exe
  • Inspect.exe is not the only option; there are other desktop element inspection tools as well.
  • Once you are done with the above steps, we can start building the automation framework.
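Before moving on, if Java and Maven are configured correctly, a quick check from the command prompt should succeed (the exact version strings will differ on your machine):

java -version
mvn -version

The first command should report the installed JDK version (11.0.16.1 in my case) and the second should report the Maven version along with the Java home it detected.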

BDD (Behavior-Driven Development) is a software development approach that focuses on collaboration among stakeholders, including developers, QA engineers, and business analysts. In BDD, we use natural-language specifications to describe software behaviour from the end user’s perspective. This helps create a shared understanding of requirements and promotes effective communication throughout the development lifecycle. Let’s look at the main building blocks in detail.

  • Feature files are the main component of the Cucumber BDD framework; you could even call them the heart of it.
  • These files are written in the Gherkin language and describe the high-level functionality of the application.
  • Cucumber is a widely used BDD tool because it lets us write test cases (scenarios) in plain text using the Gherkin syntax.
  • Gherkin uses keywords such as Given, When, And, and Then to structure scenarios, making them easy to read and understand for both technical and non-technical stakeholders.
  • An illustrative scenario and its step definitions are sketched right after this list.
  • Step definition files contain the code that maps each step in the feature file to automation code.
  • These files are written in the programming language used by the framework, in this case Java.
  • The step definitions are responsible for interacting with the elements of the application and performing actions on them, such as clicking and entering text.
  • They also contain assertions to check whether the expected behaviour is observed in the application.
  • In Cucumber, hooks are methods annotated with @Before and @After that run before and after each scenario.
  • These hooks are used to set up and tear down the test environment so that scenarios run consistently.
  • For example, the application can be initialized before each scenario and cleaned up after it using hooks.
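To make these pieces concrete, here is a hedged, illustrative sketch (not the exact scenario or classes from the framework) of a Gherkin scenario, its Java step definitions, and the @Before/@After hooks. The WiniumDriverUtil helper and the element names are assumptions introduced for illustration; a driver utility of this kind is sketched later in the post.

// Illustrative scenario (Gherkin) that the step definitions below implement:
//   @smoke
//   Scenario: Add two numbers in the calculator
//     Given the calculator application is running
//     When the user adds "2" and "3"
//     Then the displayed result contains "5"

import io.cucumber.java.After;
import io.cucumber.java.Before;
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.junit.Assert;
import org.openqa.selenium.By;
import org.openqa.selenium.winium.WiniumDriver;

public class CalculatorSteps {

    private WiniumDriver driver;

    // Hooks run before and after every scenario: start the Winium server and the
    // application, then shut both down so every scenario starts from a clean state.
    @Before
    public void setUp() throws Exception {
        driver = WiniumDriverUtil.startApplication(); // hypothetical utility, sketched later
    }

    @After
    public void tearDown() {
        WiniumDriverUtil.stopApplication();
    }

    @Given("the calculator application is running")
    public void theCalculatorApplicationIsRunning() {
        Assert.assertNotNull(driver);
    }

    @When("the user adds {string} and {string}")
    public void theUserAdds(String first, String second) {
        // Element names are assumptions; confirm the real Name/AutomationId values with Inspect.exe.
        driver.findElement(By.name(first)).click();
        driver.findElement(By.name("Plus")).click();
        driver.findElement(By.name(second)).click();
        driver.findElement(By.name("Equals")).click();
    }

    @Then("the displayed result contains {string}")
    public void theDisplayedResultContains(String expected) {
        String actual = driver.findElement(By.id("CalculatorResults")).getText();
        Assert.assertTrue(actual.contains(expected));
    }
}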

The Page Object Model (POM) is a design pattern that assists in building automation frameworks that are scalable and maintainable. In POM, we create individual page classes for each application page or component, which encapsulates the interactions and elements associated with that particular page. This approach improves code readability, reduces code duplication, and enhances test maintenance.
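As a hedged illustration of POM with Winium (the class, locator values, and method names are assumptions, not the framework’s actual code), a page class for a calculator window might look like this:

import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.winium.WiniumDriver;

// Encapsulates the locators and actions of one application window (page).
public class CalculatorPage {

    private final WiniumDriver driver;

    // Locators are declared in one place, so a UI change only requires an edit here.
    // The Name/AutomationId values are assumptions; verify them with Inspect.exe.
    private final By plusButton = By.name("Plus");
    private final By equalsButton = By.name("Equals");
    private final By resultField = By.id("CalculatorResults");

    public CalculatorPage(WiniumDriver driver) {
        this.driver = driver;
    }

    public void add(String first, String second) {
        driver.findElement(By.name(first)).click();
        driver.findElement(plusButton).click();
        driver.findElement(By.name(second)).click();
        driver.findElement(equalsButton).click();
    }

    public String getResult() {
        WebElement result = driver.findElement(resultField);
        return result.getText();
    }
}

Step definitions then call these page methods instead of touching locators directly, which keeps the test logic separate from UI details.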

In a test automation framework, utility files provide reusable functionalities, configurations, and helper methods to streamline the development, execution, and maintenance of test scripts. As a result, they enhance the efficiency, scalability, and maintainability of the automation framework. Listed below are a few common utility files, along with their functions:

  • A driver utility file handles launching and terminating the desktop application as well as the Winium driver.
  • When we use Winium as the desktop automation tool, we have to start the Winium server (the Winium driver) before a session can be created.
  • We can either start it manually before the test run begins or start it from the automation code itself.
  • The utility sketched after this list contains methods for launching the Winium driver (server) and the desktop application.
  • A common utility file reads or retrieves values and files from a particular folder (referred to here as the resource folder).
  • This file can also serve as a base for additional common methods used throughout the framework.
  • The TestRunner class executes the Cucumber tests with the specified configuration: the location of the feature files, the step definitions package, the tags to include, and the report-generation plugins.
  • The seamless integration of Cucumber with TestNG makes execution and reporting easy.
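Below is a hedged sketch of such a driver utility (the class name, executable paths, and port are assumptions): it starts Winium.Desktop.Driver.exe, points DesktopOptions at the application under test, and opens a WiniumDriver session against the server.

import java.io.File;
import java.net.URL;
import org.openqa.selenium.winium.DesktopOptions;
import org.openqa.selenium.winium.WiniumDriver;

// Starts/stops the Winium server and the desktop application under test.
public class WiniumDriverUtil {

    private static Process winiumServer;
    private static WiniumDriver driver;

    public static WiniumDriver startApplication() throws Exception {
        // 1. Start the Winium Desktop Driver (the server) on its default port 9999.
        winiumServer = new ProcessBuilder(
                new File("drivers/Winium.Desktop.Driver.exe").getAbsolutePath()).start();
        Thread.sleep(3000); // crude wait for the server to come up; a readiness poll is better

        // 2. Tell the driver which application executable to launch.
        DesktopOptions options = new DesktopOptions();
        options.setApplicationPath("C:\\Windows\\System32\\calc.exe");

        // 3. Create the session against the running Winium server.
        driver = new WiniumDriver(new URL("http://localhost:9999"), options);
        return driver;
    }

    public static WiniumDriver getDriver() {
        return driver;
    }

    public static void stopApplication() {
        if (driver != null) {
            driver.quit();          // ends the session and closes the application
        }
        if (winiumServer != null) {
            winiumServer.destroy(); // stops the Winium server process
        }
    }
}

The @Before hook shown earlier would call startApplication() and the @After hook stopApplication(), so every scenario gets a fresh application session.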

Once we have defined the test scenarios, we will use Maven commands to execute them. Maven is a robust tool that manages project dependencies and automates the build process. With Maven, we can run automated tests with ease and ensure a smooth and efficient testing process.

  • In the project’s Maven Project Object Model (POM) file, we define the necessary configurations for test execution. 
  • This includes specifying the test runner class, defining the location of feature files and step definitions, setting up plugins for generating test reports, and configuring any additional dependencies required for testing.
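For reference, a minimal TestRunner for Cucumber with TestNG could look like the sketch below; the feature path, glue package, tag, and plugin outputs are assumptions you would adapt to your own project layout.

import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;

// Wires feature files, step definitions, tags and report plugins into TestNG.
@CucumberOptions(
        features = "src/test/resources/Features",
        glue = {"stepdefinitions"},
        tags = "@smoke",
        plugin = {"pretty",
                  "html:target/cucumber-reports/cucumber.html",
                  "json:target/cucumber-reports/cucumber.json"}
)
public class TestRunner extends AbstractTestNGCucumberTests {
}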

Once you configure the automated tests in the Maven POM file, you can run them using Maven commands from the terminal or command prompt. Common Maven commands used for test execution include:

  • mvn test – This command runs all the tests from the project.
  • mvn clean test – This command first cleans the project (removes the target directory) and then runs the tests.
  • mvn test “-Dcucumber.filter.tags=@tagName” – This command runs tests with specific Cucumber tags.

Cucumber provides built-in support for generating comprehensive test reports. By configuring plugins in our automation framework, we can generate detailed reports that showcase the test results, including passed, failed, and pending scenarios. These reports offer valuable insights into the test execution, helping us identify issues, track progress, and make data-driven decisions for test improvements.

Automating desktop applications with Winium, Java, and Behavior-Driven Development (BDD) using Cucumber is a strategic approach that offers numerous benefits to software development and testing teams. By combining these technologies and methodologies, we create a robust automation framework that enhances software quality, reduces manual efforts, and promotes collaboration across teams.

In conclusion, automating desktop applications with Winium, Java, and BDD using Cucumber empowers teams to deliver high-quality software efficiently. By leveraging the strengths of each technology and following best practices such as the Page Object Model and Maven integration, we create a solid foundation for successful test automation that aligns with business goals and enhances overall product quality.

You can access the complete source code of the automation framework for desktop applications using Winium, Java, and BDD with Cucumber on GitHub at https://github.com/spurqlabs/Desktop-App-Winium-Java-Cucumber. The framework includes feature files, step definitions, page classes following the Page Object Model, Maven dependencies, and configuration files for generating Cucumber reports. Feel free to explore, fork, and contribute to enhance the framework further.

Read more blogs here

How to trigger a workflow from another workflow using GitHub Action


GitHub Actions has revolutionized the way developers and testers automate their workflows. With Actions, developers can easily define and customize their CI/CD processes, enhancing productivity and code quality. One of the powerful features of GitHub Actions is the ability to trigger one workflow from another. In this article, we will delve into the intricacies of GitHub Actions and explore how to trigger workflows from other workflows.

GitHub Actions is a powerful automation framework integrated into GitHub. It allows developers and testers to define custom workflows composed of one or more jobs, each consisting of various steps. These workflows can be triggered based on events such as push and pull requests, commits, or scheduled actions. The benefits of using GitHub Actions include faster development cycles, improved collaboration, and streamlined release processes.

Before we delve into triggering workflows, let’s define what a workflow is in GitHub Actions. A workflow is a configurable automated process that runs on GitHub repositories. It consists of one or more jobs, each defining a set of steps. These steps can perform tasks such as building, testing, and deploying code. 

It is important to understand workflow dependencies to trigger a workflow from another workflow. Workflow dependencies refer to the relationships between different workflows, where one workflow triggers the execution of another workflow. By leveraging workflow dependencies, developers and testers can create a seamless and interconnected automation pipeline. 

In complex development scenarios, there is often a need to trigger workflows based on the completion of other workflows. This can be particularly useful when different parts of the development process depend on each other and when different teams collaborate on a project. By triggering workflows from related workflows, developers and testers can automate the execution of dependent tasks, ensuring a smoother development workflow. 

The advantages of workflow interdependency are numerous. Firstly, it allows for a modular and reusable approach to workflow automation. Instead of duplicating steps across different workflows, developers and testers can encapsulate common operations in one workflow and trigger it from others. This promotes code reusability, reduces maintenance efforts, and enhances overall development efficiency. Moreover, workflow interdependency enables better collaboration between teams working on different aspects of a project, ensuring a seamless integration between their workflows.

  • A GitHub repository having a workflow defined in it (repository_01)
  • Another GitHub repository (repository_02) with a workflow defined in it that should be triggered after the repository_01 workflow completes.
  • GitHub personal access token 

Now that we have everything we need, let’s get it done. First, we will understand what a GitHub personal access token is.

Personal access tokens are an alternative to using passwords to authenticate to GitHub when using the GitHub API or the command line. Personal access tokens are intended to access GitHub resources on your behalf.

To learn more about GitHub personal access tokens visit the official website of GitHub https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens

1: First, log in to your GitHub account.

2: Navigate to your profile, click on “Settings,” and go to “Developer settings.”

3: Click on “Personal access tokens” and then select “Tokens (classic).”

4: Click on “Generate new token,” then select “Generate new token (classic).”

5: Add a note for your personal access token (PAT) – it’s optional. Choose the expiration date for your PAT, select the scopes, and finally click on “Generate token.” Copy the token and paste it somewhere safe, such as a notepad.

(Remember that the selected scopes decide the permissions and authorization the token has over the other repository and its workflows.)

Now we need to add the generated PAT to repository_01 as a secret. To do this, follow the steps below.

  • Navigate to your repository and click on “Settings.”
  • Then go to “Secrets and variables” and select “Actions.”

Click on “New repository secret,” enter PAT_TOKEN as the name, paste the copied personal access token as the value, and click on “Add secret.”

To create a workflow, head over to the Actions tab and click on “New workflow,” then select “Set up a workflow yourself.” Now customize your workflow and add a step like the one below to trigger the repository_02 workflow.
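The original post showed this workflow as a screenshot; here is a hedged sketch of what such a workflow01 could look like. The branch, runner, and the username/repository_02 path are placeholders to replace, and the dispatch call uses the PAT_TOKEN secret added above (the peter-evans/repository-dispatch Marketplace action achieves the same thing):

name: workflow01

on:
  push:
    branches: ["main"]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # ... build and test steps for repository_01 go here ...

  trigger-workflow02:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Trigger the workflow in repository_02
        run: |
          curl -X POST \
            -H "Accept: application/vnd.github+json" \
            -H "Authorization: Bearer ${{ secrets.PAT_TOKEN }}" \
            https://api.github.com/repos/username/repository_02/dispatches \
            -d '{"event_type": "trigger-workflow02"}'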

Let’s understand the trigger-workflow02 job. The secret we added earlier is used here to provide the permissions and authorization needed to trigger workflow02 of repository_02. Also, replace username with your GitHub username and repository_02 with the name of your other repository.

Now that our first workflow is ready, let’s create the second workflow in repository_02. Follow the same steps described above to create it.

Now let’s understand what to consider here. First, the triggering event is set to repository_dispatch, which means this workflow is triggered when the other repository’s workflow sends a dispatch event. To specify which event we are listening for, we set types to trigger-workflow02, the same event type sent by the trigger-workflow02 job defined in workflow01.
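A hedged sketch of such a workflow02 (the jobs inside are placeholders for your actual steps) would listen for that event type:

name: workflow02

on:
  repository_dispatch:
    types: [trigger-workflow02]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # ... the actual jobs and steps of repository_02 go here ...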

That’s it: this is how we can trigger workflow02 of repository_02 once the execution of workflow01 of repository_01 has completed successfully.

Everything we have seen so far applies to a personal GitHub account. If we want to implement this concept for an organization’s GitHub account, we need to introduce a small change in workflow01 of repository_01.

In the trigger-workflow02 job, the same secret provides the permissions and authorization to trigger workflow02 of repository_02; simply replace the owner in the repository path with your organization’s GitHub name and repository_02 with the name of your other repository.

In this blog, we have explored the powerful feature of triggering one workflow from another using GitHub Actions. By understanding workflow dependencies, leveraging workflow events and triggers, implementing remote triggers, and building scalable workflow chains, developers can enhance their CI/CD processes and workflow automation. To summarize, triggering workflows from another workflow allows for increased reusability, collaboration, and customization of automation processes. By embracing these features, developers can optimize their development workflows and empower their teams to achieve greater productivity and efficiency.

How to Setup CI/CD Pipeline for automated API Tests


Automating API test suite execution through CI/CD pipelines provides a significant advantage over local execution. By leveraging CI/CD, teams can obtain test results for all systems, improving the speed, quality, and reliability of tests. Manual triggering of API suite execution is not required, freeing up valuable time for team members.

In this blog post, we will guide you through the creation of a workflow file using GitHub Actions for your automated API tests. However, before diving into the creation of a CI/CD workflow, it’s essential to understand some crucial points for a better grasp of the concept.

Before we start creating a CI/CD workflow for our API tests, I suggest you first go through the API test automation framework here and also read the blog on creating a web test automation framework, as it helps you understand the points we should all consider before selecting a test automation framework. The API test automation framework is written in Python and uses the Behave library for BDD.

Let’s understand some basic and important points to start with the CI/CD workflow.

What is DevOps?

DevOps is a set of practices and tools that integrate and automate tasks in the software development and IT industry. It establishes communication and collaboration between development and operations teams, enabling faster and more reliable software build, testing, and release processes. DevOps is a methodology that derives its name from the combination of “Development” and “Operations.”

The primary goal of DevOps is to bridge the gap between development and operations teams by fostering a culture of shared responsibility and collaboration. This helps to reduce the time it takes to develop, test, and deploy software while maintaining high quality and reliability standards. By automating manual processes and eliminating silos between teams, DevOps enables organizations to respond more quickly to changing market demands and customer needs.

To know more about DevOps and its history, please visit the site https://en.wikipedia.org/wiki/DevOps 


What is CI/CD?

CI/CD refers to Continuous Integration and Continuous Delivery, which are processes and practices that help to deliver code changes more frequently and reliably. These processes involve automating the building, testing, and deployment of code changes, resulting in faster and higher-quality software releases for end-users.

The CI/CD pipeline follows a workflow that starts with continuous integration (CI), followed by continuous delivery (CD). The CI process involves integrating code changes into a shared repository and automatically building and testing them to identify errors early in the development process. Once the code has been tested and approved, the CD process takes over and automates the delivery of code changes to production environments.

The CI/CD pipeline workflow helps to reduce the risks and delays associated with manual code integration and deployment while ensuring that the changes are tested and delivered quickly and reliably. This approach enables organizations to innovate faster, respond more quickly to market demands, and improve overall software quality.


What are GitHub Actions?

GitHub Actions is a feature that makes it easy to automate software workflows, including world-class CI/CD capabilities. With GitHub Actions, you can build, test, and deploy your code directly from GitHub, while also customizing code reviews, branch management, and issue-triaging workflows to suit your needs.

To learn more about GitHub Actions, please refer to the official documentation available here
https://docs.github.com/en/actions

The GitHub platform offers integration with GitHub Actions, providing flexibility for customizing workflows to automate tasks such as building, testing, and deploying code. Developers can create custom workflows using GitHub Actions that are automatically triggered when specific events occur, such as code push, pull request merge, or as per a defined schedule.

Workflows are defined using YAML syntax, which is a human-readable data serialization language. YAML is commonly used for configuration files and in applications to store or transmit data. To learn more about YAML syntax and its history, please visit the following link

Advantages / Benefits of using GitHub Actions for CI/CD Pipeline:

  • Seamless integration: GitHub Actions seamlessly integrates with GitHub repositories, making it easy to automate workflows and tasks directly from the repository.
  • Highly customizable: GitHub Actions offers a high degree of customization, allowing developers to create workflows that suit their specific needs.
  • Time-saving: GitHub Actions automates many tasks in the software development process, saving developers time and reducing the potential for errors.
  • Flexible: GitHub Actions can be used for a wide range of tasks, including building, testing, and deploying applications.
  • Workflow visualization: GitHub Actions provides a graphical representation of workflows, making it easy for developers to visualize and understand the process.
  • Large community: GitHub Actions has a large and active community, providing a wealth of resources, documentation, and support for developers.
  • Cost saving: GitHub Actions comes bundled with GitHub Free and Enterprise plans, reducing the cost of maintaining separate CI/CD tools like Jenkins.

Framework Overview:

This is a BDD API automation testing framework. The reason behind choosing a BDD framework is simple: it provides the following benefits over other testing frameworks.

  • Improved Collaboration
  • Increased Test coverage
  • Better Test Readability
  • Easy Test Maintenance
  • Faster Feedback
  • Integration with Other Tools
  • Focus on Business Requirements

Discover the different types of automation testing frameworks available, and why to prefer the BDD framework over others, here

Framework Explanation:

The framework is simple because we included a feature file written in the Gherkin language, as you will notice. Basically, Gherkin is a simple plain text language with a simple structure. The feature file is easy to understand for a non-technical person and that is why we prefer the BDD framework for automation. To learn more about the Gherkin language please visit the official site here https://cucumber.io/docs/gherkin/reference/. Also, we have included the POST, GET, PUT & DELETE API methods. A feature file describes all these methods using simple and understandable language.

The next component of our framework is the step file. The feature and step files are the two main and most essential parts of the BDD framework. The step file contains the implementation of the steps mentioned in the feature file: it maps the respective steps from the feature file and executes the code. We use the Behave library to achieve this; Behave matches each step with its feature-file counterpart because both use the same step text.

Then there is the utility file, which contains the methods we reuse repeatedly, and a configuration file where we store commonly used data. Furthermore, to install all the dependencies, we have created a requirement.txt file that lists the packages with specific versions. To install the packages from requirement.txt, we use the following command.

pip install -r requirement.txt

The above framework is explained in detail here. I suggest you check out that blog first and understand the framework; then we can move on to the detailed description of the workflow. A proper understanding of the framework is essential for understanding how to create the CI/CD workflow file.

How to create a Workflow File?

  • Create a GitHub repository for your framework
  • Push your framework to that repository
  • Click on the Actions tab
  • Click on the “Set up a workflow yourself” option
  • Give a proper name to the workflow file

Additionally, please check out the video below for a detailed, step-by-step understanding; it shows how to create workflow files and the steps you need to follow to do so.

(Video: GitHub Actions workflow file creation)

Components of CI/CD Workflow File:

Events:

Events are responsible for triggering the CI/CD workflow. They are simply the actions that happen in the repository, for example pushing to a branch or creating a pull request. Below are some sample events that can trigger a workflow.

  • push: This event is triggered when someone pushes code to a branch in your repository.
  • pull_request: This event is triggered when someone opens a new pull request or updates an existing one.
  • schedule: This event is triggered on a schedule that you define in your workflow configuration file.
  • workflow_dispatch: This event allows you to manually trigger a workflow by clicking a button in the GitHub UI.
  • release: This event is triggered when a new release is created in your repository.
  • repository_dispatch: This event allows you to trigger a workflow using a custom webhook event.
  • page_build: This event is triggered when GitHub Pages are built or rebuilt.
  • issue_comment: This event is triggered when someone comments on an issue in your repository.
  • pull_request_review: This event is triggered when someone reviews a pull request in your repository.
  • push (tags): Pushing a tag also triggers the push event; use the tags filter in your workflow configuration to run only on tag pushes (there is no separate push_tag event).

To know more about the events that trigger workflows please check out the GitHub official documentation here
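For example, a workflow that should run on pushes to main, on pull requests, on a daily schedule, and on demand could declare its triggers like this (a sketch; the branch name and cron expression are placeholders):

on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]
  schedule:
    - cron: '0 6 * * *'    # every day at 06:00 UTC
  workflow_dispatch:       # adds a manual "Run workflow" button in the Actions tab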

Jobs:

After setting up the events that trigger the workflow, the next step is to set up the jobs. A job consists of a set of steps that perform specific tasks. Every job gets its own runner (a virtual machine), so jobs can run in parallel, which allows us to execute multiple tasks concurrently.

A workflow can have more than one job, each with a unique name and a set of steps that define the actions to perform. For example, one job can build the project, another can test its functionality, and another can deploy it to a server. The jobs defined in a workflow file can also depend on each other, and each can have its own requirements, such as a specific operating system, software dependencies or packages, or environment variables.
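As a small sketch of dependent jobs (job names and commands are placeholders), the needs keyword makes one job wait for another:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: echo "build the project"

  test:
    needs: build            # runs only after the build job has succeeded
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: echo "run the tests"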

Discover more about using jobs in a workflow from GitHub’s official documentation here

Runners:

To execute the jobs we need runners. Runners in GitHub Actions are simply virtual machines or physical servers, and they are responsible for running the steps described in a job. GitHub categorizes them into two types: self-hosted runners and GitHub-hosted runners.

Self-hosted runners allow us to execute jobs on our own system or infrastructure, for example our own physical servers, virtual machines, or containers. We use self-hosted runners when jobs have specialized hardware or environment requirements that must be met.

GitHub-hosted runners are provided by GitHub itself and can be used for free by anyone. These runners are available in a variety of configurations. Furthermore, the best thing about GitHub-hosted runners is that they automatically update with the latest software updates and security patches.

Learn more about runners for GitHub actions workflow here from GitHub’s official documentation. 

Steps:

Steps in the workflow file are used to carry out particular actions. Subsequently, after adding the runner to the workflow file, we define these steps with the help of the steps property in the workflow file. Additionally, the steps consist of actions and commands to perform on the build. For example, there are steps to download the dependencies, check out the build, run the test, upload the artifacts, etc. 

Learn more about the steps used in the workflow file from GitHub’s official documentation here

Actions:

In a GitHub Actions workflow file, we use actions, which are reusable code modules that can be shared across different workflows and repositories. An action bundles the logic for a specific task such as checking out code, setting up a language runtime, running tests, or deploying code. We can also pass input parameters to actions and read their outputs, which lets them exchange data with other steps in the workflow. Actions are published by developers and are available on the GitHub Marketplace. To use an action in a workflow step, we use the uses property.

Find out more about actions for GitHub actions from GitHub’s official documentation here 

We have now covered all the basic topics needed before creating our CI/CD workflow file for the API automation framework, so let’s walk through the workflow file.

CI/CD Workflow File:

name: Python API CI/CD Pipeline
on:
  push:
    branches: ["main"]
#  schedule:
#    - cron: '00 12 * * *'
jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v3
        with:
          python-version: '3.8.9'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirement.txt
      - name: install allure
        run: npm install -g allure-commandline
        continue-on-error: true
      - name: run test
        run: behave Features -f allure_behave.formatter:AllureFormatter -o Report_Json
        working-directory: .
        continue-on-error: true
      - name: html report
        run: allure generate Report_Json -o Report_Html --clean
        continue-on-error: true
      - uses: actions/upload-artifact@v2
        with:
          name: HTML reports
          path: Report_Html
        continue-on-error: true

Explanation:

Name:

  • We use the name property to give the name to the workflow file. It is a good practice to give a proper name to your workflow file. Generally, the name is related to the feature or the repository name. 
name: Python API CI/CD Pipeline

Event:

Now we have to set up the event that triggers the workflow. In this workflow, I have added two events for your reference: the pipeline is triggered by the push event on the ‘main‘ branch, and I have also added a (commented-out) schedule event to trigger the workflow automatically on a defined schedule.

on:
  push:
    branches: ["main"]
#  schedule:
#    - cron: '00 12 * * *'

The above schedule would run the pipeline every day at 12:00 UTC. Scheduled workflows use UTC time and can run at most once every 5 minutes.

We can customize the schedule as per our needs using a cron expression with five fields: minute, hour, day of month, month, and day of week.
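For example (a sketch; adjust the expressions to your own needs):

schedule:
  - cron: '00 12 * * *'     # every day at 12:00 UTC
  # - cron: '30 6 * * 1-5'  # 06:30 UTC, Monday to Friday
  # - cron: '0 */4 * * *'   # every 4 hours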

Job:

The job we define here is build; it builds the project and performs the required tasks whenever we merge new code changes.

jobs:
  build:

Runner:

The runner we are using here is a GitHub-hosted runner. In this workflow, we are using a Windows-latest virtual machine. The VM will build the project, and then it will execute the defined steps.

runs-on: windows-latest

Apart from Windows-latest, there are other runners too like ubuntu-latest, macos-latest, and self-hosted. The self-hosted runner is one that we can set up on our own infrastructure, such as our own server, or virtual machine, allowing us to have more control over the environment and resources.

Steps:

The steps describe the different actions to perform on the project build. The first action is to check out the repository so that the job has the latest code.

steps:
  - uses: actions/checkout@v3

Then we set up Python. As this framework is an API automation testing framework using Python and Behave, we need Python to execute the tests.

- name: Set up Python
  uses: actions/setup-python@v3
  with:
    python-version: '3.8.9'

After we install Python, we also need to install the different packages required to run the API tests. Define these packages in the requirement.txt file, and we can install them using the following command.

- name: Install dependencies
  run: |
    python -m pip install --upgrade pip
    pip install -r requirement.txt

For reporting purposes, we are using Allure reports. To generate them, we need to install the Allure command-line package separately.

- name: install allure
  run:  npm install -g allure-commandline
  continue-on-error: true

Now that all the packages are installed, we can run our API tests. We run them with the Allure Behave formatter so that, once execution completes, a Report_Json folder is generated, which is required to produce the HTML report.

- name: run test
  run: behave Features -f allure_behave.formatter:AllureFormatter -o Report_Json
  working-directory: .
  continue-on-error: true

The generated Report_Json folder is not directly shareable as a report, so we convert the JSON results into an HTML report.

- name: html report
  run: allure generate Report_Json -o Report_Html --clean
  continue-on-error: true

To view the report locally, we first upload it as a workflow artifact; we can then download the generated HTML report from the run.

- uses: actions/upload-artifact@v2
  with:
    name: HTML reports
    path: Report_Html
  continue-on-error: true

How to download and view the HTML Report?

Please find the attached GitHub repository link. I have uploaded the same project to this repository and also attached a Readme file that explains the framework and the different commands we have used so far in this project. Also, the workflow explanation is included for better understanding.

Conclusion:

In conclusion, creating a CI/CD pipeline workflow for your project using GitHub Actions streamlines the development and testing process by automating tasks such as building the project for new changes, testing the build, and deploying the code. This results in reduced time and minimized errors, ensuring that your software performance is at its best.

GitHub Actions provides a wide range of pre-built actions and the ability to create custom actions that suit your requirements. By following established practices and continuously iterating on workflows, you can ensure your software delivery is optimized and reliable.

I hope this blog has answered the most commonly asked questions and helps you start creating CI/CD pipelines for your projects. Do check out the blogs on how to create a BDD framework for web automation and API automation for a better understanding of automation frameworks and how a robust framework can be created.

API BDD Test automation framework using Behave and Python


APIs: a term we hear a lot and want to know more about. The questions that come to mind are: What is an API? Why is it so important? How do we test it? Let’s explore these questions one by one. API testing is approachable only if you know what to test and how to test it, and a proper framework will help you achieve your goals and deliver good-quality work. The importance of an automation framework and the factors we should consider when choosing one are described in our previous blog. Please go through that blog here first, as it will give you a good understanding of automation testing frameworks before you continue with this one.

To build the API testing framework we will use the BDD approach. The reason for choosing a BDD framework for API testing is very simple: the approach makes the framework easy to understand and maintain, and its feature files are very easy for a non-technical person to read.

What is API?

An API (Application Programming Interface) is a mechanism that sits between two software components and helps them communicate with each other. The communication happens using a set of definitions and protocols. In simple language, an API works as an intermediary between two systems and helps them exchange data. The working mechanism of a REST API is straightforward: clients send requests and receive responses in a standardized format, which for REST APIs is usually JSON (JavaScript Object Notation).

Let’s understand it better with an example. Suppose you are using a ticket-booking app to book a flight. The app sends data over the internet to a server; the server interprets the data, performs the required actions, and sends a response back to your device. The application then translates that response and displays the information in a readable way. That is how an API works.
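As a tiny illustration in Python (using the public reqres.in sample API that we will also automate later in this post), the client sends a request and parses the JSON response:

import requests

# Ask the server for user 2; the API replies with JSON that the client can parse.
response = requests.get("https://reqres.in/api/users/2")

print(response.status_code)                   # e.g. 200
print(response.json()["data"]["first_name"])  # e.g. "Janet"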

What is API Testing?

Now that we understand what an API is and how it works, let’s see why testing it is important. API testing is the part of software testing that covers the functionality, reliability, security, and performance of an API. Because APIs transfer data and establish communication between systems, testing them involves verifying that they meet their requirements, perform as expected, and can handle a variety of inputs. This testing confirms that the API’s functionality is correct and efficient and that the data it returns is accurate and consistent.

Why is API Testing Important?

API testing is an important part of the software testing process, as it helps you understand the functionality of the APIs and catch defects before the application is released to end users. Other key reasons why API testing is important include:

  • Ensuring Functionality
  • Validating data integrity
  • Enhancing the Security
  • Improving the Performance
  • Detecting Bugs and Issues
  • Improving readability and stability
  • Facilitating integration and collaboration 

All the above points get checked and validated in API testing. So far we have discussed what an API is, what API testing is, and why it is important. Let’s see which tools are available for manual as well as automated testing of APIs.

Tools for Manual API Testing:

  1. Postman
  2. SoapUI
  3. Insomnia
  4. Paw
  5. Advanced REST Client (ARC)
  6. Fiddler
  7. cURL

Tools for API Automation Testing:

  1. Postman
  2. SoapUI
  3. RestAssured
  4. RestSharp
  5. Apache HTTP client
  6. JMeter
  7. Karate
  8. Newman
  9. Pact.js
  10. Cypress.js

These are just a few examples of the tools available for both manual and automated API testing. Each tool has its own strengths and weaknesses, and the right choice depends on the requirements and specific needs of the project. These tools help us ensure that the APIs meet the desired functionality and performance requirements.

Now that we are more familiar with APIs, let’s start the main topic of our discussion: the Python Behave API testing BDD framework.

Framework Overview:

To validate all the above points, creating a robust API testing framework is essential. The steps below show how to create your own API testing framework; here, we are going to create a BDD framework. Please go through the previous blog before reading further, as it explains the advantages of BDD and this post builds on its topics. You can read the previous blog here.

The framework structure contains a feature file, a step file, and a utility file; we will discuss each of these shortly. To create such a framework, follow the steps below to keep the work easy and free of tedium.

Step1: Prerequisites

  1. Python: https://www.python.org/downloads/ – visit the site to download and install Python on your system if it is not already there.
  2. PyCharm IDE (Professional or Community): https://www.jetbrains.com/pycharm/download/
  3. Install all the required packages using the command below, provided all the packages are listed in requirement.txt with the right version numbers.

pip install -r requirement.txt

  4. To learn more about Behave and the Allure report, please visit https://pypi.org/project/behave/ & https://pypi.org/project/allure-behave/
  5. We can also install the mentioned packages from the settings of the PyCharm IDE.

Step2: Creating Project

After the prerequisites, the next step is to create a project in our IDE. Here I am using PyCharm Professional. As mentioned above, we will install the packages listed in the requirement.txt file. Please note that it is not compulsory to use PyCharm Professional for this framework; you can use the Community edition too.

Step3: Creating a Feature File

In this step, we create a feature file. A feature file consists of steps written in the Gherkin language; it is easy to understand and written in plain English so that a non-technical person can follow the flow of the test scenario. In this framework we will automate the four basic API request methods, i.e. POST, PUT, GET, and DELETE, using https://reqres.in/ as the application under test.

We can assign tags to the scenarios in the feature file to run particular test scenarios based on the requirement. The key point to notice here is that the feature file must end with the .feature extension. We will create four different scenarios, one for each API method.

Feature: User API
Verify the GET PUT POST DELETE methods of User API
  @api
  Scenario: Verify GET call for single user
    When User sends "GET" call to endpoint "api/users/2"
    Then User verifies the status code is "200"
    And User verifies GET response contains following information
      | First_name | Last_name | Mail-id                |
      | Janet      | Weaver    | janet.weaver@reqres.in |

  @api
  Scenario: Verify POST call for single user
    When User sends "POST" call to endpoint "api/users"
      | Name   | Job  |
      | Yogesh | SDET |
    Then User verifies the status code is "201"
    And User verifies POST response body contains following information
      | Name   | Job  |
      | Yogesh | SDET |

  @api
  Scenario: Verify PUT call for single user
    When User sends "PUT" call to endpoint "api/users/2"
      | Name   | Job  |
      | Yogesh | SDET |
    Then User verifies the status code is "200"
    And User verifies PUT response body contains following information
      | Name   | Job  |
      | Yogesh | SDET |

  @api
  Scenario: Verify DELETE call for single user
    When User sends DELETE call to the endpoint "api/users/2"
    # reqres.in responds to DELETE with 204 No Content
    Then User verifies the status code is "204"

Step4: Creating a Step File

Unlike the automation framework we built in the previous blog, we will create a single step file for all the feature files. In a BDD framework, step files map and implement the steps described in the feature file. Python’s Behave library matches each feature-file step with its implementation by the step text, so the step decorators in the step file must use exactly the same wording as the feature file.

from behave import *
from Utility.API_Utility import API_Utility
api_util = API_Utility()

@when('User sends "{method}" call to endpoint "{endpoint}"')
def step_impl(context, method, endpoint):
    global response
    response = api_util.Method_Call(context.table, method, endpoint)

@then('User verifies the status code is "{status_code}"')
def step_impl(context, status_code):
    actual_status_code = response.status_code
    assert actual_status_code == int(status_code)

@step("User verifies GET response contains following information")
def step_impl(context):
    api_util.Verify_GET(context.table)
    response_body = response.json()
    assert response_body['data']['first_name'] == context.table[0][0]
    assert response_body['data']['last_name'] == context.table[0][1]
    assert response_body['data']['email'] == context.table[0][2]

@step("User verifies POST response body contains following information")
def step_impl(context):
    api_util.Verify_POST(context.table)
    response_body = response.json()
    assert response_body['name'] == context.table[0][0]
    assert response_body['job'] == context.table[0][1]

@step("User verifies PUT response body contains following information")
def step_impl(context):
    api_util.Verify_PUT(context.table)
    response_body = response.json()
    assert response_body['Name'] == context.table[0][0]
    assert response_body['Job'] == context.table[0][1]

@when('User sends DELETE call to the endpoint "{endpoint}"')
def step_impl(context, endpoint):
    global response
    # Capture the response so the following status-code step verifies the DELETE call itself
    response = api_util.Delete_Call(endpoint)

Step5: Creating Utility File

So far we have created a feature file and a step file; in this step we will create a utility file. In web automation we generally have page files containing the locators and the actions to perform on web elements, but in this framework we will create a single utility file, just like the step file. The utility file contains the API methods and endpoints used to perform specific actions such as POST, PUT, GET, or DELETE. The request body (payload) is built and the response is captured using the methods in this file. The reason these methods live in the utility file is reuse: we can call them multiple times without recreating the same method over and over again.

import json
import requests
class API_Utility:
    data = json.load(open("Resources/config.json"))
    api_url = data["APIURL"]
    global response

    def Method_Call(self, table, method, endpoint):
        if method == 'GET':
            uri = self.api_url + endpoint
            response = requests.request("GET", uri)
            return response

        if method == 'POST':
            uri = self.api_url + endpoint
            payload = {
                "name": table[0][0],
                "job": table[0][1]
            }
            response = requests.request("POST", uri, data=payload)
            return response

        if method == 'PUT':
            uri = self.api_url + endpoint
            reqbody = {
                "Name": table[0][0],
                "Job": table[0][1]
            }
            response = requests.request("PUT", uri, data=reqbody)
            return response

    def Get_Status_Code(self):
        status_code = response.status_code
        return status_code

    def Verify_GET(self, table):
        for row in table:
            first_name = row['First_name']
            last_name = row['Last_name']
            email = row['Mail-id']
            return first_name, last_name, email

    def Verify_POST(self, table):
        for row in table:
            name = row['Name']
            job = row['Job']
            return name, job

#Following method can be merged with POST, however for simplicity I kept it
    def Verify_PUT(self, table):
        for row in table:
            name = row['Name']
            job = row['Job']
            return name, job

    def Delete_Call(self, endpoint):
        uri = self.api_url + endpoint
        response = requests.request("DELETE", uri)
        return response

Step6: Create a Config file

A good tester knows the use and importance of config files, and we will use one in this framework too. Here, we are just going to put the base URL in the config file and reuse it in the utility file. A config file can hold much more than the base URL; as you explore the framework and automate new endpoints, you will find other data worth moving into it.

Additionally, the purpose of the config files is to make tests more maintainable and reusable. Another benefit of a config file is that it makes the code more modular and easier to understand as all the configuration settings are stored in a separate file and it makes it easier to update the configuration settings for all the tests at once. 


     "APIURL": "https://reqres.in/"

Step7: Execute and Generate Allure Report

We use Allure as the reporting medium because it provides detailed information about the test execution and results, including test status, test steps, duration, and screenshots of the run. The report is generated in HTML (web) format, making it easy to understand and to share with team members and clients. It provides a user-friendly dashboard with interactive charts and graphs for a detailed analysis of the test results.

Let’s understand how to execute the API tests and generate an Allure report for the automated API calls. To generate the report, we execute the tests from the terminal or command line. There are two commands to run sequentially:

  1. behave Features/Api.feature -f allure_behave.formatter:AllureFormatter -o Report_Json

The purpose of the above command is to execute the test present in the mentioned feature file and generate a JSON report folder. 

  2. allure generate Report_Json -o Report_Html --clean

This command generates an HTML report from the JSON results so that it is easy to understand and can be shared with team members or clients.

Please find the attached GitHub repository link. I have uploaded the same project to this repository and also attached a Readme.md file which explains the framework and the different commands we have used so far in this project. 

https://github.com/spurqlabs/PythonBehaveApiFramework

Conclusion:

Before creating a framework, it is very important to understand the concepts, and I hope I have answered the common questions about APIs. In conclusion, creating a BDD API testing framework using Python and Behave is an easy process once you know how to proceed. By following the steps outlined in this blog, you can create a powerful and flexible framework that helps you define and execute test cases, generate detailed Allure reports, and integrate with other testing tools and systems. Again, I suggest you check out the previous blog here, as it will clear most of your doubts about automation testing frameworks and help you create your own.

Read more blogs here


How to Create a BDD Automation Framework using Python Behave Library and Selenium


To deliver good-quality work, creating a robust software testing framework is a very important task. Every tester has their own approach to creating a testing framework, but the most important thing is to build it in such a way that other testers with minimal knowledge of automation testing can easily use it. There are some key points to consider while creating a framework; you will find them listed below.

A good tester is one who has the ability to create a good testing framework. In this blog, I have explained how to create an automation testing framework; even a beginner with minimal knowledge of automation testing can use this approach to create their own. There are many more things you can implement on top of the framework explained here, so feel free to comment with suggestions.

When I started my journey as an SDET, creating a framework was the first task assigned to me in training, so I understand how important it is to be able to create your own framework. In this blog, we will go through the guidelines that will help us create a testing framework.

Before we jump into the main topic of our discussion let’s just quickly see the steps we will be following while creating our own framework.

Key Considerations When Creating an Automation Testing Framework:

  1. Understanding the Requirements
  2. Selecting a Testing Framework
  3. Designing Test Cases
  4. Implementing Test Cases
  5. Executing Tests
  6. Maintaining and Improving the Framework

Among the various frameworks available, one of the most popular combinations for automation testing is Python’s Behave library with Selenium. In this blog, we are going to explore how to build and use this framework for our automation testing.

Most of us are familiar with Selenium, an open-source tool and one of the most widely used for web automation testing, along with Playwright and Cypress. Behave is a Python library used for BDD (Behavior-Driven Development). Let’s quickly explore the different types of frameworks available for automation testing.

A software test automation framework is designed to make testing more efficient and easier to manage. Every framework has its own advantages and disadvantages, so it is important to choose the right one for the given requirements. Below are some of the most commonly used and popular automation frameworks.

Types of Test Automation Frameworks:

  1. Linear Scripting Framework. 
  2. Modular Testing Framework. 
  3. Data-Driven Framework. 
  4. Keyword Driven Framework. 
  5. Hybrid Framework
  6. Behavior Driven Development Framework. 
  7. Test Driven Development Framework. 

In this blog, we will build a BDD framework using Python’s Behave library and Selenium. In BDD we use natural language to describe our test scenarios, divided into steps written in the Gherkin language. These scenarios live in feature files, and because of the natural language, the behaviour of the application is easily understandable by everyone. So while creating a BDD framework, the key components to consider are the feature files and the step files.

As described earlier, a feature file is written in natural language using Gherkin and follows a set format, while a step file is the implementation of the steps present in the feature file. Here the step file is a Python file containing functions, and each function corresponds to a step described in the feature file. Now that we know what feature and step files are, what is the role of Python’s Behave library? Basically, once the feature and step files are ready, Behave automatically matches each step in the feature file with its corresponding implementation in the step file and reports any assertion errors.

Prerequisites for creating a framework:

  1. Python: https://www.python.org/downloads/ – visit the site to download and install Python on your system if it is not there. 
  2. Install Selenium and Behave using:

pip install selenium 

pip install behave 

For more details please visit: https://pypi.org/project/behave/  &  https://pypi.org/project/selenium/ 

3. Pycharm IDE (Professional or Community): https://www.jetbrains.com/pycharm/download/ 

4. Install Allure for report generation using:

pip install allure-behave 

For more details please visit: https://pypi.org/project/allure-behave/ 

5. We can also install all the required packages from the requirement.txt file with the command below. 

pip install -r requirement.txt

Framework Structure Overview: 

Here is the overview of our python selenium behave BDD framework. 

To begin with, we are going to create a simple framework using one scenario outline. In the next blog, we will see how to create an API testing framework using Python. Please read this blog carefully, as I am explaining all the points in plain language. Without wasting any time, let's dive into the main topic of our discussion: how to create a Python Selenium Behave BDD automation testing framework. 

For this, we will follow some guidelines which I have described as steps. 

Step 1: 

Create a project in PyCharm (here I am using PyCharm Professional) and install the packages mentioned in the prerequisites. 

It is not compulsory to use PyCharm Professional; PyCharm Community works just as well. 

Step 2:

In this step, we will create a Features folder in which we will keep our feature files for the different scenarios. A feature file holds your test cases in the form of scenarios and scenario outlines; in this framework, we are using a scenario outline. Both scenarios and scenario outlines contain steps that are easy for non-technical people to understand. We can also assign tags to feature files and to the scenarios inside them (see the tag example after the feature file below). Note that a feature file must end with the .feature extension. 

Feature: Create test cases using Selenium with Python to automate below BMI calculator tests


#  We are using Scenario Outline in this feature as we can add multiple input data using examples.

  Scenario Outline: Calculating BMI value by passing multiple inputs
    Given I enter the "<Age>"
    When I Click on "<Gender>"
    And  I Enter a "<Height>"
    And  I Enter the "<Weight>"
    And  I Click on Calculate btn
    And  I Verify Result with "<Expected Result>"
    Examples:

      | Age | Gender  | Height  | Weight  | Expected Result |
      | 20  | Male    |  180    |  60     | BMI = 18.5 kg/m2|
      | 35  | Female  |  160    |  55     | BMI = 21.5 kg/m2|
      | 50  | Male    |  175    |  65     | BMI = 21.2 kg/m2|
      | 45  | Female  |  150    |  52     | BMI = 23.1 kg/m2|
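
As mentioned above, we can also attach tags to a feature or to individual scenarios and then run only the matching ones with behave's --tags option. For example (the tag name @bmi is just an illustration), placing @bmi on the line directly above the Scenario Outline keyword lets us run:

behave Features/BMICalculator.feature --tags=@bmi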

Step 3:

Now that we have our feature file, let's create a step file to implement the steps described in it. To keep things recognisable, we add the word "steps" after the file name (for example, BMICalculator_steps.py); behave loads every step file placed inside the steps folder and maps the step text to its implementation. Both feature files and step files are essential parts of a BDD framework. We have to be careful while writing the steps in the feature file, because the step text must match the patterns in the step file exactly for behave to find the implementation. 

from behave import *

# The step file contains the implementation of the steps that we have described in the feature file.

@given('I enter the "{Age}"')
def step_impl(context, Age):
    context.bmipage.age_input(Age)

@when('I Click on "{Gender}"')
def step_impl(context, Gender):
    context.bmipage.gender_radio(Gender)

@step('I Enter a "{height}"')
def step_impl(context, height):
    context.bmipage.height_input(height)

@step('I Enter the "{weight}"')
def step_impl(context, weight):
    context.bmipage.weight_input(weight)

@step("I Click on Calculate btn")
def step_impl(context):
    context.bmipage.calculatebtn_click()

@step('I Verify Result with "{expresult}"')
def step_impl(context, expresult):
    context.bmipage.result_validation(expresult)
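
Before wiring up the page objects, it can be worth checking that every step in the feature file has a matching implementation. behave's --dry-run option goes through the feature without actually executing the steps and should flag any that are undefined:

behave Features/BMICalculator.feature --dry-run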

Step 4: 

In step 4, we will create a page file that contains all the locators and the action methods needed to perform actions on the web elements. We will declare all the locators at the class level and use them inside the respective methods. The reason for this is that declaring locators at the class level is good practice: when a locator changes, it is effortless to update it in one place rather than going through the whole code again. 

from selenium.webdriver.common.by import By
import time
from Features.Pages.BasePage import BasePage


# The page contains all the locators and the actions to perform on that web element.
# In this page file we have declared all the locators at the class level and we are using them in the respective methods.

class BmiPage (BasePage):
    def __init__(self, context):
        BasePage.__init__(self, context.driver)
        self.context = context
        self.age_xpath = "//input[@id='cage']"
        self.height_xpath = "//input[@id='cheightmeter']"
        self.weight_xpath = "//input[@id='ckg']"
        self.calculatebtn_xpath = "//input[@value='Calculate']"
        self.actual_result_xpath = "//body[1]/div[3]/div[1]/div[4]/div[1]/b[1]"

    def age_input(self, Age):
        AgeInput = self.driver.find_element(By.XPATH, self.age_xpath)
        AgeInput.clear()
        AgeInput.send_keys(Age)
        time.sleep(2)

    def gender_radio(self, Gender):
        # The gender label locator is built dynamically from the value in the examples table.
        SelectGender = self.driver.find_element(By.XPATH, "//label[normalize-space()='" + Gender + "']")
        SelectGender.click()
        time.sleep(2)


    def height_input(self, height):
        HeightInput = self.driver.find_element(By.XPATH, self.height_xpath)
        HeightInput.clear()
        HeightInput.send_keys(height)
        time.sleep(3)

    def weight_input(self, weight):
        WeightInput = self.driver.find_element(By.XPATH, self.weight_xpath)
        WeightInput.clear()
        WeightInput.send_keys(weight)
        time.sleep(3)

    def calculatebtn_click(self):
        # Reuse the class-level locator instead of repeating the XPath here.
        Calculatebtn = self.driver.find_element(By.XPATH, self.calculatebtn_xpath)
        Calculatebtn.click()
        time.sleep(3)

    def result_validation(self, expresult):
        # Compare the BMI value shown on the page with the expected result from the examples table.
        Result = self.driver.find_element(By.XPATH, self.actual_result_xpath)
        Actualresult = Result.text
        assert Actualresult == expresult, f"Expected Result mismatched: expected '{expresult}' but got '{Actualresult}'"
        time.sleep(5)

The next one is the base page file. We create a base page to hold the driver object so that it can be reused easily by the page classes and the environment file.

from selenium.webdriver.support.wait import WebDriverWait


# In the base page we are creating an object of driver.
# We are using this driver in the other pages and environment page.


class BasePage(object):
    def __init__(self, driver):
        self.driver = driver
        self.wait = WebDriverWait(self.driver, 30)
        self.implicit_wait = 25
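
As a side note, the WebDriverWait object created in BasePage is a natural replacement for the fixed time.sleep calls used in BmiPage. A minimal sketch of what one method could look like with an explicit wait (using the class-level calculatebtn_xpath locator; this is an optional refinement, not what the framework above does):

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

# Inside BmiPage: wait until the Calculate button is clickable instead of sleeping for a fixed time.
def calculatebtn_click(self):
    calculate_btn = self.wait.until(EC.element_to_be_clickable((By.XPATH, self.calculatebtn_xpath)))
    calculate_btn.click()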

Step 5:

This step is very important because we will be creating an environment file (i.e. a hooks file). This file contains before and after scenario hooks to start and close the browser. If you want, you can also add an after-step hook for capturing screenshots for reporting; here we have added a method that captures a screenshot after every step and attaches it to the Allure report.

import json
import time

from allure_commons._allure import attach
from allure_commons.types import AttachmentType
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from Pages.BasePage import BasePage
from Pages.BmiPage import BmiPage

data = json.load(open("Resources/config.json"))


# This environment page is used as the hooks page. Here we use before and after scenario hooks alongside an after-step hook.


def before_scenario(context, scenario):
    context.driver = webdriver.Chrome(ChromeDriverManager().install())
    time.sleep(5)
    # BmiPage derives from BasePage and expects the behave context, which now carries the driver.
    context.bmipage = BmiPage(context)
    context.stepid = 1
    context.driver.get(data['BMIWEBURL'])
    context.driver.maximize_window()
    context.driver.implicitly_wait(3)


def after_step(context, step):
    attach(context.driver.get_screenshot_as_png(), name=str(context.stepid), attachment_type=AttachmentType.PNG)
    context.stepid = context.stepid + 1


def after_scenario(context, scenario):
    context.driver.close()
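
One note on the driver setup above: we rely on webdriver-manager to download a matching chromedriver. If you are on Selenium 4.6 or newer, Selenium Manager resolves the driver automatically, so the hook could be simplified as sketched below (an alternative, not what the linked repository uses):

from selenium import webdriver

def before_scenario(context, scenario):
    # Selenium 4.6+ downloads a matching chromedriver via Selenium Manager, so no explicit path is needed.
    context.driver = webdriver.Chrome()
    # ...the rest of the hook (page objects, config URL, waits) stays the same.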

Step 6:

It is good practice to store all our common data and files in a resource folder, so whenever we need to make a change it is easy to apply it across the whole framework. For now, we are adding a config.json file to the Resources folder. It holds the web URL that the before_scenario hook reads to launch the web page, and it is written in JSON format. 

{
  "BMIWEBURL": "https://www.calculator.net/bmi-calculator.html?ctype=metric"
}

Congratulations, we have finally created our own Python Selenium Behave BDD framework. As mentioned earlier, we will be using Allure to report the test results. Run the command below in the terminal and it will generate the results folder for you. 

behave Features/BMICalculator.feature -f allure_behave.formatter:AllureFormatter -o Report_Json

To convert the JSON results into a readable HTML report, use the command below. 

allure generate Report_Json -o Report_Html --clean
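
Alternatively, if the Allure command-line tool is installed, allure serve builds a temporary report from the JSON results and opens it in the browser in a single step:

allure serve Report_Json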

(Screenshots: Allure Report overview and Allure Behaviours view)

I am adding a GitHub repository link so that if anyone has any issues while building it, you can go through the source code here: https://github.com/ydhole-spurqlabs/SeleniumPython

Conclusion: 

Creating a testing framework is important, and it can feel like a tedious task, but with the right guidelines anyone can build one. I hope this blog has answered your questions about the Python Selenium Behave automation testing framework. We chose a BDD framework over the other options because it is easier to adopt and easier for end users to understand. If you still face any issues with anything we covered, feel free to leave a comment and we will solve them together. There is much more we could add to this framework, but to get started I feel it is enough and will cover most requirements. 

Read more blogs here.