Automating API test suite execution through CI/CD pipelines provides a significant advantage over local execution. By leveraging CI/CD, teams get test results automatically for every change, improving the speed, quality, and reliability of testing. No one has to trigger the API suite manually, freeing up valuable time for team members.
In this blog post, we will guide you through the creation of a workflow file using GitHub Actions for your automated API tests. However, before diving into the creation of a CI/CD workflow, it’s essential to understand some crucial points for a better grasp of the concept.
Before we start creating a CI/CD workflow for our API tests, I suggest you first go through the API test automation framework here and also read the blog on creating a web test automation framework, as it covers the points we should all consider before selecting a test automation framework. The API test automation framework is written in Python and uses the Behave library for BDD.
Let’s understand some basic and important points to start with the CI/CD workflow.
What is DevOps?
DevOps is a set of practices and tools that integrate and automate tasks in the software development and IT industry. It establishes communication and collaboration between development and operations teams, enabling faster and more reliable software build, testing, and release processes. DevOps is a methodology that derives its name from the combination of “Development” and “Operations.”
The primary goal of DevOps is to bridge the gap between development and operations teams by fostering a culture of shared responsibility and collaboration. This helps to reduce the time it takes to develop, test, and deploy software while maintaining high quality and reliability standards. By automating manual processes and eliminating silos between teams, DevOps enables organizations to respond more quickly to changing market demands and customer needs.
What is CI/CD?
CI/CD refers to Continuous Integration and Continuous Delivery: processes and practices that help teams deliver code changes more frequently and reliably. They involve automating the building, testing, and deployment of code changes, resulting in faster and higher-quality software releases for end users.
The CI/CD pipeline follows a workflow that starts with continuous integration (CI), followed by continuous delivery (CD). The CI process involves integrating code changes into a shared repository and automatically building and testing them to identify errors early in the development process. Once the code has been tested and approved, the CD process takes over and automates the delivery of code changes to production environments.
The CI/CD pipeline workflow helps to reduce the risks and delays associated with manual code integration and deployment while ensuring that the changes are tested and delivered quickly and reliably. This approach enables organizations to innovate faster, respond more quickly to market demands, and improve overall software quality.
What are GitHub Actions?
GitHub Actions is a feature that makes it easy to automate software workflows, including world-class CI/CD capabilities. With GitHub Actions, you can build, test, and deploy your code directly from GitHub, while also customizing code reviews, branch management, and issue-triaging workflows to suit your needs.
The GitHub platform offers integration with GitHub Actions, providing flexibility for customizing workflows to automate tasks such as building, testing, and deploying code. Developers can create custom workflows using GitHub Actions that are automatically triggered when specific events occur, such as code push, pull request merge, or as per a defined schedule.
Workflows are defined using YAML syntax, which is a human-readable data serialization language. YAML is commonly used for configuration files and in applications to store or transmit data. To learn more about YAML syntax and its history, please visit the following link.
Advantages / Benefits of using GitHub Actions for CI/CD Pipeline:
Seamless integration: GitHub Actions seamlessly integrates with GitHub repositories, making it easy to automate workflows and tasks directly from the repository.
Highly customizable: GitHub Actions offers a high degree of customization, allowing developers to create workflows that suit their specific needs.
Time-saving: GitHub Actions automates many tasks in the software development process, saving developers time and reducing the potential for errors.
Flexible: GitHub Actions can be used for a wide range of tasks, including building, testing, and deploying applications.
Workflow visualization: GitHub Actions provides a graphical representation of workflows, making it easy for developers to visualize and understand the process.
Large community: GitHub Actions has a large and active community, providing a wealth of resources, documentation, and support for developers.
Cost saving: GitHub Actions comes bundled with GitHub Free and Enterprise plans, reducing the cost of maintaining separate CI/CD tools like Jenkins.
Framework Overview:
This is a BDD API automation testing framework. The reason for choosing a BDD framework is simple: it provides the following benefits over other testing approaches.
Improved Collaboration
Increased Test coverage
Better Test Readability
Easy Test Maintenance
Faster Feedback
Integration with Other Tools
Focus on Business Requirements
Discover the different types of automation testing frameworks available, and why we prefer the BDD framework over others, here.
Framework Explanation:
The framework is straightforward: as you will notice, it includes a feature file written in Gherkin, a simple plain-text language with a simple structure. The feature file is easy for a non-technical person to understand, which is why we prefer a BDD framework for automation. To learn more about the Gherkin language, please visit the official site here: https://cucumber.io/docs/gherkin/reference/. We have also covered the POST, GET, PUT & DELETE API methods; the feature file describes all of them in simple, understandable language.
The next component of our framework is the step file. The feature and step files are the two most essential parts of a BDD framework. The step file contains the implementation of the steps mentioned in the feature file: it maps each step from the feature file and executes the corresponding code. We use the behave library to achieve this; behave matches the step definitions to the feature file steps because both use the same wording.
Then there is the utility file, which contains the methods we reuse repeatedly, and a configuration file where we store commonly used data. Furthermore, to install all the dependencies, we have created a requirement.txt file that lists the packages with specific versions. To install the packages from the requirement.txt file we use the following command.
pip install -r requirement.txt
The framework above is explained in detail here. I suggest you check out that blog first and understand the framework before we move on to the detailed workflow description. A proper understanding of the framework is essential for understanding how to create the CI/CD workflow file.
How to create a Workflow File?
Create a GitHub repository for your framework
Push your framework to that repository
Click on the Action Button
Click on the "set up a workflow yourself" option
Give a proper name to the workflow file
Additionally, please check out the video below for a detailed, step-by-step walkthrough. It shows how to create workflow files and the steps you need to follow to do so.
GitHub Actions workflow file creation
Components of CI/CD Workflow File:
Events:
Events are responsible for triggering the CI/CD workflow. They are simply actions that happen in the repository, for example pushing to a branch or creating a pull request. Below are some sample events that can trigger a CI/CD workflow.
push: This event is triggered when someone pushes code to a branch in your repository.
pull_request: This event is triggered when someone opens a new pull request or updates an existing one.
schedule: This event is triggered on a schedule that you define in your workflow configuration file.
workflow_dispatch: This event allows you to manually trigger a workflow by clicking a button in the GitHub UI.
release: This event is triggered when a new release is created in your repository.
repository_dispatch: This event allows you to trigger a workflow using a custom webhook event.
page_build: This event is triggered when GitHub Pages are built or rebuilt.
issue_comment: This event is triggered when someone comments on an issue in your repository.
pull_request_review: This event is triggered when someone reviews a pull request in your repository.
push (tags filter): pushing a tag to your repository also triggers the push event; you can restrict a workflow to tag pushes by using the tags filter.
To know more about the events that trigger workflows please check out the GitHub official documentation here
Jobs:
After setting up the events that trigger the workflow, the next step is to set up its jobs. A job consists of a set of steps that perform specific tasks. Each job gets its own runner, i.e. a separate virtual machine (VM), so jobs can run in parallel. This allows us to execute multiple tasks concurrently.
A workflow can have more than one job, each with a unique name and its own set of steps that define the actions to perform. For example, separate jobs can build the project, test its functionality, and deploy it to a server. Jobs in a workflow can depend on each other, and each can have its own requirements, such as a specific operating system, software dependencies or packages, or environment variables.
Discover more about using jobs in a workflow from GitHub’s official documentation here
Runners:
To execute the jobs we need runners. Runners in GitHub Actions are simply virtual machines or physical servers, and GitHub categorizes them into two types: GitHub-hosted and self-hosted. The runners are responsible for running the steps described in a job.
Self-hosted runners let us execute jobs on our own infrastructure, for example our own physical servers, virtual machines, or containers. We use self-hosted runners when jobs have specialized hardware or environment requirements that must be met.
GitHub-hosted runners are provided by GitHub itself and can be used for free by anyone. These runners are available in a variety of configurations. Furthermore, the best thing about GitHub-hosted runners is that they automatically update with the latest software updates and security patches.
Learn more about runners for GitHub actions workflow here from GitHub’s official documentation.
Steps:
Steps in the workflow file carry out particular actions. After adding the runner to the workflow file, we define these steps with the steps property. Steps consist of actions and commands to perform on the build; for example, there are steps to check out the code, download the dependencies, run the tests, upload the artifacts, and so on.
Learn more about the steps used in the workflow file from GitHub’s official documentation here
Actions:
In a GitHub Actions workflow file we use actions, which are reusable code modules that can be shared across different workflows and repositories. One or more steps use actions to perform specific tasks such as running tests, building the project, or deploying the code. We can also define input and output parameters for actions, which lets them receive data from and return data to other steps in the workflow. Actions are written by developers and published on the GitHub Marketplace. To use an action in a workflow, we use the uses property.
Find out more about actions for GitHub actions from GitHub’s official documentation here
We have now covered all the basics needed before creating our CI/CD workflow file for the API automation framework, so let's walk through the workflow file itself.
We use the name property to give the name to the workflow file. It is a good practice to give a proper name to your workflow file. Generally, the name is related to the feature or the repository name.
name: Python API CI/CD Pipeline
Event:
Now we have to set up the events that trigger the workflow. In this workflow I have added two events for your reference: the pipeline triggers on a push to the 'main' branch, and I have also added a schedule event to trigger the workflow automatically at a set time.
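The corresponding on: block is not reproduced here; below is a minimal sketch of what it might look like for these two triggers, assuming the daily 12:00 UTC schedule described next (adjust the branch name and cron value to your project).
on:
  push:
    branches: [ main ]
  schedule:
    - cron: '0 12 * * *'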
The above schedule means the pipeline runs daily at 12:00 UTC. Scheduled workflows always use UTC, and the shortest interval you can schedule is once every 5 minutes.
We can customize the schedule timing as per our needs. The cron expression has five fields: minute, hour, day of the month, month, and day of the week.
Job:
The job we define here is named build. We want to build the project and perform the required tasks whenever new code changes are merged.
jobs:
  build:
Runner:
The runner we are using here is a GitHub-hosted runner. In this workflow, we are using a Windows-latest virtual machine. The VM will build the project, and then it will execute the defined steps.
runs-on: windows-latest
Apart from Windows-latest, there are other runners too like ubuntu-latest, macos-latest, and self-hosted. The self-hosted runner is one that we can set up on our own infrastructure, such as our own server, or virtual machine, allowing us to have more control over the environment and resources.
Steps:
The steps describe the different actions to perform on the project build. Here, the first step checks out the repository so that the job has the latest code.
steps:
  - uses: actions/checkout@v3
Then we set up Python. As this is an API automation testing framework using Python and Behave, we need Python to execute the tests.
  - name: Set up Python
    uses: actions/setup-python@v3
    with:
      python-version: '3.8.9'
After installing Python, we also need to install the packages required to run the API tests. These packages are defined in the requirement.txt file, and we can install them with the following step.
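The install step itself is not shown above; a minimal sketch, reusing the same pip command from the framework:
  - name: Install dependencies
    run: pip install -r requirement.txt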
Now that all the packages are installed, we can run our API tests. We run them through the allure-behave formatter so that, once execution is complete, a Report_Json folder is generated, which is required to build the HTML report.
  - name: run test
    run: behave Features -f allure_behave.formatter:AllureFormatter -o Report_Json
    working-directory: .
    continue-on-error: true
The generated Report_Json folder is not something we can share as a report directly. To produce a shareable report we need to convert the JSON results into an HTML report.
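The remaining steps are not reproduced above. One common way to finish the job, sketched here as an assumption rather than the author's exact workflow, is to publish the results folder as a build artifact with the actions/upload-artifact action; the HTML report can then be generated from it with the allure generate command shown later in this post.
  - name: Upload Allure results
    if: always()
    uses: actions/upload-artifact@v3
    with:
      name: Report_Json
      path: Report_Json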
Please find the attached GitHub repository link. I have uploaded the same project to this repository and also attached a Readme file that explains the framework and the different commands we have used so far in this project. Also, the workflow explanation is included for better understanding.
Conclusion:
In conclusion, creating a CI/CD pipeline workflow for your project using GitHub Actions streamlines the development and testing process by automating tasks such as building the project for new changes, testing the build, and deploying the code. This results in reduced time and minimized errors, ensuring that your software performance is at its best.
GitHub Actions provides a wide range of pre-built actions and the ability to create custom actions that suit your requirements. By following established practices and continuously iterating on workflows, you can ensure your software delivery is optimized and reliable.
I hope this blog has answered the most commonly asked questions and helps you start creating CI/CD pipelines for your own projects. Do check out the blogs on how to create a BDD framework for web automation and API automation for a better understanding of automation frameworks and how a robust framework can be created.
API is a term we hear a lot and want to know more about. The questions that come to mind are: what is it? Why is it so important? How do we test it? Let's explore these questions one by one. API testing is approachable only if you know what to test and how to test it, and a proper framework will help you achieve your goals and deliver good quality work. The importance of an automation framework and the factors to consider when choosing one are described in our previous blog. Please go through that blog here first, as it will give you a good understanding of automation testing frameworks before you continue with this one.
To build the API testing framework we will use the BDD approach. The reason I have chosen BDD for API testing is simple: it keeps the framework easy to understand and maintain, and its feature files are very easy for a non-technical person to read.
What is API?
An API (Application Programming Interface) is a mechanism that sits between two software components and helps them communicate with each other using a set of definitions and protocols. In simple language, an API works as an intermediary between two systems, helping them exchange data. The working mechanism of a REST API is straightforward: it sends requests and receives responses in a standardized format, usually JSON (JavaScript Object Notation).
Let's understand it better with an example. Consider using a ticket booking app to book a flight. As the app is connected to the internet, it sends data to a server. The server receives the data, interprets it, performs the required actions, and sends a response back to your device. The application then translates that data and displays the information in a readable way. That is how an API works. Now that we understand the working mechanism of APIs, let's discuss the next topic.
What is API Testing?
Now that we understand what an API is and how it works, let's see why testing it is important. API testing is a part of software testing that covers the functionality, reliability, security, and performance of an API. Because APIs transfer data and establish communication between two systems, testing them involves verifying that they meet their requirements, perform as expected, and can handle a variety of inputs. This testing tells you whether the API's functionality is correct and efficient and whether the data it returns is accurate and consistent.
Why is API Testing Important?
API testing is an important part of the software testing process, as it helps you understand the functionality of the APIs and catch defects before the application is released to end users. Other key reasons why API testing is important include:
Ensuring Functionality
Validating data integrity
Enhancing the Security
Improving the Performance
Detecting Bugs and Issues
Improving readability and stability
Facilitating integration and collaboration
All the points mentioned above are checked and validated in API testing. So far we have discussed what an API is, what API testing is, and why it is important. Let's see which tools are available for manual as well as automated API testing.
Tools for Manual API Testing:
Postman
SoapUI
Insomnia
Paw
Advanced REST Client (ARC)
Fiddler
cURL
Tools for API Automation Testing:
Postman
SoapUI
RestAssured
RestSharp
Apache HTTP client
JMeter
Karate
Newman
Pact.js
Cypress.js
These are just a few examples of the tools available for both manual and automated API testing. Each tool has its own strengths and weaknesses, and the right choice depends on the requirements and specific needs of your project. These tools help ensure that your APIs meet the desired functionality and performance requirements.
Now that we are more familiar with APIs, let's start the main topic of our discussion: the Python Behave API testing BDD framework.
Framework Overview:
To validate all the points mentioned above, creating a robust API testing framework is essential. The steps below show how to create your own API testing framework; here, we are going to create a BDD framework. Please go through the previous blog before reading further, as it explains the advantages of BDD and this post builds on those topics. You can read the previous blog here.
This framework structure contains a feature file, a step file, and a utility file; we will discuss each of them shortly. To create such a framework, follow the steps below to keep your work easy and tedium-free.
Step1: Prerequisites
Install all the required packages using the command below, making sure requirement.txt lists every package with the right version number.
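The command is the same one used throughout this framework:
pip install -r requirement.txt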
We can also install these packages from the settings of the PyCharm IDE.
Step2: Creating Project
After understanding the prerequisites, the next step is to create a project in our IDE. Here I am using PyCharm Professional. As mentioned in the step above, we will install the packages listed in the requirement.txt file. Note that it is not compulsory to use PyCharm Professional to create this framework; you can use the Community edition too.
Step3: Creating a Feature File
In this step we will create a feature file. A feature file consists of steps written in the Gherkin language. It is written in plain English, so a non-technical person can follow the flow of the test scenario. In this framework we will automate the four basic API request methods, i.e. POST, PUT, GET and DELETE, using https://reqres.in/ as the application under test.
We can assign tags to the scenarios in the feature file to run particular test scenarios based on the requirement. The key point to note here is that the feature file must end with the .feature extension. We will create four different scenarios for the four API methods.
Feature: User API
  Verify the GET PUT POST DELETE methods of User API

  @api
  Scenario: Verify GET call for single user
    When User sends "GET" call to endpoint "api/users/2"
    Then User verifies the status code is "200"
    And User verifies GET response contains following information
      | First_name | Last_name | Mail-id                |
      | Janet      | Weaver    | janet.weaver@reqres.in |

  @api
  Scenario: Verify POST call for single user
    When User sends "POST" call to endpoint "api/users"
      | Name   | Job  |
      | Yogesh | SDET |
    Then User verifies the status code is "201"
    And User verifies POST response body contains following information
      | Name   | Job  |
      | Yogesh | SDET |

  @api
  Scenario: Verify PUT call for single user
    When User sends "PUT" call to endpoint "api/users/2"
      | Name   | Job  |
      | Yogesh | SDET |
    Then User verifies the status code is "200"
    And User verifies PUT response body contains following information
      | Name   | Job  |
      | Yogesh | SDET |

  @api
  Scenario: Verify DELETE call for single user
    When User sends DELETE call to the endpoint "api/users/2"
    Then User verifies the status code is "200"
Step4: Creating a Step File
Unlike the web automation framework we built in the previous blog, we will create a single step file for all the feature files. In a BDD framework, step files map and implement the steps described in the feature file. Python's behave library matches each feature file step to its step definition, so we write the step text in the step file exactly as it appears in the feature file; that way behave knows which implementation belongs to which step.
from behave import *
from Utility.API_Utility import API_Utility

api_util = API_Utility()


@when('User sends "{method}" call to endpoint "{endpoint}"')
def step_impl(context, method, endpoint):
    global response
    response = api_util.Method_Call(context.table, method, endpoint)


@then('User verifies the status code is "{status_code}"')
def step_impl(context, status_code):
    actual_status_code = response.status_code
    assert actual_status_code == int(status_code)


@step("User verifies GET response contains following information")
def step_impl(context):
    api_util.Verify_GET(context.table)
    response_body = response.json()
    assert response_body['data']['first_name'] == context.table[0][0]
    assert response_body['data']['last_name'] == context.table[0][1]
    assert response_body['data']['email'] == context.table[0][2]


@step("User verifies POST response body contains following information")
def step_impl(context):
    api_util.Verify_POST(context.table)
    response_body = response.json()
    assert response_body['name'] == context.table[0][0]
    assert response_body['job'] == context.table[0][1]


@step("User verifies PUT response body contains following information")
def step_impl(context):
    api_util.Verify_PUT(context.table)
    response_body = response.json()
    assert response_body['Name'] == context.table[0][0]
    assert response_body['Job'] == context.table[0][1]


@when('User sends DELETE call to the endpoint "{endpoint}"')
def step_impl(context, endpoint):
    # Store the response so the status code step can verify it
    global response
    response = api_util.Delete_Call(endpoint)
Step5: Creating Utility File
So far we have created a feature file and a step file; in this step we will create a utility file. In web automation we normally have page files containing locators and actions for web elements, but in this framework we create a single utility file, just like the single step file. The utility file contains the API methods and endpoints used to perform a specific action such as POST, PUT, GET, or DELETE. The request body (payload) and the response are handled by the methods in the utility file. These methods live in the utility file so that we can reuse them and do not have to write the same code over and over again.
import json

import requests


class API_Utility:
    data = json.load(open("Resources/config.json"))
    api_url = data["APIURL"]
    global response

    def Method_Call(self, table, method, endpoint):
        if method == 'GET':
            uri = self.api_url + endpoint
            response = requests.request("GET", uri)
            return response
        if method == 'POST':
            uri = self.api_url + endpoint
            payload = {
                "name": table[0][0],
                "job": table[0][1]
            }
            response = requests.request("POST", uri, data=payload)
            return response
        if method == 'PUT':
            uri = self.api_url + endpoint
            reqbody = {
                "Name": table[0][0],
                "Job": table[0][1]
            }
            response = requests.request("PUT", uri, data=reqbody)
            return response

    def Get_Status_Code(self):
        status_code = response.status_code
        return status_code

    def Verify_GET(self, table):
        for row in table:
            first_name = row['First_name']
            last_name = row['Last_name']
            email = row['Mail-id']
            return first_name, last_name, email

    def Verify_POST(self, table):
        for row in table:
            name = row['Name']
            job = row['Job']
            return name, job

    # The following method could be merged with Verify_POST, but it is kept separate for simplicity
    def Verify_PUT(self, table):
        for row in table:
            name = row['Name']
            job = row['Job']
            return name, job

    def Delete_Call(self, endpoint):
        uri = self.api_url + endpoint
        response = requests.request("DELETE", uri)
        return response
Step6: Create a Config file
A good tester knows the use and importance of config files, and we will use one in this framework too. Here we simply put the base URL in the config file and reuse it in the utility file. A config file can hold much more than just the base URL: as you explore the framework and start automating new endpoints, you will find other data that belongs in it.
Additionally, the purpose of the config files is to make tests more maintainable and reusable. Another benefit of a config file is that it makes the code more modular and easier to understand as all the configuration settings are stored in a separate file and it makes it easier to update the configuration settings for all the tests at once.
"APIURL": "https://reqres.in/"
Step7: Execute and Generate Allure Report
We use Allure as the reporting medium because it provides detailed information about the test execution and results, including test status, test steps, duration, and screenshots of the run. The report is generated in HTML (web) format, making it easy to understand and to share with team members and clients. It also provides a user-friendly dashboard with interactive charts and graphs for detailed analysis of the test results.
Let's see how to execute the API tests and generate an Allure report for the automated API calls. To generate the report we execute the tests from the terminal or command line, using the following two commands in sequence.
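The first command runs the feature files through the allure-behave formatter; it is the same invocation used in the CI workflow earlier in this series:
behave Features -f allure_behave.formatter:AllureFormatter -o Report_Json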
The purpose of the above command is to execute the test present in the mentioned feature file and generate a JSON report folder.
allure generate Report_Json -o Report_Html --clean
This command generates an HTML report from the JSON results, so the report is easy to understand and can be shared with team members or clients.
Please find the attached GitHub repository link. I have uploaded the same project to this repository and also attached a Readme.md file which explains the framework and the different commands we have used so far in this project.
Before creating a framework it is very important to understand the underlying concepts, and I hope I have answered the common questions about APIs. In conclusion, creating a BDD API testing framework using Python and Behave is an easy process once you know how to proceed. By following the steps outlined in this blog, you can create a powerful and flexible framework that helps you define and execute test cases, generate detailed reports with Allure, and integrate with other testing tools and systems. Again, I suggest checking out the previous blog here, as it will clear up most doubts about automation testing frameworks and help you create your own.
To deliver good quality work, creating a robust software testing framework is an important task. Every tester has their own approach to building a testing framework, but the most important thing is to create it in such a way that other testers with minimal knowledge of automation testing can easily use it. There are some key points to consider while creating a framework; you will find them listed below.
A good tester is one who has the ability to create a good testing framework. In this blog, I explain how to create an automation testing framework; even a beginner with minimal knowledge of automation testing can use this approach to build their own. There is plenty more you could add to the framework explained here, so feel free to share suggestions in the comments.
When I started my journey as an SDET, creating a framework was the first task assigned to me in training, so I understand how important it is to build your own. In this blog we will go through the guidelines that help us create a testing framework.
Before we jump into the main topic of our discussion let’s just quickly see the steps we will be following while creating our own framework.
Key Considerations When Creating an Automation Testing Framework:
Understanding the Requirements
Selecting a Testing Framework
Designing Test Cases
Implementing Test Cases
Executing Tests
Maintaining and Improving the Framework
Among the various frameworks available, one of the most popular combinations for automation testing is Python's behave library together with Selenium. In this blog, we are going to explore how to build and use this framework for our automation testing.
Selenium is an open-source tool and, alongside Playwright and Cypress, one of the most widely used tools for web automation testing. Behave is a Python library used for BDD (Behavior Driven Development). Let's quickly explore the different frameworks available for automation testing.
A software test automation framework is designed to make testing more efficient and easier to use. Every framework has its own advantages and disadvantages, so it is important to choose the right one for the given requirements. Below are some of the most commonly used and popular automation frameworks.
Types of Test Automation Frameworks:
Linear Scripting Framework
Modular Testing Framework
Data-Driven Framework
Keyword-Driven Framework
Hybrid Framework
Behavior Driven Development Framework
Test Driven Development Framework
In this blog we will build a BDD framework using Python's behave library and Selenium. In BDD we use natural language, structured with Gherkin, to describe test scenarios as a series of steps. These scenarios live in feature files, and because they use natural language, the behavior of the application is understandable by everyone. So the key components to consider when creating a BDD framework are the feature files and the step files.
As described earlier, a feature file is written in natural language using Gherkin and follows a set format, while a step file implements the steps present in the feature file. A step file is a Python file containing a set of functions that correspond to the steps described in the feature file. Once the feature and step files are ready, behave automatically matches each step in the feature file with its implementation in the step file and reports any assertion errors.
We can also install all the required packages from the requirement.txt file using the command below.
pip install -r requirement.txt
Framework Structure Overview:
Here is the overview of our python selenium behave BDD framework.
To start, we are going to build a simple framework using one scenario outline. In the next blog we will see how to create an API testing framework using Python. Please read this blog carefully, as I am explaining all the points in plain language. Without wasting any time, let's dive into the main topic of our discussion: how to create a Python Selenium behave BDD automation testing framework.
For this, we will follow some guidelines which I have described as steps.
Step 1:
Create a project in PyCharm (here I am using PyCharm Professional) and, as mentioned in the prerequisites, install the packages.
It is not compulsory to use PyCharm Professional; the Community edition works as well.
Step 2:
In this step we create a Features folder that will hold our feature files for different scenarios. A feature file holds your test cases in the form of scenarios and scenario outlines; in this framework we use a scenario outline. Both scenarios and scenario outlines contain steps that are easy for non-technical people to understand. We can also assign tags to feature files and to the scenarios inside them. Note that the feature file must end with the .feature extension.
Feature: Create test cases using Selenium with Python to automate below BMI calculator tests

  # We are using Scenario Outline in this feature as we can add multiple input data using examples.
  Scenario Outline: Calculating BMI value by passing multiple inputs
    Given I enter the "<Age>"
    When I Click on "<Gender>"
    And I Enter a "<Height>"
    And I Enter the "<Weight>"
    And I Click on Calculate btn
    And I Verify Result with "<Expected Result>"

    Examples:
      | Age | Gender | Height | Weight | Expected Result  |
      | 20  | Male   | 180    | 60     | BMI = 18.5 kg/m2 |
      | 35  | Female | 160    | 55     | BMI = 21.5 kg/m2 |
      | 50  | Male   | 175    | 65     | BMI = 21.2 kg/m2 |
      | 45  | Female | 150    | 52     | BMI = 23.1 kg/m2 |
Step 3:
Now that we have our feature file, let's create a step file to implement the steps it describes. As a convention, we add "step" to the file name so it is easy to see which feature file it belongs to. Both feature files and step files are essential parts of a BDD framework. We have to be careful when describing the steps in the feature file because we must use exactly the same wording in the step file so that behave can map each step to its implementation.
from behave import *


# The step file contains the implementation of the steps that we have described in the feature file.

@given('I enter the "{Age}"')
def step_impl(context, Age):
    context.bmipage.age_input(Age)


@when('I Click on "{Gender}"')
def step_impl(context, Gender):
    context.bmipage.gender_radio(Gender)


@step('I Enter a "{height}"')
def step_impl(context, height):
    context.bmipage.height_input(height)


@step('I Enter the "{weight}"')
def step_impl(context, weight):
    context.bmipage.weight_input(weight)


@step("I Click on Calculate btn")
def step_impl(context):
    context.bmipage.calculatebtn_click()


@step('I Verify Result with "{expresult}"')
def step_impl(context, expresult):
    context.bmipage.result_validation(expresult)
Step 4:
In step 4 we create a page file that contains all the locators and the action methods that act on the web elements. We declare all the locators at class level and use them in the respective methods. This is good practice: when a locator changes, it is effortless to replace it in one place without going through the whole code again.
from selenium.webdriver.common.by import By
import time
from Features.Pages.BasePage import BasePage


# The page contains all the locators and the actions to perform on that web element.
# In this page file we have declared all the locators at the class level and we are using them in the respective methods.
class BmiPage(BasePage):
    def __init__(self, context):
        BasePage.__init__(self, context.driver)
        self.context = context
        self.age_xpath = "//input[@id='cage']"
        self.height_xpath = "//input[@id='cheightmeter']"
        self.weight_xpath = "//input[@id='ckg']"
        self.calculatebtn_xpath = "//input[@value='Calculate']"
        self.actual_result_xpath = "//body[1]/div[3]/div[1]/div[4]/div[1]/b[1]"

    def age_input(self, Age):
        AgeInput = self.driver.find_element(By.XPATH, self.age_xpath)
        AgeInput.clear()
        AgeInput.send_keys(Age)
        time.sleep(2)

    def gender_radio(self, Gender):
        # The gender locator is built dynamically from the value passed in by the feature file
        SelectGender = self.driver.find_element(By.XPATH, "//label[normalize-space()='" + Gender + "']")
        SelectGender.click()
        time.sleep(2)

    def height_input(self, height):
        HeightInput = self.driver.find_element(By.XPATH, self.height_xpath)
        HeightInput.clear()
        HeightInput.send_keys(height)
        time.sleep(3)

    def weight_input(self, weight):
        WeightInput = self.driver.find_element(By.XPATH, self.weight_xpath)
        WeightInput.clear()
        WeightInput.send_keys(weight)
        time.sleep(3)

    def calculatebtn_click(self):
        Calculatebtn = self.driver.find_element(By.XPATH, self.calculatebtn_xpath)
        Calculatebtn.click()
        time.sleep(3)

    def result_validation(self, expresult):
        try:
            Result = self.driver.find_element(By.XPATH, self.actual_result_xpath)
            Actualresult = Result.text
            Expectedresult = expresult
            assert Actualresult == Expectedresult
            time.sleep(5)
        except:
            self.driver.close()
            assert False, "Expected Result mismatched"
Next is the base page file. We create a base page to hold the driver object so that we can easily use it from our page files and the environment file.
from selenium.webdriver.support.wait import WebDriverWait


# In the base page we are creating an object of driver.
# We are using this driver in the other pages and the environment page.
class BasePage(object):
    def __init__(self, driver):
        self.driver = driver
        self.wait = WebDriverWait(self.driver, 30)
        self.implicit_wait = 25
Step 5:
This step is very important because we create the environment file (i.e. the hooks file). This file contains before-scenario and after-scenario hooks to start and close the browser. You can also add after-step hooks; here we add one to capture a screenshot after every step and attach it to the Allure report.
import json
import time

from allure_commons._allure import attach
from allure_commons.types import AttachmentType
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager

from Pages.BasePage import BasePage
from Pages.BmiPage import BmiPage

data = json.load(open("Resources/config.json"))


# This environment file is used as the hooks file. Here we use before- and after-scenario hooks
# along with an after-step hook.

def before_scenario(context, scenario):
    context.driver = webdriver.Chrome(ChromeDriverManager().install())
    time.sleep(5)
    basepage = BasePage(context.driver)
    context.bmipage = BmiPage(basepage)
    context.stepid = 1
    context.driver.get(data['BMIWEBURL'])
    context.driver.maximize_window()
    context.driver.implicitly_wait(3)


def after_step(context, step):
    attach(context.driver.get_screenshot_as_png(), name=context.stepid, attachment_type=AttachmentType.PNG)
    context.stepid = context.stepid + 1


def after_scenario(context, scenario):
    context.driver.close()
Step 6:
It is good practice to store all common data and files in a resources folder, so whenever we need to make a change it is easy to apply it across the whole framework. For now we add a config.json file to the resources folder. This file contains the web URL that the before_scenario hook uses to launch the web page. The file is written in JSON format.
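The file itself is not reproduced here; below is a minimal sketch using the BMIWEBURL key read by the before_scenario hook above. The URL shown is only an illustrative BMI calculator page, not necessarily the exact one used by the author.
{
  "BMIWEBURL": "https://www.calculator.net/bmi-calculator.html"
}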
Congratulations, we have created our own Python Selenium Behave BDD framework. As mentioned earlier, we will use Allure for reporting the test results. Run the command below in the terminal and it will generate the results folder for you.
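The exact command is not reproduced above; assuming the same Allure setup as the API framework, it would look like this (the folder names are illustrative):
behave Features -f allure_behave.formatter:AllureFormatter -o Report_Json
The JSON results can then be turned into an HTML report with allure generate Report_Json -o Report_Html --clean, exactly as in the API framework.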
Creating a testing framework is important and can feel like a tedious task, but with the right guidelines anyone can do it. I hope this blog has answered your questions about the Python Selenium behave automation testing framework. We chose a BDD framework over other frameworks because it is easier to understand, easier to adapt, and easier for end users to read. If you still have any issues with what we covered, feel free to leave a comment and we will solve them together. There is a lot more we can add to this framework, but as a starting point it covers most requirements.
For any web automation testing, the single most important task is to identify and use robust locators so that your automated tests do not fail with "Unable to locate element". In this article we cover the techniques every tester should learn to create such robust locators. As we already know, this can be done using different locator strategies; in this blog we focus on XPath. Before we dive in, let's get more familiar with XPath. Let's start with,
What is XPath?
XPath (XML Path Language) is an expression language for processing values that conform to the data model defined in the XQuery and XPath Data Model. In practice, it is a query language we use to locate an element on a webpage. It is defined by the World Wide Web Consortium (W3C). Now let's discuss why XPath is necessary.
Why is XPath necessary?
XPath is the most widely used locator strategy in automation, even though other locators such as id, name, class name, and tag name exist. It is especially useful when there are no unique attributes available to locate a web element, and it even allows identification through the visible text on the screen via the XPath text() function.
Besides XPath, Selenium offers other locator strategies such as id, name, class name, tag name, link text, partial link text, and CSS selector; XPath is the one we focus on here.
In this blog we will learn about the different types of XPath and how to use them so that we can locate web elements quickly with Selenium WebDriver. Basically, there are two types of XPath:
1. Absolute XPath:
An absolute XPath starts from the root node of the HTML DOM structure. It is a direct way to locate a web element, but its disadvantage is that if anything changes along the path from the root to the element, the locator breaks. This type of locator uses only tags or nodes. Its main characteristic is that it selects the element starting from the root node and begins with a single forward slash "/".
Example:
Here is an example of an absolute Xpath for an input field box.
The absolute XPath is: /html[1]/body[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[2]/div[2]/form[1]/div[1]/div[1]/div[2]/input[1]
2. Relative Xpath:
Compared to an absolute XPath, a relative XPath does not start from the beginning of the HTML DOM structure; it can start from wherever the element is present, for example from the middle of the DOM, so we don't have to traverse from the root. A relative XPath starts with a double forward slash "//" and can locate the web element anywhere on the webpage, jumping directly to the element in the DOM. Another difference is that an absolute XPath relies purely on the chain of tags, while a relative XPath typically combines a tag with attributes.
Example:
We are writing the relative XPath for the same input field for which earlier we created an absolute XPath.
Relative XPath is:
//input[@name='username']
XPath Functions:
It is not always possible to locate a web element with a plain relative XPath, because sometimes several elements share similar properties, for example the same id, name, or class name. In such cases a basic XPath is not enough, so we use XPath functions to build an expression that matches a unique value. There are three commonly used XPath functions:
a. starts-with() Function:
The starts-with() function is very useful for locating dynamic web elements. It finds elements whose attribute value starts with particular characters or text.
When working with dynamic web pages, starts-with() plays an important role: we match the starting part of an attribute value that stays static even when the rest changes.
It can also locate web elements whose attribute value is fully static.
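The syntax and example are not shown above for this function; following the same pattern used for the other functions below, they look like this (the attribute and value are placeholders):
Syntax:
Xpath = //tagname[starts-with(@attribute,'value')]
Example:
//input[starts-with(@id,'user')]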
b. contains() Function:
Just like the starts-with() function explained above, the contains() function is used to create a unique expression to locate a web element.
It is used when part of an attribute value changes dynamically; the function can still navigate to the web element using the partial text that remains.
We can provide any partial attribute value to locate the web element.
It accepts two parameters: the first is the attribute of the tag to validate when locating the web element, and the second is the partial value that the attribute must contain.
Syntax:
Xpath = //tagname[contains(@attribute,'value')]
Example:
//input[contains(@name,'username')]
c. text() Function:
The text() function is used to locate web elements with exact text matches.
The function only works if the element contains the text.
This function returns the text of the web element identified by the tag name and compares it with the value provided on the right-hand side.
Syntax:
Xpath = //tagname[text()='Actual text present']
Example:
//button[text()=' Login ']
How to use AND & OR in XPath:
AND & OR expressions can also be used in selenium Xpath expressions. Very useful if you want to use more than two attributes to find elements on a webpage.
The OR expression takes two conditions. If the first condition in the statement is true, the element is located; if not, the second condition is checked, and if that one is true the element is located. The point to remember is that with OR, at least one of the two conditions must be true for the web element to be found.
Syntax:
Xpath = //tagname[@attribute='Value' or @attribute='Value']
Example:
//input[@name='username' or @placeholder='xyz']
Here the first condition is true and the second one is false, yet the web element is still located.
Just like OR, the AND expression also takes two conditions, but the catch is that both conditions must be true for the web element to be located. If either condition is false, the element will not be found.
Syntax:
Xpath = //tagname[@attribute='Value' and @attribute='Value']
Example:
//input[@name='username' and @placeholder='Username']
In this case both conditions provided to the AND expression are true, hence the web element is located.
XPath Axis:
XPath axes are a way to identify dynamic elements that cannot be found with the normal XPath methods. All elements sit in a hierarchical structure and can be located with absolute or relative XPath, but XPath also provides special keywords, called axes, to build unique expressions relative to the current node. An axis expresses a relationship to the current node and helps locate nodes relative to it in the DOM tree. Dynamic elements are elements whose attributes change on refresh or other operations. The HTML DOM consists of element nodes forming a tree; an element that contains content, whether other elements or text, is declared with a start tag and an end tag, and the text between them is the element content.
Types of XPath Axis:
1. Parent Axis XPath:
With the help of the parent axis, we select the parent of the current node. The parent can be either the root node or an element node. Note that every element node has at most one parent, and since the root node of the HTML DOM has no parent, the parent axis is empty when the current node is the root.
2. Child Axis XPath:
With the parent axis we built the XPath bottom-up; with the child axis we follow a top-down approach. The child axis selects all the child elements present under the current node, so we can easily locate a web element as a child of the current node.
3. Self Axis XPath:
This axis uses the current node itself and selects the web element belonging to that node. You will always observe only one node, which represents the element itself; the tag name at the start and at the end of the XPath is the same, since it is the self axis of the current node. This is useful to confirm an element is present when more than one element has the same value and attributes.
4. Descendant-or-Self Axis XPath:
Using this axis we select the current node and all of its descendants, i.e. children, grandchildren, and so on, just like the descendant axis. Note that the tag names for descendant and self are the same.
5. Ancestor Axis XPath:
The ancestor axis works exactly opposite to the descendant axis: it selects all ancestor elements of the current node, i.e. parent, grandparent, and so on, up to and including the root node.
6. Following-Sibling Axis XPath:
Using the following-sibling axis we select all the nodes that have the same parent as the current node and appear after it.
7. Preceding-Sibling Axis XPath:
Using the preceding-sibling axis we select all the nodes that have the same parent as the current node and appear before it. It works opposite to the following-sibling axis.
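For reference, here is the general shape of each axis expression; the tag names and attribute values are placeholders rather than locators for a specific page:
//input[@id='username']/parent::*
//form[@id='loginForm']/child::input
//input[@id='username']/self::input
//form[@id='loginForm']/descendant-or-self::input
//input[@id='username']/ancestor::form
//label[@for='username']/following-sibling::input
//input[@id='password']/preceding-sibling::label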
You can try all of these examples mentioned above with the Orange HRM Demo website here.
Conclusion:
In conclusion, XPath is an essential tool for web automation testing when using Selenium, Playwright, and Cypress. It allows for more flexibility and specificity in locating elements on a web page. Understanding the different types of XPath expressions and how to use them can greatly improve the efficiency and effectiveness of the automation testing process. It can be particularly useful in situations where elements do not have unique CSS selectors, or when the structure of the HTML changes frequently. With the knowledge of XPath, you can write more robust and stable automation tests.
This article shows how to download a file into a specific folder using Python and Selenium. Handling files can be a tedious task at times, especially when you have test scenarios like downloading a file, verifying that it was downloaded, and then deleting the downloaded file.
Despite visiting many websites and reading many articles, I was not able to find the right solution, so here I am providing all the solutions in one place; visiting multiple web pages to find a single answer is tiring. We use the Python and Selenium combination to download a file into a folder, but you can use whichever language you like, for example Java, JavaScript, or C#. After reading this article you will know how to handle this type of scenario, and we will solve the problem together, so just follow the steps described.
Traditional Approach:
When you download a file from a website it normally lands in the Downloads folder on your local system. Here, instead, we create a Download folder inside our framework and download the file into that newly created folder.
By this point I assume you have understood the test scenario: we will also pass the file name in order to delete that particular file and to verify whether it was downloaded.
You need to import some Python and Selenium packages; they are listed below.
To change the download folder path from the local system default to our framework folder, we need to add a small script that sets the new download folder as the default location for files downloaded from the webpage.
Step1:
Import the following packages.
from selenium import webdriver
import os
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
After adding the above imports, we change the download path; the script below shows how.
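The original script is not reproduced here; below is a minimal sketch of how the default download directory can be changed through Chrome preferences. The Download folder name and its location inside the framework are assumptions.
# Hypothetical folder inside the framework where files should be downloaded
download_path = os.path.join(os.getcwd(), "Download")

chrome_options = webdriver.ChromeOptions()
# Standard Chrome preferences: download into our folder without showing a save dialog
chrome_options.add_experimental_option("prefs", {
    "download.default_directory": download_path,
    "download.prompt_for_download": False,
})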
Now that the download path points to our new folder, the next task is to set up the driver.
Step2:
Here I have used webdriver-manager; you can use chromedriver directly and provide its path if you prefer. However, I suggest using webdriver-manager, as it is good practice: it downloads the matching chromedriver version automatically and saves you a lot of time.
I hope no one has any doubts up to this point; these steps are crucial, so if anything is unclear go through them again. Now you can launch your webpage URL.
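A sketch of the driver setup and page launch, reusing the imports from Step 1 and the chrome_options object built above; the URL is a placeholder, and the options keyword assumes a Selenium version that still accepts the executable path positionally (as in the snippet used for the web framework earlier in this series).
driver = webdriver.Chrome(ChromeDriverManager().install(), options=chrome_options)
# Placeholder URL of the page that serves the file to download
driver.get("https://example.com/downloads")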
Step3:
Here, write your own script to locate the web element and click on it; for example, refer to the following script.
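For example (the locator here is purely illustrative, not taken from a specific page):
# Hypothetical locator for the link or button that triggers the download
download_link = driver.find_element(By.XPATH, "//a[text()='Download']")
download_link.click()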
After clicking, the file will get downloaded in the new download folder that we have created in our framework.
Step4:
The next step is to check whether the file is present in the newly created download folder. To achieve this, go through the following script.
def download_file_verify(self, filename):
    # Folder inside the framework where files are downloaded (example path)
    dir_path = "G:/Python/Download/"
    res = os.listdir(dir_path)
    try:
        # The download folder is expected to contain only the newly downloaded file
        name = os.path.isfile(dir_path + res[0])
        if res[0].__contains__(filename):
            print("file downloaded successfully")
    except Exception:
        # Raised when the folder is empty or the file cannot be checked
        print("file is not downloaded")
        name = False
    return name
Here, you pass the name of your downloaded file as the filename argument, which avoids hard-coding it in the script (hard-coding is not good practice).
Explanation:
For instance, if the name of my downloaded file is extent report, then the value of the filename argument becomes extent report.
First, the method goes to the directory path we provided and stores all the file names already present in that folder as a list.
We store that list in the res variable and then check whether our desired file is present in the folder.
Note that the code checks the entry at index 0 of the listing (res[0]); this works because the download folder is expected to hold only the newly downloaded file, since os.listdir does not guarantee any ordering by download time.
It then checks whether that file name contains the name we provided. If it does, it prints "file downloaded successfully"; if an error occurs while checking, the except block prints "file is not downloaded".
You can then assert on the returned value in your test to verify whether the file was downloaded; I suggest doing so, as it is good practice, and you will get comfortable with assertions while handling files.
Congratulations, we are done with the first part. We have successfully downloaded the file in the newly created download folder. We have also verified whether the file is downloaded or not.
Step5:
The next task is to delete the downloaded file by passing the name of the file. So, let’s get started then.
Script to delete the file from the download folder by passing the name of the file.
def delete_previous_file(self, filename):
    try:
        d_path = "G:/Python/Download/"
        file_list = os.listdir(d_path)
        for file in file_list:
            print("present file is: " + file)
            path = d_path + file
            if file.__contains__(filename):
                os.remove(path)
                print("Present file is deleted")
    except Exception:
        # If the folder is empty or missing, simply do nothing
        pass
Explanation:
Here we do not delete only the file at index 0: we delete every file in the download folder whose name matches, so that when a new file is downloaded it is the only one present.
The code above first goes to the directory path and stores all the file names as a list. It then iterates over the list and, whenever a matching file is found, deletes it.
We use a try/except block so that if no file is present, the code does not raise an exception or fail.
Congratulations, we have now successfully completed file handling with Selenium and Python.
If you have any queries, drop them in the comments and we will solve them together, just like we solved this one. If you have suggestions, let me know and I will cover them in the next article. Also, don't forget to share the article with your friends, follow our pages on LinkedIn, Instagram, and Facebook, and subscribe to our blog so you know whenever we post new content.
In my opinion, validating that a file downloads to a particular location is an easy process, provided you have the right solution for reference. I am confident this article has provided that solution for your file-downloading problems and validations.