This blog explores how we can use AI capabilities to automate test case generation for web applications and APIs. Before diving in, let's clarify what a test case is: a test case is a set of steps or conditions used by a tester or developer to verify and validate that a software application meets customer and business requirements. With that definition in place, let's look at why we create test cases in the first place.
What is the need for test case creation?
- To ensure quality: Test cases help identify defects and ensure the software meets requirements.
- To improve efficiency: Well-structured test cases streamline the testing process.
- To facilitate regression testing: You can reuse test cases to verify that new changes haven’t introduced defects.
- To improve communication: Test cases serve as a common language between developers and testers.
- To measure test coverage: Test cases help assess the extent to which the software has been tested.
Manual test case creation, however, comes with limitations and challenges that impact the efficiency and effectiveness of the testing process:
What are the limitations of manual test case generation?
- Time-Consuming: Manual test case writing is a slow process, as each test case requires detailed planning and documentation to ensure coverage of the requirements and expected outputs.
- Resource Intensive: Creating manual test cases requires significant resources and skilled personnel. Testers must thoroughly understand the application and its related requirements to write effective test cases. This process demands a substantial allocation of human resources, which could be better utilized in other critical areas.
- Human Error: Any task that relies on human interaction is prone to error, and manual test case creation is no exception. Mistakes can occur in documenting the steps and expected results, or even in understanding the requirements, which can result in inaccurate test cases and, in turn, undetected bugs and defects.
- Expertise Dependency: Creating high-quality test cases that cover all the requirements and result in high test coverage requires a certain level of expertise and domain knowledge. This creates a bottleneck, especially if those experienced individuals are unavailable or if there is a high turnover rate.
These are just some of the challenges; there could be more, and you are welcome to share others in the comment section. Now that we understand why we create test cases, the value they add to testing, and the limitations of manual test case generation, let's see what the benefits of automating this process are.
Benefits of automated test case generation:
- Efficiency and Speed: Automated test case generation significantly improves the efficiency and speed of test case writing. As tools and algorithms drive the process instead of manual efforts, it creates test cases faster and quickly updates them whenever there are changes in the application, ensuring that testing keeps pace with development.
- Increased Test Coverage: Automated test case generation eliminates or reduces the chances of compromising the test coverage. This process generates a wide range of test cases, including those that manual testing might overlook. By covering various scenarios, such as edge cases, it ensures thorough testing.
- Accuracy and Consistency: Automating test case generation ensures accurate and consistent creation of test cases every time. This consistency is crucial for maintaining the integrity of the testing process and applying the same standards across all test cases.
- Improved Collaboration: By standardizing the test case generation process, automated test case generation promotes improved collaboration among cross-functional teams. It ensures that all team members, including developers, testers, and business analysts, are on the same page.
Again, these are just a few of the advantages. You can share more in the comment section, along with any limitations of automated test case generation you have encountered.
Before we move ahead, it is essential to understand what AI is and how it works. This understanding will help us design and build our algorithms and tools to get the desired output.
What is AI?
AI (Artificial Intelligence) simulates human intelligence in machines, programming them to think, learn, and make decisions. AI systems mimic cognitive functions such as learning, reasoning, problem-solving, perception, and language understanding.
How does AI work?
AI applications work based on a combination of algorithms, computational models, and large datasets. We divide this process into several steps as follows.
1. Data Collection and Preparation:
- Data Collection: AI systems require vast amounts of data to learn from. You can collect this data from various sources such as sensors, databases, and user interactions.
- Data Preparation: We clean, organize, and format the collected data to make it suitable for training AI models. This step often involves removing errors, handling missing values, and normalizing the data.
2. Algorithm Selection:
- Machine Learning (ML): Algorithms learn from data and improve over time without explicit programming. Examples include decision trees, support vector machines, and neural networks.
- Deep Learning: A subset of machine learning that uses neural networks with many layers (deep neural networks) to analyze complex patterns in data. It is particularly effective for tasks such as image and speech recognition.
3. Model Training:
- Training: During training, the AI model learns to make predictions or decisions by analyzing the training data. The model adjusts its parameters to minimize errors and improve accuracy.
- Validation: We test the model on a separate validation dataset to evaluate its performance and fine-tune its parameters.
4. Model Deployment:
Once the team trains and validates the AI model, they deploy it to perform its intended tasks in a real-world environment. This could involve making predictions, classifying data, or automating processes.
5. Inference and Decision-Making:
Inference is the process of using the trained AI model to make decisions or predictions based on new, unseen data. The AI system applies the learned patterns and knowledge to provide outputs or take actions.
6. Feedback and Iteration:
AI systems continuously improve through feedback loops. By analyzing the outcomes of their decisions and learning from new data, AI models can refine their performance over time. This iterative process helps in adapting to changing environments and evolving requirements.
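The training, validation, and inference steps above can be illustrated with a deliberately tiny sketch: a one-parameter model (y ~ w * x) fit by gradient descent. This is purely didactic; the data, learning rate, and epoch count are all made up for the demo and bear no resemblance to the scale of real AI systems.

```javascript
// Toy illustration of the train / validate / infer cycle: a
// one-parameter model (y ~ w * x) fit by gradient descent.
function train(data, epochs = 200, learningRate = 0.01) {
  let w = 0; // the single model parameter
  for (let epoch = 0; epoch < epochs; epoch++) {
    for (const { x, y } of data) {
      const error = w * x - y;        // prediction error on this sample
      w -= learningRate * error * x;  // nudge w to reduce the error
    }
  }
  return w;
}

function validate(w, data) {
  // Mean squared error on held-out data
  const squaredErrors = data.map(({ x, y }) => (w * x - y) ** 2);
  return squaredErrors.reduce((a, b) => a + b, 0) / squaredErrors.length;
}

// Training: learn from data generated by y = 2x
const trainingData = [{ x: 1, y: 2 }, { x: 2, y: 4 }, { x: 3, y: 6 }];
const w = train(trainingData);

// Validation / inference: evaluate predictions on unseen data
const mse = validate(w, [{ x: 4, y: 8 }, { x: 5, y: 10 }]);
console.log(`learned w = ${w.toFixed(3)}, validation MSE = ${mse.toFixed(6)}`);
```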
Note: We are using OpenAI to automate the test case generation process. For this, you need to create an API key in your OpenAI account. Check the OpenAI API page for more details.
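Rather than hardcoding the key in source files, a common practice is to read it from an environment variable. Here is a small hypothetical helper (the variable name OPENAI_API_KEY is a conventional choice, not something the scripts below require):

```javascript
// Hypothetical helper: read the OpenAI API key from an environment
// variable so the key never ends up committed to source control.
function getApiKey() {
  const key = process.env.OPENAI_API_KEY;
  if (!key) {
    throw new Error('Set the OPENAI_API_KEY environment variable first');
  }
  return key;
}

module.exports = getApiKey;
```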
Automated Test Case Generation for Web:
Prerequisite:
- OpenAI account and API key
- Node.js installed on the system
Approach:
For web test case generation using AI, the approach I have followed is to scan the DOM structure of the web page, analyze the tags and attributes present, and then use this as input data to generate the test cases.
Step 1: Web Scraping
Web scraping provides us with the DOM structure of the web page. We will store this information and then pass it to the next step, which is analyzing the stored DOM structure.
const puppeteer = require('puppeteer');

async function scrapeWebPage(url) {
  // Launch a headless browser and open the target page
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto(url);
  // Collect the tag name and raw outerHTML of every element on the page
  const elements = await page.evaluate(() => {
    const tags = document.querySelectorAll('*');
    const structure = [];
    tags.forEach(tag => {
      const tagName = tag.tagName.toLowerCase();
      const attributes = tag.outerHTML;
      structure.push({ tagName, attributes });
    });
    return structure;
  });
  await browser.close();
  console.log('elements are:', JSON.stringify(elements, null, 2));
  return elements;
}

module.exports = scrapeWebPage;
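The scraper above returns every node in the DOM, which can be a very large structure. As an optional refinement (a sketch; the list of tags worth keeping is my assumption, not part of the original approach), you can filter it down to the interactive elements that usually drive test cases:

```javascript
// Keep only elements a user can interact with; other nodes rarely
// contribute useful test actions. The tag list is an assumption and
// can be extended as needed.
const INTERACTIVE_TAGS = new Set(['input', 'button', 'select', 'textarea', 'a', 'form']);

function filterInteractiveElements(structure) {
  return structure.filter(({ tagName }) => INTERACTIVE_TAGS.has(tagName));
}

module.exports = filterInteractiveElements;
```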
Code Explanation:
- Install the Puppeteer npm package using npm i puppeteer. We use Puppeteer to launch the browser and visit the web page.
- Next, we have an async function scrapeWebPage that takes the web URL. Once you pass the URL, it collects the tag names and attributes from the DOM content.
- The function stores these in a structure array and finally returns the web elements.
Step 2: Analyze elements
In this step, we analyze the elements obtained in the first step and, based on that, define what action to take on each element.
function analyzePageStructure(pageStructure) {
  const actions = [];
  pageStructure.forEach(element => {
    const { tagName, attributes } = element;
    // attributes holds the element's outerHTML string, so we can check it for type information
    if (tagName === 'input' && (attributes.includes('type="text"') || attributes.includes('type="password"'))) {
      actions.push(`Fill in the ${tagName} field`);
    } else if (tagName === 'button' && attributes.includes('type="submit"')) {
      actions.push('Click the submit button');
    }
  });
  console.log('Actions are:', actions);
  return actions;
}

module.exports = analyzePageStructure;
Code Explanation:
- Here the function analyzePageStructure takes pageStructure as a parameter, which is nothing but the elements we got using web scraping.
- We declare the actions array here to store all the actions that we define.
- In this particular code, I am only considering two types, i.e. text and submit, and two tag names, i.e. input and button.
- For type text and tag name input, I add an action to enter the data.
- For type submit and tag name button, I add an action to click.
- At last, this function will return the actions array.
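The same pattern extends naturally to other element types. Here is a hedged sketch (the extra tags and the action wording are my additions, not part of the original approach) covering checkboxes, links, and dropdowns:

```javascript
// Sketch of extending the analysis to checkboxes, links, and dropdowns.
// Like analyzePageStructure, it matches on the tag name and the raw
// outerHTML string captured during scraping.
function analyzeExtraElementTypes(pageStructure) {
  const actions = [];
  pageStructure.forEach(({ tagName, attributes }) => {
    if (tagName === 'input' && attributes.includes('type="checkbox"')) {
      actions.push('Toggle the checkbox');
    } else if (tagName === 'a' && attributes.includes('href=')) {
      actions.push('Click the link');
    } else if (tagName === 'select') {
      actions.push('Choose an option from the dropdown');
    }
  });
  return actions;
}

module.exports = analyzeExtraElementTypes;
```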
Step 3: Generate Test Cases
This is the last step of this approach. By now we have both the elements and the actions, so we are ready to generate the test cases for the given web page.
const axios = require('axios');

async function generateBddTestCases(actions, apiKey) {
  const prompt = `
Generate BDD test cases using Gherkin syntax for the following login page actions: ${actions.join(', ')}. Include test cases for:
1. Functional Testing: Verify each function of the software application.
2. Boundary Testing: Test boundaries between partitions.
3. Equivalence Partitioning: Divide input data into valid and invalid partitions.
4. Error Guessing: Anticipate errors based on experience.
5. Performance Testing: Ensure the software performs well under expected workloads.
6. Security Testing: Identify vulnerabilities in the system.
`;
  const headers = {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${apiKey}`
  };
  // gpt-3.5-turbo is a chat model, so we call the chat completions
  // endpoint. Note: no "stop" sequence is set, because stopping at a
  // newline would cut the multi-line Gherkin output off after one line.
  const data = {
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: prompt }],
    max_tokens: 1000,
    n: 1,
  };
  try {
    const response = await axios.post('https://api.openai.com/v1/chat/completions', data, { headers });
    return response.data.choices[0].message.content.trim();
  } catch (error) {
    console.error('Error generating test cases:', error.response ? error.response.data : error.message);
    return null;
  }
}

module.exports = generateBddTestCases;
Code Explanation:
- The function generateBddTestCases takes two parameters: actions and apiKey (your OpenAI API key).
- We build a prompt from the actions and send it, together with the API key, to generate the test cases.
- The API used in the above code is provided by OpenAI.
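The model returns the test cases as a single block of text. If you want to store or review scenarios individually, a small hypothetical post-processing helper (not part of the original approach) can split the generated Gherkin into separate scenarios:

```javascript
// Hypothetical helper: split a block of generated Gherkin text into
// individual scenarios (each returned string starts at "Scenario:").
function splitScenarios(gherkinText) {
  return gherkinText
    .split(/\n(?=\s*Scenario:)/)      // break before each "Scenario:" line
    .map(block => block.trim())
    .filter(block => block.startsWith('Scenario:'));
}

module.exports = splitScenarios;
```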
Output:
Feature: Login functionality

  # Functional Testing
  Scenario: Successful login with valid credentials
    Given the user is on the login page
    When the user fills in the username field with "user123"
    And the user fills in the password field with "password123"
    And the user clicks the submit button
    Then the user should be redirected to the dashboard

  Scenario: Unsuccessful login with invalid credentials
    Given the user is on the login page
    When the user fills in the username field with "invalidUser"
    And the user fills in the password field with "invalidPass"
    And the user clicks the submit button
    Then an error message "Invalid username or password" should be displayed

  # Boundary Testing
  Scenario: Login with username at boundary length
    Given the user is on the login page
    When the user fills in the username field with "user12345678901234567890" # Assume max length is 20
    And the user fills in the password field with "password123"
    And the user clicks the submit button
    Then the user should be redirected to the dashboard

  Scenario: Login with password at boundary length
    Given the user is on the login page
    When the user fills in the username field with "user123"
    And the user fills in the password field with "password12345678901234567890" # Assume max length is 20
    And the user clicks the submit button
    Then the user should be redirected to the dashboard

  # Equivalence Partitioning
  Scenario: Login with invalid username partition
    Given the user is on the login page
    When the user fills in the username field with "invalidUser!"
    And the user fills in the password field with "password123"
    And the user clicks the submit button
    Then an error message "Invalid username format" should be displayed

  Scenario: Login with invalid password partition
    Given the user is on the login page
    When the user fills in the username field with "user123"
    And the user fills in the password field with "short"
    And the user clicks the submit button
    Then an error message "Password is too short" should be displayed

  # Error Guessing
  Scenario: Login with SQL injection attempt
    Given the user is on the login page
    When the user fills in the username field with "admin'--"
    And the user fills in the password field with "password123"
    And the user clicks the submit button
    Then an error message "Invalid username or password" should be displayed
    And no unauthorized access should be granted

  # Performance Testing
  Scenario: Login under load
    Given the system is under heavy load
    When the user fills in the username field with "user123"
    And the user fills in the password field with "password123"
    And the user clicks the submit button
    Then the login should succeed within acceptable response time

  # Security Testing
  Scenario: Login with XSS attack
    Given the user is on the login page
    When the user fills in the username field with "<script>alert('XSS')</script>"
    And the user fills in the password field with "password123"
    And the user clicks the submit button
    Then an error message "Invalid username format" should be displayed
    And no script should be executed
Automated Test Case Generation for API:
Approach:
To effectively achieve AI Test Case Generation for APIs, we start by passing the endpoint and the URI. Subsequently, we attach files containing the payload and the expected response. With these parameters in place, we can then leverage AI, specifically OpenAI, to generate the necessary test cases for the API.
Step 1: Store the payload and expected response JSON files in the resources folder
- We are going to use a POST API for this example, and POST APIs need a payload.
- The payload is passed through a JSON file stored in the resources folder.
- We also pass the expected response of this POST API so that we can create effective test cases.
- The expected response JSON file helps us create multiple test cases to ensure maximum test coverage.
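For the reqres.in login API used below, the contents of the two resource files could look like this (the values are taken from the sample output later in this section):

```javascript
// Example contents for the two resource files. payload.json holds the
// request body; expected_result.json holds the response we expect back.
const payload = {            // payload.json
  email: 'eve.holt@reqres.in',
  password: 'cityslicka'
};

const expectedResult = {     // expected_result.json
  token: 'QpwL5tke4Pnpja7X4'
};

module.exports = { payload, expectedResult };
```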
Step 2: Generate Test Cases
In this step, we will use the stored payload and expected response JSON files along with the API endpoint.
const fs = require('fs');
const axios = require('axios');

// Step 1: Read JSON files
const readJsonFile = (filePath) => {
  try {
    return JSON.parse(fs.readFileSync(filePath, 'utf8'));
  } catch (error) {
    console.error(`Error reading JSON file at ${filePath}:`, error);
    throw error;
  }
};

const payloadPath = 'path_of_payload.json';
const expectedResultPath = 'path_of_expected_result.json';
const payload = readJsonFile(payloadPath);
const expectedResult = readJsonFile(expectedResultPath);
console.log("Payload:", payload);
console.log("Expected Result:", expectedResult);

// Step 2: Generate BDD Test Cases
const apiKey = 'your_api_key';
const apiUrl = 'https://reqres.in';
const endpoint = '/api/login';
const callType = 'POST';

const generateApiTestCases = async (apiUrl, endpoint, callType, payload, expectedResult, retries = 3) => {
  const prompt = `
Generate BDD test cases using Gherkin syntax for the following API:
URL: ${apiUrl}${endpoint}
Call Type: ${callType}
Payload: ${JSON.stringify(payload)}
Expected Result: ${JSON.stringify(expectedResult)}
Include test cases for:
1. Functional Testing: Verify each function of the API.
2. Boundary Testing: Test boundaries for input values.
3. Equivalence Partitioning: Divide input data into valid and invalid partitions.
4. Error Guessing: Anticipate errors based on experience.
5. Performance Testing: Ensure the API performs well under expected workloads.
6. Security Testing: Identify vulnerabilities in the API.
`;
  try {
    // gpt-3.5-turbo is a chat model, so we use the chat completions
    // endpoint; no "stop" sequence, since stopping at a newline would
    // truncate the multi-line Gherkin output.
    const response = await axios.post('https://api.openai.com/v1/chat/completions', {
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
      max_tokens: 1000,
      n: 1,
    }, {
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json'
      }
    });
    const bddTestCases = response.data.choices[0].message.content.trim();
    // Check that bddTestCases is a valid string before writing to file
    if (typeof bddTestCases === 'string') {
      fs.writeFileSync('apiTestCases.txt', bddTestCases);
      console.log("BDD test cases written to apiTestCases.txt");
    } else {
      throw new Error('Invalid data received for BDD test cases');
    }
  } catch (error) {
    if (error.response && error.response.status === 429 && retries > 0) {
      console.log('Rate limit exceeded, retrying...');
      await new Promise(resolve => setTimeout(resolve, 2000)); // Wait 2 seconds before retrying
      return generateApiTestCases(apiUrl, endpoint, callType, payload, expectedResult, retries - 1);
    } else {
      console.error('Error generating test cases:', error.response ? error.response.data : error.message);
      throw error;
    }
  }
};

generateApiTestCases(apiUrl, endpoint, callType, payload, expectedResult)
  .catch(error => console.error('Error generating test cases:', error));
Code Explanation:
- First, we read the two JSON files from the resources folder, i.e. payload.json and expected_result.json.
- Next, set your API key and specify the API URL, endpoint, and callType.
- Write a prompt for generating the test cases.
- Use the same OpenAI API to generate the test cases.
Output:
Feature: Login API functionality

  # Functional Testing
  Scenario: Successful login with valid credentials
    Given the API endpoint is "https://reqres.in/api/login"
    When a POST request is made with payload:
      """
      {
        "email": "eve.holt@reqres.in",
        "password": "cityslicka"
      }
      """
    Then the response status should be 200
    And the response should be:
      """
      {
        "token": "QpwL5tke4Pnpja7X4"
      }
      """

  Scenario: Unsuccessful login with missing password
    Given the API endpoint is "https://reqres.in/api/login"
    When a POST request is made with payload:
      """
      {
        "email": "eve.holt@reqres.in"
      }
      """
    Then the response status should be 400
    And the response should be:
      """
      {
        "error": "Missing password"
      }
      """

  Scenario: Unsuccessful login with missing email
    Given the API endpoint is "https://reqres.in/api/login"
    When a POST request is made with payload:
      """
      {
        "password": "cityslicka"
      }
      """
    Then the response status should be 400
    And the response should be:
      """
      {
        "error": "Missing email"
      }
      """

  # Boundary Testing
  Scenario: Login with email at boundary length
    Given the API endpoint is "https://reqres.in/api/login"
    When a POST request is made with payload:
      """
      {
        "email": "eve.holt@reqres.in.this.is.a.very.long.email.address",
        "password": "cityslicka"
      }
      """
    Then the response status should be 200
    And the response should be:
      """
      {
        "token": "QpwL5tke4Pnpja7X4"
      }
      """

  Scenario: Login with password at boundary length
    Given the API endpoint is "https://reqres.in/api/login"
    When a POST request is made with payload:
      """
      {
        "email": "eve.holt@reqres.in",
        "password": "thisisaverylongpasswordthatexceedstypicallength"
      }
      """
    Then the response status should be 200
    And the response should be:
      """
      {
        "token": "QpwL5tke4Pnpja7X4"
      }
      """

  # Equivalence Partitioning
  Scenario: Login with invalid email format
    Given the API endpoint is "https://reqres.in/api/login"
    When a POST request is made with payload:
      """
      {
        "email": "eve.holt@reqres",
        "password": "cityslicka"
      }
      """
    Then the response status should be 400
    And the response should be:
      """
      {
        "error": "Invalid email format"
      }
      """

  Scenario: Login with invalid password partition
    Given the API endpoint is "https://reqres.in/api/login"
    When a POST request is made with payload:
      """
      {
        "email": "eve.holt@reqres.in",
        "password": "short"
      }
      """
    Then the response status should be 400
    And the response should be:
      """
      {
        "error": "Password is too short"
      }
      """

  # Error Guessing
  Scenario: Login with SQL injection attempt
    Given the API endpoint is "https://reqres.in/api/login"
    When a POST request is made with payload:
      """
      {
        "email": "admin'--",
        "password": "cityslicka"
      }
      """
    Then the response status should be 401
    And the response should be:
      """
      {
        "error": "Invalid email or password"
      }
      """
    And no unauthorized access should be granted

  # Performance Testing
  Scenario: Login under load
    Given the API endpoint is "https://reqres.in/api/login"
    When the system is under heavy load
    And a POST request is made with payload:
      """
      {
        "email": "eve.holt@reqres.in",
        "password": "cityslicka"
      }
      """
    Then the response status should be 200
    And the login should succeed within acceptable response time

  # Security Testing
  Scenario: Login with XSS attack in email
    Given the API endpoint is "https://reqres.in/api/login"
    When a POST request is made with payload:
      """
      {
        "email": "<script>alert('XSS')</script>",
        "password": "cityslicka"
      }
      """
    Then the response status should be 400
    And the response should be:
      """
      {
        "error": "Invalid email format"
      }
      """
    And no script should be executed
Conclusion:
Automating test case generation with AI capabilities helps achieve broader test coverage and addresses the limitations of manual test case creation discussed above. Using AI tools such as OpenAI significantly improves efficiency, increases test coverage, ensures accuracy, and promotes consistency.
The code shared in this blog demonstrates a practical way to leverage OpenAI for automated test case generation. I hope you find this information useful and encourage you to explore the benefits of AI in your testing processes. Feel free to share your thoughts and any additional challenges in the comments. Happy testing!