KPIs for Test Automation are measurable criteria that demonstrate how effectively the automation testing process supports the organization’s objectives. These metrics assess the success of automation efforts and specific activities within the testing domain. KPIs for test automation are crucial for monitoring progress toward quality goals, evaluating testing efficiency over time, and guiding decisions based on data-driven insights. They encompass metrics tailored to ensure thorough testing coverage, defect detection rates, testing cycle times, and other critical aspects of testing effectiveness.
Importance of KPIs
Performance Measurement: Key performance indicators (KPIs) offer measurable metrics to gauge the performance and effectiveness of automated testing efforts. They monitor parameters such as test execution times, test coverage, and defect detection rates, providing insights into the overall efficacy of the testing process. These KPIs will also help your team improve its testing skills.
Identifying Challenges and Problems: Key performance indicators (KPIs) assist in pinpointing bottlenecks or challenges within the test automation framework. By monitoring metrics such as test error rates, script consistency, and resource allocation, KPIs illuminate areas needing focus or enhancement to improve the dependability and scalability of automated testing.
Optimizing Resource Utilization: Key performance indicators (KPIs) facilitate improved allocation of resources by pinpointing areas where automated efforts are highly effective and where manual intervention might be required. This strategic optimization aids in maximizing the utilization of testing resources and minimizing costs associated with testing activities.
Facilitating Ongoing Enhancement: Key performance indicators (KPIs) support continual improvement by establishing benchmarks and objectives for testing teams. They motivate teams to pursue elevated standards in automation scope, precision, and dependability, fostering a culture of perpetual learning and refinement of testing proficiency.
Benefits of KPIs:
Clear Objectives: KPIs give you an unbiased view of how effectively automation testing is meeting its objectives.
Process Enhancement: KPIs highlight areas for improvement in your automation testing process, so you can achieve continuous enhancement and efficiency.
Executive Insight: Sharing KPIs with the team and stakeholders creates transparency and a better understanding of what test automation can achieve.
Process Tracking: Regular monitoring of KPIs tracks the status and progress of automated testing, ensuring alignment with goals and timelines.
KPIs For Test Automation:
1. Test Coverage:
Description: Test coverage refers to the proportion of your application code that is tested. It ensures that your automated testing encompasses all key features and functions. Achieving high test coverage is crucial for reducing the risk of defects reaching production and can also reduce manual efforts.
Examples of Measurements:
Requirements Traceability Matrix (RTM): Maps test cases to requirements to ensure that all requirements are covered by tests.
User Story Coverage: Measures the percentage of user stories that have been tested.
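As a simple illustration (all numbers here are made up), requirement coverage can be computed like this:
// Requirement coverage: percentage of requirements that have at least one test mapped to them
const totalRequirements = 120; // hypothetical values
const coveredRequirements = 102;
const requirementCoverage = (coveredRequirements / totalRequirements) * 100;
console.log(`Requirement coverage: ${requirementCoverage}%`); // 85%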
2. Test Execution Time:
Description: This performance metric gauges the time required to run a test suite. Effective automation testing, indicated by shorter execution times, is critical for the deployment of software in a DevOps setting. Efficient test execution supports seamless continuous integration and continuous delivery (CI/CD) workflows, ensuring prompt software releases and updates.
Examples of Measurements:
Total Test Execution Time: Total time taken to execute all test cases in a test suite.
Average Execution Time per Test Case: Average time taken to execute an individual test case.
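For example, both measurements can be derived from a list of per-test durations (hypothetical values, in seconds):
const testDurations = [12, 8, 30, 5, 20]; // execution time of each test case, in seconds
const totalExecutionTime = testDurations.reduce((sum, t) => sum + t, 0);
const averageExecutionTime = totalExecutionTime / testDurations.length;
console.log(`Total: ${totalExecutionTime}s, Average per test case: ${averageExecutionTime}s`); // Total: 75s, Average per test case: 15s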
3. Test Failure Rate:
Description: This metric in automation measures the percentage of test cases that fail during a specific build or over a set period. It is determined by dividing the number of failed tests by the total number of tests executed and multiplying the result by 100 to express it as a percentage. Tracking this rate helps identify problematic areas in the code or test environment, facilitating timely fixes and enhancing overall software quality. Maintaining a low failure rate is essential for ensuring the stability and reliability of the application throughout the testing lifecycle.
Examples of Measurements:
Failure Rate Per Build: Percentage of test cases that fail in each build.
Historical Failure Trends: Trends in test failure rates over time.
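Using the formula described above, the failure rate for a single build can be computed as follows (example numbers only):
// Failure rate = (failed tests / total tests executed) * 100
const totalTestsExecuted = 200; // hypothetical values for one build
const failedTests = 14;
const failureRate = (failedTests / totalTestsExecuted) * 100;
console.log(`Test failure rate: ${failureRate}%`); // 7%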
4. Active Defects:
Description: Active defects represent the present state of issues, encompassing new, open, or resolved defects, guiding the team in determining appropriate resolutions. The team sets a threshold for monitoring these defects, taking immediate action on those that surpass this limit.
Examples of Measurements:
Defect Count: Number of active defects at any given time.
Defect Aging: Time taken to resolve defects from the time they were identified.
Tools to Measure Active Defects:
Defect Tracking Tools: Jira, Bugzilla, HP ALM
Test Management Tools: TestRail, Zephyr, QTest
5. Build Stability:
Description: Build stability in automation helps measure the reliability and consistency of application builds. You can check how frequently builds pass or fail during automation. Monitoring build stability helps your team identify failures early, and maintaining build stability is necessary for continuous delivery (CI/CD) workflows.
Examples of Measurements:
Pass/Fail Rate: Percentage of builds that pass versus those that fail.
Mean Time to Recovery (MTTR): Average time taken to fix a failed build.
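A rough sketch of both measurements, using made-up build records:
// Each record notes whether the build passed and, if it failed, how long it took to fix
const builds = [
{ passed: true },
{ passed: false, hoursToRecover: 4 },
{ passed: true },
{ passed: true },
{ passed: false, hoursToRecover: 2 },
];
const passRate = (builds.filter(b => b.passed).length / builds.length) * 100;
const failedBuilds = builds.filter(b => !b.passed);
const mttr = failedBuilds.reduce((sum, b) => sum + b.hoursToRecover, 0) / failedBuilds.length;
console.log(`Pass rate: ${passRate}%, MTTR: ${mttr} hours`); // Pass rate: 60%, MTTR: 3 hours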
6. Defect Density:
Description: Defect density measures the number of defects found in a module or piece of code per unit size (e.g., lines of code, function points). It helps in identifying areas of the code that are more prone to defects.
Examples of Measurements:
Defects per KLOC (Thousand Lines of Code): Number of defects found per thousand lines of code.
Defects per Function Point: Number of defects found per function point.
7. Test Case Effectiveness:
Description: Test case effectiveness measures how well the test cases are able to detect defects. It is calculated by dividing the number of defects detected by automated tests by the total number of defects found (including those that escaped to production), expressed as a percentage.
Examples of Measurements:
Defects Detected by Tests: Number of defects detected by automated tests.
Total Defects: Total number of defects detected including those found in production.
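A worked example of the calculation, with illustrative numbers only:
// Effectiveness = defects detected by automated tests / total defects (including production) * 100
const defectsDetectedByTests = 45;
const totalDefects = 50; // includes 5 defects that escaped to production
const testCaseEffectiveness = (defectsDetectedByTests / totalDefects) * 100;
console.log(`Test case effectiveness: ${testCaseEffectiveness}%`); // 90%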
Tools to Measure Test Case Effectiveness:
Test Management Tools: TestRail, Zephyr, QTest
Defect Tracking Tools: Jira, Bugzilla, HP ALM
8. Test Automation ROI (Return on Investment):
Description: This KPI measures the financial benefit gained from automation versus the cost incurred to implement and maintain it. It helps in justifying the investment in test automation.
Examples of Measurements:
Cost Savings from Reduced Manual Testing: Savings from reduced manual testing efforts.
Automation Implementation Costs: Costs incurred in implementing and maintaining automation.
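One common, simplified way to express this ROI (all figures are hypothetical):
// ROI = (savings - investment) / investment * 100
const savingsFromReducedManualTesting = 80000; // e.g. manual effort saved per year
const automationCosts = 50000; // tooling, script development, and maintenance
const roi = ((savingsFromReducedManualTesting - automationCosts) / automationCosts) * 100;
console.log(`Test automation ROI: ${roi}%`); // 60%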
Tools to Measure Test Automation ROI:
Project Management Tools: MS Project, Smartsheet, Asana
Test Management Tools: TestRail, Zephyr, QTest
9. Test Case Reusability:
Description: This metric measures the extent to which test cases can be reused across different projects or modules. Higher reusability indicates efficient and modular test case design.
Examples of Measurements:
Reusable Test Cases: Number of test cases reused in multiple projects.
Total Test Cases: Total number of test cases created.
10. Defect Leakage:
Description: Defect leakage measures the number of defects that escape to production after testing. Lower defect leakage indicates more effective testing.
Examples of Measurements:
Defects Found in Production: Number of defects found in production.
Total Defects Found During Testing: Total number of defects found during testing phases.
11. Automation Test Maintenance Effort:
Description: This KPI measures the effort required to maintain and update automated tests. Lower maintenance effort indicates more robust and adaptable test scripts.
Examples of Measurements:
Time Spent on Test Maintenance: Total time spent on maintaining and updating test scripts.
Number of Test Scripts Updated: Number of test scripts that required updates.
Conclusion
Key Performance Indicators (KPIs) are crucial for ensuring the quality and reliability of applications. Metrics like test coverage, test execution time, test failure rate, active defects, and build stability offer valuable insights into the testing process. By following these KPIs, teams can detect defects early and uphold high software quality standards. Implementing and monitoring these metrics supports effective development cycles and facilitates seamless integration and delivery in CI/CD workflows.
Click here for more blogs on software testing and test automation.
As a Junior SDET with 2 years of hands-on experience, I specialize in both manual and automation testing for web and mobile applications. I have worked with a variety of technologies, including Selenium, Playwright, Cucumber, Appium, SQL, Java, JavaScript, and Python, to deliver comprehensive test solutions. My expertise covers both functional and regression testing, with a focus on ensuring quality across different platforms.
This blog explores how we can use AI capabilities to automate our test case generation tasks for web applications and APIs, focusing on AI-assisted Test Case Generation for Web & API. Before diving into this topic, let’s first understand why automating test case generation is important. But before that, let’s clarify what a test case is: a test case is a set of steps or conditions used by a tester or developer to verify and validate whether a software application meets customer and business requirements. Now that we understand what a test case is, let’s explore why we create them.
What is the need for test case creation?
To ensure quality: Test cases help identify defects and ensure the software meets requirements.
To improve efficiency: Well-structured test cases streamline the testing process.
To facilitate regression testing: You can reuse test cases to verify that new changes haven’t introduced defects.
To improve communication: Test cases serve as a common language between developers and testers.
To measure test coverage: Test cases help assess the extent to which the software has been tested.
When it comes to manual test case creation, several limitations, disadvantages, and challenges impact the efficiency and effectiveness of the testing process, such as:
What are the limitations of manual test case generation?
Time-Consuming: Manual test case writing is a time-consuming process as each test case requires detailed planning and documentation to ensure the coverage of requirements and expected output.
Resource Intensive: Creating manual test cases requires significant resources and skilled personnel. Testers must thoroughly understand the application and its related requirements to write effective test cases. This process demands a substantial allocation of human resources, which could be better utilized in other critical areas.
Human Error: Any task that needs human interaction is prone to error, and manual test case creation is no exception. Mistakes can occur in documenting the steps and expected results, or even in understanding the requirements, which can result in inaccurate test cases and undetected bugs.
Expertise Dependency: Creating high-quality test cases that cover all the requirements and result in high test coverage requires a certain level of expertise and domain knowledge. This becomes a limitation if those individuals are unavailable or if there is a high turnover rate.
These are just some of the challenges; there could be more, so feel free to share others in the comments. Now that we understand why we create test cases, the value they add, and the limitations of manual test case generation, let's look at the benefits of automating the test case generation process.
Benefits of automated test case generation:
Efficiency and Speed: Automated test case generation significantly improves the efficiency and speed of test case writing. As tools and algorithms drive the process instead of manual efforts, it creates test cases faster and quickly updates them whenever there are changes in the application, ensuring that testing keeps pace with development.
Increased Test Coverage: Automated test case generation eliminates or reduces the chances of compromising the test coverage. This process generates a wide range of test cases, including those that manual testing might overlook. By covering various scenarios, such as edge cases, it ensures thorough testing.
Accuracy and Consistency: Automating test case generation ensures accurate and consistent creation of test cases every time. This consistency is crucial for maintaining the integrity of the testing process and applying the same standards across all test cases.
Improved Collaboration: By standardizing the test case generation process, automated test case generation promotes improved collaboration among cross-functional teams. It ensures that all team members, including developers, testers, and business analysts, are on the same page.
Again, these are just a few advantages that I have listed down. You can share more in the comment section and let me know what the limitations of automated test case generation are as well.
Before we move ahead, it is essential to understand what AI is and how it works. This understanding will help us design and build our algorithms and tools to get the desired output.
What is AI?
AI (Artificial Intelligence) simulates human intelligence in machines, programming them to think, learn, and make decisions. AI systems mimic cognitive functions such as learning, reasoning, problem-solving, perception, and language understanding.
How does AI work?
AI applications work based on a combination of algorithms, computational models, and large datasets. We divide this process into several steps as follows.
1. Data Collection and Preparation:
Data Collection: AI systems require vast amounts of data to learn from. You can collect this data from various sources such as sensors, databases, and user interactions.
Data Preparation: We clean, organize, and format the collected data to make it suitable for training AI models. This step often involves removing errors, handling missing values, and normalizing the data.
2. Algorithm Selection:
Machine Learning (ML): Algorithms learn from data and improve over time without explicit programming. Examples include decision trees, support vector machines, and neural networks.
Deep Learning: A subset of machine learning that uses neural networks with many layers (deep neural networks) to analyze complex patterns in data. It is particularly effective for tasks such as image and speech recognition.
3. Model Training:
Training: During training, the AI model learns to make predictions or decisions by analyzing the training data. The model adjusts its parameters to minimize errors and improve accuracy.
Validation: We test the model on a separate validation dataset to evaluate its performance and fine-tune its parameters.
4. Model Deployment:
Once the team trains and validates the AI model, they deploy it to perform its intended tasks in a real-world environment. This could involve making predictions, classifying data, or automating processes.
5. Inference and Decision-Making:
Inference is the process of using the trained AI model to make decisions or predictions based on new, unseen data. The AI system applies the learned patterns and knowledge to provide outputs or take actions.
6. Feedback and Iteration:
AI systems continuously improve through feedback loops. By analyzing the outcomes of their decisions and learning from new data, AI models can refine their performance over time. This iterative process helps in adapting to changing environments and evolving requirements.
Note: We are using Open AI to automate the test case generation process. For this, you need to create an API key for your Open AI account. Check this Open AI API page for more details.
Automated Test Case Generation for Web:
Prerequisite:
Open AI account and API key
Node.js installed on the system
Approach:
For web test case generation using AI, the approach I have followed is to scan the DOM structure of the web page, analyze the tags and attributes present, and then use this information as the input data for generating test cases.
Step 1: Web Scraping
Web scraping gives us the DOM structure of the web page. We store this structure and then pass it to the next step, which analyzes it.
Install the Puppeteer npm package using npm i puppeteer. We use Puppeteer to launch the browser and visit the web page.
Next, we have an async function scrapeWebPage. This function takes the web URL and collects the tags and attributes from the DOM content.
This function returns the page structure, i.e. the scraped web elements. A minimal sketch of what such a function could look like is shown below.
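The post describes scrapeWebPage without showing its code, so here is a minimal, assumed sketch of how it could be implemented, storing each element's attributes as a single string (which is the shape the analysis step below expects):
const puppeteer = require('puppeteer');
// Hypothetical implementation of the scrapeWebPage function described above
async function scrapeWebPage(url) {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto(url);
// Collect the tag name and attributes of every element on the page
const pageStructure = await page.evaluate(() =>
Array.from(document.querySelectorAll('*')).map(el => ({
tagName: el.tagName.toLowerCase(),
attributes: Array.from(el.attributes).map(attr => `${attr.name}="${attr.value}"`).join(' ')
}))
);
await browser.close();
return pageStructure;
}
module.exports = scrapeWebPage;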
Step 2: Analyze elements
In this step, we are analyzing the elements that we got from our first step and based on that we will define what action to take on those elements.
function analyzePageStructure(pageStructure) {
const actions = [];
pageStructure.forEach(element => {
const { tagName, attributes } = element;
if (tagName === 'input' && (attributes.includes('type="text"') || attributes.includes('type="password"'))) {
actions.push(`Fill in the ${tagName} field`);
} else if (tagName === 'button' && attributes.includes('type="submit"')) {
actions.push('Click the submit button');
}
});
console.log("Actions are: ", actions);
return actions;
}
module.exports = analyzePageStructure;
Code Explanation:
Here the function analyzePageStructure takes pageStructure as a parameter, which is nothing but the elements we got from web scraping.
We declare the actions array to store all the actions we want to perform.
In this particular code, I am only handling input elements of type text or password and button elements of type submit.
For input fields of type text or password, I add an action to fill in the field.
For submit buttons, I add an action to click the button.
At last, this function will return the actions array.
Step 3: Generate Test Cases
This is the last step of the approach. At this point we have the elements and the actions derived from them, so we are ready to generate test cases for the given web page.
const axios = require('axios');
async function generateBddTestCases(actions, apiKey) {
const prompt = `
Generate BDD test cases using Gherkin syntax for the following login page actions: ${actions.join(', ')}. Include test cases for:
1. Functional Testing: Verify each function of the software application.
2. Boundary Testing: Test boundaries between partitions.
3. Equivalence Partitioning: Divide input data into valid and invalid partitions.
4. Error Guessing: Anticipate errors based on experience.
5. Performance Testing: Ensure the software performs well under expected workloads.
6. Security Testing: Identify vulnerabilities in the system.
`;
const headers = {
'Content-Type': 'application/json',
'Authorization': `Bearer ${apiKey}`
};
const data = {
model: 'gpt-3.5-turbo',
// gpt-3.5-turbo is a chat model, so the prompt is sent as a chat message
messages: [{ role: 'user', content: prompt }],
max_tokens: 1000,
n: 1,
// no stop sequence: the generated Gherkin spans multiple lines
};
try {
// chat models are served from the chat completions endpoint
const response = await axios.post('https://api.openai.com/v1/chat/completions', data, { headers });
return response.data.choices[0].message.content.trim();
} catch (error) {
console.error('Error generating test cases:', error.response ? error.response.data : error.message);
return null;
}
}
module.exports = generateBddTestCases;
Code Explanation:
The function generateBddTestCases takes two parameters: actions and apiKey (the OpenAI API key).
We have added a prompt along with the actions and API key to generate the test cases.
The API used in the above code is provided by Open AI.
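Assuming each of the three functions lives in its own module (the file names and URL below are placeholders, not part of the original post), a small runner script could tie the steps together like this:
const scrapeWebPage = require('./scrapeWebPage');
const analyzePageStructure = require('./analyzePageStructure');
const generateBddTestCases = require('./generateBddTestCases');
(async () => {
const apiKey = process.env.OPENAI_API_KEY; // your OpenAI API key
const pageStructure = await scrapeWebPage('https://example.com/login'); // placeholder URL
const actions = analyzePageStructure(pageStructure);
const bddTestCases = await generateBddTestCases(actions, apiKey);
console.log(bddTestCases);
})();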
Output:
Feature: Login functionality
# Functional Testing
Scenario: Successful login with valid credentials
Given the user is on the login page
When the user fills in the username field with "user123"
And the user fills in the password field with "password123"
And the user clicks the submit button
Then the user should be redirected to the dashboard

Scenario: Unsuccessful login with invalid credentials
Given the user is on the login page
When the user fills in the username field with "invalidUser"
And the user fills in the password field with "invalidPass"
And the user clicks the submit button
Then an error message "Invalid username or password" should be displayed

# Boundary Testing
Scenario: Login with username at boundary length
Given the user is on the login page
When the user fills in the username field with "user12345678901234567890" # Assume max length is 20
And the user fills in the password field with "password123"
And the user clicks the submit button
Then the user should be redirected to the dashboard

Scenario: Login with password at boundary length
Given the user is on the login page
When the user fills in the username field with "user123"
And the user fills in the password field with "password12345678901234567890" # Assume max length is 20
And the user clicks the submit button
Then the user should be redirected to the dashboard

# Equivalence Partitioning
Scenario: Login with invalid username partition
Given the user is on the login page
When the user fills in the username field with "invalidUser!"
And the user fills in the password field with "password123"
And the user clicks the submit button
Then an error message "Invalid username format" should be displayed

Scenario: Login with invalid password partition
Given the user is on the login page
When the user fills in the username field with "user123"
And the user fills in the password field with "short"
And the user clicks the submit button
Then an error message "Password is too short" should be displayed

# Error Guessing
Scenario: Login with SQL injection attempt
Given the user is on the login page
When the user fills in the username field with "admin'--"
And the user fills in the password field with "password123"
And the user clicks the submit button
Then an error message "Invalid username or password" should be displayed
And no unauthorized access should be granted

# Performance Testing
Scenario: Login under load
Given the system is under heavy load
When the user fills in the username field with "user123"
And the user fills in the password field with "password123"
And the user clicks the submit button
Then the login should succeed within acceptable response time

# Security Testing
Scenario: Login with XSS attack
Given the user is on the login page
When the user fills in the username field with "<script>alert('XSS')</script>"
And the user fills in the password field with "password123"
And the user clicks the submit button
Then an error message "Invalid username format" should be displayed
And no script should be executed
Automated Test Case Generation for API:
Approach:
To effectively achieve AI Test Case Generation for APIs, we start by passing the endpoint and the URI. Subsequently, we attach files containing the payload and the expected response. With these parameters in place, we can then leverage AI, specifically OpenAI, to generate the necessary test cases for the API.
Step 1: Storing the payload and expected response json files in the resources folder
We are going to use a POST API for this, and for POST APIs we need a payload.
The payload is passed through a JSON file stored in the resources folder.
We also need to pass the expected response of this POST API so that we can create effective test cases.
The expected response JSON file helps us create multiple test cases to ensure maximum test coverage (sample files are shown below).
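As an example, for the reqres.in login API used later in this post, the two files could contain:
payload.json:
{ "email": "eve.holt@reqres.in", "password": "cityslicka" }
expected_result.json:
{ "token": "QpwL5tke4Pnpja7X4" }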
Step 2: Generate Test Cases
In this step, we will use the stored payload, and expected response json files along with the API endpoint.
const fs = require('fs');
const axios = require('axios');
// Step 1: Read JSON files
const readJsonFile = (filePath) => {
try {
return JSON.parse(fs.readFileSync(filePath, 'utf8'));
} catch (error) {
console.error(`Error reading JSON file at ${filePath}:`, error);
throw error;
}
};
const payloadPath = 'path_of_payload.json';
const expectedResultPath = 'path_of_expected_result.json';
const payload = readJsonFile(payloadPath);
const expectedResult = readJsonFile(expectedResultPath);
console.log("Payload:", payload);
console.log("Expected Result:", expectedResult);
// Step 2: Generate BDD Test Cases
const apiKey = 'your_api_key';
const apiUrl = 'https://reqres.in';
const endpoint = '/api/login';
const callType = 'POST';
const generateApiTestCases = async (apiUrl, endpoint, callType, payload, expectedResult, retries = 3) => {
const prompt = `
Generate BDD test cases using Gherkin syntax for the following API:
URL: ${apiUrl}${endpoint}
Call Type: ${callType}
Payload: ${JSON.stringify(payload)}
Expected Result: ${JSON.stringify(expectedResult)}
Include test cases for:
1. Functional Testing: Verify each function of the API.
2. Boundary Testing: Test boundaries for input values.
3. Equivalence Partitioning: Divide input data into valid and invalid partitions.
4. Error Guessing: Anticipate errors based on experience.
5. Performance Testing: Ensure the API performs well under expected workloads.
6. Security Testing: Identify vulnerabilities in the API.
`;
try {
// gpt-3.5-turbo is a chat model, so we call the chat completions endpoint and send the prompt as a message
const response = await axios.post('https://api.openai.com/v1/chat/completions', {
model: 'gpt-3.5-turbo',
messages: [{ role: 'user', content: prompt }],
max_tokens: 1000,
n: 1,
}, {
headers: {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json'
}
});
const bddTestCases = response.data.choices[0].message.content.trim();
// Check if bddTestCases is a valid string before writing to file
if (typeof bddTestCases === 'string') {
fs.writeFileSync('apiTestCases.txt', bddTestCases);
console.log("BDD test cases written to apiTestCases.txt");
} else {
throw new Error('Invalid data received for BDD test cases');
}
} catch (error) {
if (error.response && error.response.status === 429 && retries > 0) {
console.log('Rate limit exceeded, retrying...');
await new Promise(resolve => setTimeout(resolve, 2000)); // Wait for 2 seconds before retrying
return generateApiTestCases(apiUrl, endpoint, callType, payload, expectedResult, retries - 1);
} else {
console.error('Error generating test cases:', error.response ? error.response.data : error.message);
throw error;
}
}
};
generateApiTestCases(apiUrl, endpoint, callType, payload, expectedResult)
.catch(error => console.error('Error generating test cases:', error));
Code Explanation:
First, we read the two JSON files from the resources folder, i.e. payload.json and expected_result.json.
Next, set your API key and specify the API URL, endpoint, and callType.
Write a prompt for generating the test cases.
Use the same Open AI API to generate the test cases.
Output:
Feature: Login API functionality
# Functional Testing
Scenario: Successful login with valid credentials
Given the API endpoint is "https://reqres.in/api/login"
When a POST request is made with payload:
"""
{ "email": "eve.holt@reqres.in", "password": "cityslicka" }
"""
Then the response status should be 200
And the response should be:
"""
{ "token": "QpwL5tke4Pnpja7X4" }
"""

Scenario: Unsuccessful login with missing password
Given the API endpoint is "https://reqres.in/api/login"
When a POST request is made with payload:
"""
{ "email": "eve.holt@reqres.in" }
"""
Then the response status should be 400
And the response should be:
"""
{ "error": "Missing password" }
"""

Scenario: Unsuccessful login with missing email
Given the API endpoint is "https://reqres.in/api/login"
When a POST request is made with payload:
"""
{ "password": "cityslicka" }
"""
Then the response status should be 400
And the response should be:
"""
{ "error": "Missing email" }
"""

# Boundary Testing
Scenario: Login with email at boundary length
Given the API endpoint is "https://reqres.in/api/login"
When a POST request is made with payload:
"""
{ "email": "eve.holt@reqres.in.this.is.a.very.long.email.address", "password": "cityslicka" }
"""
Then the response status should be 200
And the response should be:
"""
{ "token": "QpwL5tke4Pnpja7X4" }
"""

Scenario: Login with password at boundary length
Given the API endpoint is "https://reqres.in/api/login"
When a POST request is made with payload:
"""
{ "email": "eve.holt@reqres.in", "password": "thisisaverylongpasswordthatexceedstypicallength" }
"""
Then the response status should be 200
And the response should be:
"""
{ "token": "QpwL5tke4Pnpja7X4" }
"""

# Equivalence Partitioning
Scenario: Login with invalid email format
Given the API endpoint is "https://reqres.in/api/login"
When a POST request is made with payload:
"""
{ "email": "eve.holt@reqres", "password": "cityslicka" }
"""
Then the response status should be 400
And the response should be:
"""
{ "error": "Invalid email format" }
"""

Scenario: Login with invalid password partition
Given the API endpoint is "https://reqres.in/api/login"
When a POST request is made with payload:
"""
{ "email": "eve.holt@reqres.in", "password": "short" }
"""
Then the response status should be 400
And the response should be:
"""
{ "error": "Password is too short" }
"""

# Error Guessing
Scenario: Login with SQL injection attempt
Given the API endpoint is "https://reqres.in/api/login"
When a POST request is made with payload:
"""
{ "email": "admin'--", "password": "cityslicka" }
"""
Then the response status should be 401
And the response should be:
"""
{ "error": "Invalid email or password" }
"""
And no unauthorized access should be granted

# Performance Testing
Scenario: Login under load
Given the API endpoint is "https://reqres.in/api/login"
When the system is under heavy load
And a POST request is made with payload:
"""
{ "email": "eve.holt@reqres.in", "password": "cityslicka" }
"""
Then the response status should be 200
And the login should succeed within acceptable response time

# Security Testing
Scenario: Login with XSS attack in email
Given the API endpoint is "https://reqres.in/api/login"
When a POST request is made with payload:
"""
{ "email": "<script>alert('XSS')</script>", "password": "cityslicka" }
"""
Then the response status should be 400
And the response should be:
"""
{ "error": "Invalid email format" }
"""
And no script should be executed
Conclusion:
Automating test case generation using AI capabilities helps improve test coverage and addresses the limitations of manual test case creation mentioned above. The use of AI tools like OpenAI can significantly improve efficiency, increase test coverage, and promote accuracy and consistency.
The code implementation shared in this blog demonstrates a practical way to leverage OpenAI for automating AI Test Case Generation. I hope you find this information useful and encourage you to explore the benefits of AI in your testing processes. Feel free to share your thoughts and any additional challenges in the comments. Happy testing!
Click here for more blogs on software testing and test automation.
ADB (Android Debug Bridge) is an invaluable tool for developers, testers and Android enthusiasts alike. It allows you to interact with your Android device from a computer, opening new possibilities for application development and usability improvement. Explore ADB commands for android device manipulation in this blog.
Android enthusiasts, developers and testers find ADB to be of great utility. It provides several features essential for managing devices and developing apps.
ADB uses a command-line interface (CLI) to function. This implies that instead of interacting with it through a graphical user interface (GUI), you do it by typing instructions into a terminal or command prompt.
ADB establishes a connection between your Android smartphone and PC. You can use your computer to control your gadget using this connection and use ADB Commands for Android Device interaction.
Below are few of the benefits of using the ADB commands for Android Device Control
The tool makes it possible to perform a variety of interactions with your Android device, which can be useful for everyday use, development, and testing.
By enabling fast access to a device from a development environment, ADB helps developers to build and testers to test applications more efficiently.
By providing access to more complex system capabilities, ADB can assist users in personalizing their devices or troubleshooting problems.
ADB allows you to install apps straight from your development environment onto the device, saving you the trouble of manually moving them there. With the ability to perform diagnostic commands and retrieve error logs, it also aids in troubleshooting applications.
System logs (logcat), which are essential for debugging apps and comprehending system events, can be retrieved by ADB.
When you need to remotely manage a device or perform automated testing, it can mimic human activities like key presses and touch events.
We’ll go over the most crucial ADB commands in this tutorial, along with their advantages and uses for manipulating an Android device.
But before you start using the ADB commands to manipulate the Android device, you’ll first need to set up your device and computer to enable communication.
Explore Following ADB Commands to manipulate Android Device
Device Connectivity
Getting comprehensive details about your connected device is one of the most basic things you can do with ADB, and it is essential for developers and testers who need to understand the context in which their apps run. You can achieve this by executing the following ADB command:
adb devices
This command displays a list of every device that is either networked or USB-connected to your computer. It provides a list of device serial numbers and their status, which can be
1. “unauthorized” (meaning the device is connected but not authorized for ADB communication),
2. “offline” (meaning the device is connected but not ready for commands), or
3. “device” (meaning the device is connected and ready for commands).
Running adb devices is the first step in making sure your device is correctly connected and ready to communicate with ADB.
Device Information
It is critical to understand the characteristics of your Android device. The following is a straightforward shell command that provides detailed information about the currently connected device.
adb shell getprop
This command lists the device's system properties, including the operating system version, device model and manufacturer, hardware details, SDK level, radio and SIM-related properties, and much more. Think of it as an inside view of the device that helps you verify compatibility and diagnose problems. You can also query a single property, for example adb shell getprop ro.build.version.release to print just the Android version.
Battery Power
Battery management is a crucial part of the app development and testing process. You can simulate different battery conditions and examine how your program behaves in response by setting the battery level using adb shell dumpsys.
adb shell dumpsys battery set level 100
This command sets the battery level to 100%, but you can alter it to any percentage you wish. This is quite handy for evaluating your app’s response to low battery conditions, power saving settings, and charging state.
Bluetooth Enable
Many applications rely on Bluetooth capability, particularly those involving IoT devices, wearables, and music streaming. Use the following command to see if Bluetooth is enabled:
adb shell settings get global bluetooth_on
A ‘1’ in the terminal response indicates Bluetooth is enabled, whereas a ‘0’ indicates Bluetooth is turned off.
To turn on Bluetooth, use
adb shell am start -a android.bluetooth.adapter.action.REQUEST_ENABLE
To turn off Bluetooth:
adb shell am start -a android.bluetooth.adapter.action.REQUEST_DISABLE
Using these commands means far less manual fiddling with the device when you test the Bluetooth functionality of your own software.
WiFi Control
Many apps require a Wi-Fi connection. To control the Wi-Fi status on your device, enter the following information:
– Enable Wi-Fi:
adb shell svc wifi enable
– Disable Wi-Fi:
adb shell svc wifi disable
This is particularly handy for evaluating how your app functions without internet connectivity and for simulating different network situations.
Airplane Mode
When in airplane mode, all wireless connections are cut off, including Bluetooth, cellular, and Wi-Fi. You can use this to test the offline functionality of your app. Use these commands to switch between Airplane mode ON and Off:
To enable Airplane mode:
adb shell settings put global airplane_mode_on 1
or
adb shell am broadcast -a android.intent.action.AIRPLANE_MODE --ez state true
To disable Airplane mode:
adb shell settings put global airplane_mode_on 0
or
adb shell am broadcast -a android.intent.action.AIRPLANE_MODE --ez state false
These commands help you test your app’s behavior in various connectivity scenarios.
Installing an App
Deploying your app to a physical device is a crucial step in development. To install an APK file on your device, use:
adb install path-to-apk-file\sample-app.apk
Replace the ‘path-to-apk-file’ with the location of your APK file. This command simplifies the installation process across multiple devices, ensuring consistency.
Putting an App in the Background
Testing how your app behaves when sent to the background and brought back to the foreground is essential for a smooth user experience. To put an app in the background, use:
adb shell input keyevent 3
adb shell input keyevent 187
The first command simulates pressing the home button, sending the app to the background. The second command brings up the recent apps screen, allowing you to simulate app switching.
Relaunching an App
To relaunch an app, use the `monkey` tool, which sends events to the system and is also useful for stress testing and checking that your app can handle unexpected inputs:
adb shell monkey -p app-package -c android.intent.category.LAUNCHER 1
Replace `app-package` with your app’s package name. This command launches the specified app, helping you automate the process of starting your app from a clean state.
Uninstalling an App
To uninstall an app from your device, use:
adb uninstall app-package
Replace `app-package` with the package name of the app you want to uninstall. This command is useful for cleaning up after tests and ensuring no residual data remains on the device.
Making a Call
For apps that interact with the phone’s calling capabilities, being able to initiate calls programmatically is important. To initiate a call from your device, use:
adb shell am start -a android.intent.action.CALL -d tel:1231231234
Replace `1231231234` with the phone number you want to call. This command is useful for testing call-related features and interactions.
Rebooting the Device
Sometimes, a fresh start is needed. To reboot your device, simply use:
adb reboot
This command restarts your device, ensuring it is in a clean state before starting tests or after making significant changes.
Conclusion
These ADB commands are just a sample of what developers and testers can accomplish with ADB. They offer a great deal of help in your daily work, from managing and debugging applications on a device to taking the first steps toward automation. Used as part of your development and testing workflow, they improve efficiency: for example, you can instantly check an app's behavior under different connectivity conditions instead of spending hours setting those conditions up by hand. It is a remarkably powerful tool.
Whether you're a seasoned developer, a tester, or just starting out, ADB is an essential tool, with commands that cover everything from adjusting screen brightness to deeply customizing how you use your Android device.
Gaining control of these ADB commands will benefit you in a variety of testing and development situations, allowing your app to provide a seamless and enjoyable user experience. Thus, give them a go! I hope that learning about these commands will add a little bit of fun to your Android development and testing experience!
Click here to read more blogs like this and learn new tricks and techniques of software testing.
Apurva is a Test Engineer, with 3+ years of experience in Manual and automation testing. Having hands-on experience in testing Web as well as Mobile applications in Selenium and Playwright BDD with Java. And loves to explore and learn new tools and technologies.
When we hear the word Arduino, the first thing that comes to mind is a microcontroller mounted on a single board that helps us create an IOT-friendly environment and build IOT projects using Arduino communication.
This chip helps us create various projects and electronic devices, and moreover, facilitates physical interaction with the world of technology. Arduino is not the only board used in IOT testing; other devices like the Raspberry Pi are also used, but Arduino is often preferred because of its low cost and ease of handling.
However, Arduino is not just a board. It is also a company that designs, manufactures, and supports electronic hardware and software, and helps developers build Arduino communication for IOT applications.
Arduino, moreover, makes the interaction between advanced technologies and the physical world easy and fun.
Additionally, Arduino is open-source, allowing anyone to contribute to and expand its capabilities.
Today in this blog we will cover Arduino and its different types of communication.
How Does Arduino Communication Work for IOT Applications?
The key to creating a project with any microcontroller is to have an IoT-enabled environment.
Having an IoT-enabled environment means being able to communicate with the board: to program it and manage it as needed. That is why communication is so important for Arduino in IOT applications.
Arduino communication is classified into two parts: wired and wireless, each with further subdivisions.
1. Wired Communication
Wired communication is the most reliable and straightforward way to establish a connection, and Arduino goes well beyond a single wired option: it supports three distinct categories of wired communication that you can use to develop IOT projects and applications.
These three categories are explained below:
Serial Communication:
Serial communication is also known as UART communication. UART stands for Universal Asynchronous Receiver/Transmitter.
It is mostly used for communication between an Arduino and a computer, but it can be applied in various other ways depending on the requirements of your IOT application. To establish the connection easily, we typically use the Serial library in Arduino.
Examples for Serial Communication:
As an example, let's set up UART communication between two Arduinos.
Hardware Set-up
Connect the ground pins of both Arduinos.
Connect the RX pin of the first Arduino to the TX pin of the second Arduino, and the TX pin of the first Arduino to the RX pin of the second Arduino.
Code Example :
Master Arduino Code:
void setup() {
Serial.begin(9600); // Initialize serial communication at 9600 bits per second
}
void loop() {
Serial.println("Hello from Master!"); // Send a message
delay(1000); // Wait for a second
}
Slave Arduino Code:
void setup() {
Serial.begin(9600); // Initialize serial communication at 9600 bits per second
}
void loop() {
if (Serial.available() > 0) { // Check if data is available to read
String message = Serial.readString(); // Read the incoming data
Serial.println("Received: " + message); // Print the received message
}
}
And with that we have successful Arduino communication using UART.
SPI Communication
SPI communication stands for Serial Peripheral Interface, which is usually used for short-distance communication and IOT projects.
The difference between serial (UART) communication and SPI communication is that SPI is synchronous: it uses a clock signal to coordinate the data transfer. Additionally, SPI is full-duplex communication.
SPI uses a master-slave relationship over four wires: MOSI (Master Out Slave In), MISO (Master In Slave Out), SCK (Serial Clock), and SS (Slave Select). The master selects a slave by pulling its SS line low, then generates the clock on SCK while data is shifted out to the slave on MOSI and received back from the slave on MISO at the same time. When the transfer is complete, the master releases the SS line to deselect the slave.
Examples for SPI Communication:
For example, let’s try to have SPI communication between two Arduino boards.
Hardware Set-up
First define the Pins and include SPI.H library in the code
Here let’s consider 10, 11, 12, 13 digital pins SS, MOSI, MISO, SCK on Arduino UNO.
Now let’s initialize the pins in the code:
#include <SPI.h>
void setup() {
SPI.begin(); // SPI.begin() takes no arguments; the clock speed is set per transfer via SPISettings
// Set pin modes for SS, MOSI, MISO, and SCK
pinMode(SS, OUTPUT);
pinMode(MOSI, OUTPUT);
pinMode(MISO, INPUT);
pinMode(SCK, OUTPUT);
// Set slave select (SS) pin high to disable the slave device
digitalWrite(SS, HIGH);
}
void loop() {
digitalWrite(SS, LOW); // select the slave
byte received = SPI.transfer(0x42); // send one byte and read the byte clocked back from the slave
digitalWrite(SS, HIGH); // deselect the slave
delay(1000); // wait a second before the next transfer
}
I2C Communication:
I2C stands for Inter-Integrated Circuit. It is a half-duplex communication protocol that works over just two wires, with a small extra overhead of start and stop bits.
I2C sends an acknowledgment bit after every byte transferred and requires pull-up resistors on both lines. It is slower than SPI but supports clock stretching.
Examples for I2C Communication:
As an example, let's set up I2C communication between two Arduinos.
Hardware Set-up
I2C pins on the Arduino Uno are SDA (A4) and SCL (A5).
Connect the Ground pin of Both Arduinos.
Connect the SDA pin of the master to the SDA pin of the slave.
Connect the SCL pin of the master to the SCL pin of the slave.
Master Arduino Code:
#include <Wire.h>
void setup() {
Wire.begin(); // Join I2C bus as master
}
void loop() {
Wire.beginTransmission(8); // Begin transmission to device with address 8
Wire.write("Hello from Master!"); // Send a message
Wire.endTransmission(); // End transmission
delay(1000); // Wait for a second
}
Slave Arduino Code:
#include <Wire.h>
void setup() {
Serial.begin(9600); // Start serial output so the received message can be printed
Wire.begin(8); // Join I2C bus with address 8
Wire.onReceive(receiveEvent); // Register a function to be called when data is received
}
void loop() {
// Do nothing, everything is handled in receiveEvent
}
void receiveEvent(int howMany) {
while (Wire.available()) { // Loop through all received bytes
char c = Wire.read(); // Read each byte
Serial.print(c); // Print the received message
}
Serial.println();
}
PFB Image of Connection
2. Wireless Communication
Sometimes we want to develop projects that need more flexibility, which calls for a wireless connection. Don't worry, Arduino has you covered there too: there are four common types of wireless communication for IOT projects using Arduino.
Bluetooth: As the name is already familiar, let's go straight to the components that provide Bluetooth connectivity. By using the HC-05 or HC-06 module we can add Bluetooth connectivity to an Arduino. However, Bluetooth only covers short distances.
WiFi: When we need more range than Bluetooth, or when a project needs internet access, we can switch to a WiFi connection. You can use the ESP32 and ESP8266 modules for this purpose.
RF (Radio Frequency): Uses radio waves for medium-distance communication, with modules like nRF24L01 and RF433.
LoRa (Long Range): Great for long-distance, low-power communication, ideal for remote monitoring systems.
Conclusion
In conclusion, Arduino offers a versatile platform for various types of communication, allowing users to create innovative and interactive IOT projects. Through serial, SPI, and I2C communication, Arduino facilitates reliable and efficient data exchange, with each method catering to different needs and scenarios.
Whether you are connecting multiple Arduinos or interfacing with other devices, understanding these communication protocols enhances your ability to leverage Arduino’s full potential in the IOT world.
Deepti Rana is a passionate professional keen to learn new technologies and excel at them. She strives to bridge the gap between real-world problems and the technical world. With a background in IT, she has taken steps into robotics too. Her background in IT doesn't stop her from exploring different fields, as can be seen in her projects using GSM, GPRS, ESP32, Arduino, PS2, line sensors, ultrasonic sensors, and fingerprint sensors. She loves to share her knowledge, which she does by volunteering with the Bhumi NGO.
iOS App Automation on macOS – Configuring a macOS system for testing on real iOS devices for mobile app automation is a lengthy and complicated process. The steps to follow for this configuration are tricky, and many times testers struggle with it.
The steps involved are installing the right software, setting environment variables, configuring settings on the device, and connecting the devices properly for iOS app automation. This can cause lots of issues in configuration and slow down the testing process.
In this blog, we’ll make this process easy for you to follow.
We’ll walk you through each step, from installing necessary software like the Java Development Kit (JDK) and Xcode, to setting up your iOS device for iOS app automation on macOS.
We’ll also show you how to install Appium, configure it correctly, and use tools like Appium Inspector to interact with your app.
By following this simple guide, you’ll be ready to test your mobile apps on real devices quickly and efficiently for iOS app automation on macOS.
What is Appium Testing in iOS App Automation on macOS?
Appium is a freely distributed open-source automation tool used for testing mobile applications. It allows testers to automate native, hybrid, and mobile web applications on iOS and Android platforms using the WebDriver protocol.
Appium provides a unified API (Application Programming Interface) that allows you to write tests using your preferred programming language (such as Java, C#, Python, JavaScript, etc.) and test frameworks. It supports a wide range of automation capabilities and handling various types of mobile elements. Appium enables cross-platform testing, where the same tests can be executed on multiple devices, operating systems, and versions, providing flexibility and scalability in mobile app testing or iOS testing. It has NO dependency on Mobile device OS; because APPIUM has a framework or wrapper that translatesSelenium WebDriver commands into UIAutomation (iOS) or UIAutomator (Android) commands depending on the device type, not any OS type.
Prerequisites for iOS Automation Setup on macOS
JDK Installation
Install npm and Node.js
Install Appium Server & Appium Inspector
Setting up environment variables
Xcode installation and setup
Install XCUITest Driver
Install WebDriverAgent
Real Device Settings
Get the device-identifier or udid of real device
Configure the desired capabilities of Appium Inspector
2. Install npm and Node.js: Download the Node.js pre-built installer for mac platform and install it. (https://nodejs.org/en/download) If already installed on the system, please check and confirm using the following commands on the terminal.
To see if Node is installed, type in node -v .
To see if NPM is installed, type in npm -v
3. Install Appium Server & Appium Inspector: To install Appium via the terminal, run the command below:
npm install -g appium
If you face permission problems, run the command using sudo. Then download Appium Inspector for Mac from https://github.com/appium/appium-inspector/releases by downloading the .dmg file.
4. Setting up environment variables: Add the settings below to your .profile file. Open a terminal and type the following command:
nano ~/.profile
Then, paste the below commands: (Change your username!).
5. Xcode installation and setup: Install Xcode (available from the Mac App Store), then open Xcode -> Preferences -> Accounts -> Add Apple ID.
6. Install XCUITest Driver: Now we need to install the XCUITest driver, which allows Appium to interact with the UI elements of iOS apps during automated testing. Use the following command in the terminal to install it:
appium driver install xcuitest
7. Install WebDriverAgent: Open Terminal & Run the following command:
cd /Applications/Appium-Server-GUI.app/Contents/Resources/app/node_modules/appium/node_modules/appium-xcuitestdriver/appium-webdriveragent
Then, to create the directory in the Resources folder, run: mkdir -p Resources/WebDriverAgent.bundle
How to setup WebDriverAgent on Mac for iOS App Automation
WebDriverAgent is a WebDriver server implementation for iOS that can be used to remote control iOS devices. We need to add an account to XCode (you can use your Apple Id or create new).
For that go to XCode —> Preferences —> Accounts
Once you are signed in — your account will appear at the left.
We need to add a signing certificate to this account for iOS automation:
1. Click on Download Manual Profiles.
2. Click on Manage Certificates -> Plus icon -> Apple Development.
Once this is done, you will see a new certificate added to the list, as in the screenshot below.
Open the WebDriverAgent.xcodeproj project in xcode
To find it please use the path: /Applications/Appium-Server-GUI.app/Contents/Resources/app/node_modules/appium/node_modules/appium-xcuitestdriver/appium-webdriveragent
(NOTE: If you do not see this folder — please use shortkeys “Shift”+”Command”+”.” to display hidden files in your Macintosh HD root)
Click on Project name at the left navigation (WebDriverAgent)
For both the WebDriverAgentLib and WebDriverAgentRunner targets, go to Signing & Capabilities, select the Automatically Manage Signing check box, select your development team, and select your device. This should also auto-select the signing certificate. The outcome should look as shown below:
If the error below appears, we will need to change the value of the Bundle Identifier to something else that Xcode can accept.
The value for Bundle Identifier should be changed in the following places:
WebDriverAgentLib target:
From the Signing & Capabilities tab —change value of Bundle Identifier
From Build Settings tab — Packaging section — change value of Product Bundle Identifier
WebDriverAgentRunner target:
From Build Settings tab — Packaging section — change value of Product Bundle Identifier
IntegrationApp target:
From the Signing & Capabilities tab — change value of Bundle Identifier
From Build Settings tab —> Packaging section —> change value of Product Bundle Identifier
After changing the values of the Bundle Identifier, build the WebDriverAgentLib, WebDriverAgentRunner, and IntegrationApp targets from the WebDriverAgent project in Xcode.
8. Real device settings
For real device testing we also need to make some changes on the device side: we need to enable the developer option.
Open Settings and go to Privacy & Security -> Developer Mode:
9. Get the device-identifier or udid of real device
Once the Xcode build has succeeded and Developer Mode is turned on on the device, get the udid (device identifier) of the device connected to the Mac from Xcode, as well as the bundleId.
In Xcode, go to Window -> Devices and Simulators and check the Identifier and other device details, which are required to define the capabilities for connecting to the device either programmatically or through Appium Inspector.
Get the bundleId from Xcode:
A bundle ID, also known as a CFBundleIdentifier, is a unique identifier for an app in Xcode, allowing the system to distinguish it.
Bundle IDs are typically written in reverse-DNS format and can only contain alphanumeric characters (A–Z, a–z, and 0–9), hyphens (-), and periods (.). They are also case-insensitive.
BundleId is required to define capabilities to connect to the device either programmatically or through Appium inspector. For that, from xcode, select the top project item in the project navigator at the left then select TARGETS -> General. Bundle Identifier is found under Identity.
After all the settings, you need to build the Xcode project from the Terminal. For that, run the following command from the location where the WebDriverAgent project is present: xcodebuild -project WebDriverAgent.xcodeproj -scheme WebDriverAgentRunner -destination 'id=udid' test (replace udid with your device identifier). To go to that location, first run the command:
cd /Applications/Appium-Server-GUI.app/Contents/Resources/app/node_modules/appium/node_modules/appium-xcuitestdriver/appium-webdriveragent
Now run the command with device identifier and WebDriverAgent project location.
After the above configurations, Start Appium Server from the terminal. Start Appium Inspector and set the desired capabilities.
10. Configure the desired capabilities and other settings of Appium Inspector:
Appium Inspector is a tool that provides testers with a graphical user interface for inspecting and interacting with elements within mobile applications. When setting up automation with Appium for iOS devices, it's crucial to define the desired capabilities appropriately. These capabilities act as parameters that instruct Appium on how to interact with the device and the application under test.
Open Appium Inspector, enter the Remote Host as 0.0.0.0 and the Remote Port as 4723, and set the following parameters as desired capabilities (a sample capability set is shown after the list):
deviceName: This parameter specifies the name of the testing device. It’s essential to provide an accurate device name to ensure that Appium connects to the correct device.
udid: The Unique Device Identifier (UDID) uniquely identifies the device among all others. Appium uses this identifier to target the specific device for automation. Make sure to input the correct UDID of the device you intend to automate.
platformName: The platform name is set to “iOS,” indicating that the automation targets the iOS platform.
platformVersion: This parameter denotes the version of the ios platform of the device.
automationName: Appium supports multiple automation frameworks, and here, “XCUITest” is specified as the automation name. XCUITest is a widely used automation framework for testing iOS apps.
bundleId: This unique identifier for an app in Xcode allows the system to distinguish it.
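For reference only, the capability set from the list above corresponds to a JSON object like the one below. Every value here is a placeholder you must replace with your own device and app details, and on recent Appium 2.x setups the non-standard capabilities carry the appium: vendor prefix (Appium Inspector can add it for you):
{
"appium:deviceName": "iPhone 12",
"appium:udid": "00008101-XXXXXXXXXXXXXXXX",
"platformName": "iOS",
"appium:platformVersion": "16.4",
"appium:automationName": "XCUITest",
"appium:bundleId": "com.example.myapp"
}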
Once you set all the above capabilities, click the Start Session button to open the application in Appium Inspector with the specified capabilities. Your app is now ready for inspection to prepare for efficient automation testing.
You can also see the following image on your device's screen:
Conclusion
Setting up Appium for testing on real iOS devices can initially seem complicated due to the numerous steps involved and the technical nuances of configuring software and environment variables. However, by following this step-by-step guide, the process becomes easy and manageable.
Having the right tools and configurations in place streamlines your testing workflow, ensuring efficient and effective testing of your mobile apps on real devices. This not only improves the quality of your apps but also enhances your overall development process.
Remember, the key to successful automation testing is meticulous setup and configuration. By taking the time to follow each step carefully, you will save yourself from potential issues down the line and make your testing process smoother.
Trupti is a Sr. SDET at SpurQLabs with overall experience of 9 years, mainly in .NET- Web Application Development and UI Test Automation, Manual testing. Having hands-on experience in testing Web applications in Selenium, Specflow and Playwright BDD with C#.