Writing effective test cases is crucial for ensuring software quality and reliability. A well-structured test case not only helps identify defects but also ensures that the software behaves as expected under various conditions. Below are best practices and guidelines for writing clear, concise, reusable, and comprehensive test cases.
What is a Test Case?
A test case is a specific set of conditions or variables that a tester uses to determine whether a system, software application, or one of its features works as intended.
Example: You are testing the login pop-up of a leading e-commerce platform. You’ll need several test cases to check whether all features of this page work smoothly.
Ask Yourself 3 Questions Before You Write an Effective Test Case:
Choose your approach to test case design: your approach influences your test case design. Are you doing black box testing (you don’t have access to the code source) or white box testing (you have access to the source code)? Are you doing manual testing or automation testing?
Choose your tool/framework for test case authoring: are you using frameworks or tools to test? What level of expertise do these tools/frameworks require?
Choose your execution environment: this ties up closely with your test strategy. Do you want to execute across browsers/OS/environments? How can you incorporate that into your test script?
Once all those 3 questions have been answered, you can start the test case design and eventually test authoring. It’s safe to say that 80% of writing a test case belongs to the planning and designing part, and only 20% is actually scripting. Writing effective test case design is key to achieving good test coverage.
How to Design an Effective Test Case?
When we don’t need to understand the details of how the software works, we focus on checking whether it meets user expectations, exploring the system to come up with test ideas. However, this approach can result in limited testing, as we might overlook features with unusual behaviour.
In that case, here are some techniques for you to design your test cases:
Equivalence Class Testing: In Equivalence Class Testing, you divide input data into groups and treat all values in each group the same way.
Example: For an age input field that accepts ages from 18 to 65, you can divide the input into 3 equivalence classes and test with one value from each group. That means you have 3 test cases. For example, you can choose 10 (below the valid range), 40 (within the valid range), and 80 (above the valid range).
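The equivalence classes for the 18–65 example can be sketched as a small classifier, with one representative value tested per class (the class names below are illustrative, assuming ages outside 18–65 are invalid):

```python
def age_class(age: int) -> str:
    """Map an age to its equivalence class for an 18-65 input field."""
    if age < 18:
        return "invalid-low"   # representative value: 10
    if age <= 65:
        return "valid"         # representative value: 40
    return "invalid-high"      # representative value: 80

# One test case per equivalence class
for value in (10, 40, 80):
    print(value, age_class(value))
```

Any value within a class should be treated the same way by the system, which is why one representative per class is enough.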
Boundary Value Analysis: this is a more granular version of equivalence class testing. Here you test values at the edges of input ranges to find errors at the boundaries.
Example: For an age input that accepts values from 18 to 65, you choose up to 6 values to test (which means you have 6 test cases):
17 (just below the lower boundary)
18 (at the lower boundary)
19 (just above the lower boundary)
64 (just below the upper boundary)
65 (at the upper boundary)
66 (just above the upper boundary)
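The six boundary values can be generated mechanically from the range limits; a minimal sketch:

```python
def boundary_values(low: int, high: int) -> list[int]:
    """Return the six classic boundary-value-analysis test inputs
    for an inclusive [low, high] range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]
```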
Decision Table Testing: you use a table to test different combinations of input conditions and their corresponding actions or results.
Example: Here’s a decision table for a simple loan approval system. Specifically, the system approves or denies loans based on two conditions: the applicant’s credit score and the applicant’s income. From this table, you can write 6 test cases.
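The table itself is not reproduced here, but as an illustration (the approval rules below are assumed for the example, not taken from the source), a decision table with three credit bands and two income levels yields six condition combinations, i.e. six test cases:

```python
from itertools import product

def loan_decision(credit_band: str, income_ok: bool) -> str:
    """Hypothetical approval rules, for illustration only."""
    if credit_band == "high":
        return "Approve"
    if credit_band == "medium" and income_ok:
        return "Approve"
    return "Deny"

# Enumerate every combination of the two conditions: one test case per row
rows = [(credit, income, loan_decision(credit, income))
        for credit, income in product(["low", "medium", "high"], [False, True])]
for row in rows:
    print(row)
print(len(rows), "test cases")
```

Enumerating the combinations this way guarantees no condition pairing is forgotten, which is the point of decision table testing.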
How to Write an Effective Test Case
Standard Test Case Format
We use a test case to check if a feature or function in an app works properly. It has details like conditions, inputs, steps, and expected results. A good test case makes testing easy to understand, repeat, and complete.
Components of a Standard Effective Test Case
Test Case ID: Give a unique ID like “TC001” or “LOGIN_001” to every test case. This helps in tracking.
Test Case Description: Write a short description of what the test case tests. For example, “Test login with correct username and password.”
Preconditions: Mention any setup needed before starting.
Test Data: List the inputs for the test. Like, “Username: test_user, Password: Test@123.”
Test Steps: Write step-by-step actions for the test. Keep it clear and simple.
Expected Results: Describe what should happen if everything works. For example, “User logs in and sees the dashboard.”
Actual Results: Note what happened during the test. This is written after running the test.
Pass/Fail Status: Mark if the test passed or failed by comparing expected and actual results.
Remarks/Comments: Add any extra info like problems faced, defect IDs, or special notes.
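The components above can be captured in a simple record type; a minimal sketch (the field names are illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    test_case_id: str
    description: str
    preconditions: str
    test_data: dict
    test_steps: list
    expected_result: str
    actual_result: str = ""
    status: str = "Not Run"
    remarks: str = ""

    def evaluate(self) -> str:
        """Mark Pass/Fail by comparing expected and actual results."""
        self.status = "Pass" if self.actual_result == self.expected_result else "Fail"
        return self.status

tc = TestCase(
    test_case_id="TC001",
    description="Test login with correct username and password",
    preconditions="User account exists",
    test_data={"username": "test_user", "password": "Test@123"},
    test_steps=["Open login page", "Enter credentials", "Click Login"],
    expected_result="User logs in and sees the dashboard",
)
tc.actual_result = "User logs in and sees the dashboard"  # filled in after the run
print(tc.evaluate())  # Pass
```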
Example of a Standard Test Case Format
How to write effective test cases: A step-by-step guide
A two-line summary of how to write an effective manual test case would be:
1. Identify the feature or functionality you wish to test.
2. Create a list of test cases that define specific actions to validate that functionality.
Now, let’s explore the detailed steps for writing test cases.
Step 1 – Test Case ID:
Assign a unique identifier to the test case to help the tester easily recall and identify it in the future.
Example: TC-01: Verify Login Functionality for a User
Step 2 – Test Case Description:
We will describe the test case, explaining its purpose and expected behaviour. For example:
Test Case Description: Logging into the application
Given: A valid username and password
When: User enters credentials on the login page
Then: User logs in successfully and is directed to the home page.
Step 3 – Pre-Conditions:
We will document any pre-conditions needed for the test, such as specific configuration settings.
Step 4 – Test Steps:
We will document the detailed steps necessary to execute the test case. This includes deciding which actions should be taken to perform the test and also possible data inputs.
Example steps for our login test:
Launch the login application under test.
Enter a valid username and password in the appropriate fields.
Click the ‘Login’ button.
Verify that the user has been successfully logged in.
Log out and check if the user is logged out of the system.
Step 5 – Test Data:
We will define any necessary test data. For example, if the test case needs to test that login fails for incorrect credentials, then test data would be a set of incorrect usernames/passwords.
Step 6 – Expected Result:
Next, we will provide the expected result of the test, which the tester aims to verify. For example, here are ways to define expected results:
A user should be able to enter a valid username and password and click the login button.
The application should authenticate the user’s credentials and grant access to the application.
An invalid user should not be able to log in by entering incorrect credentials and clicking the login button.
The application should reject the user’s credentials and display an appropriate error message.
Step 7 – Post Condition:
The tester is responsible for any cleanup after the test, such as reverting settings and removing files created during the test. Post-conditions to verify include:
Successful login with valid credentials.
Error message for invalid credentials.
Secure storage of user credentials.
Correct redirection after login.
Restricted access to pages without login.
Protection against unauthorized data access.
Step 8 – Actual Result:
We will document the actual result of the test. This is the result the tester observed when running the test. Example: After entering the correct username and password, the user is successfully logged in and is presented with the welcome page.
Step 9 – Status:
The tester will report the status of the test. If the expected and actual results match, the test is said to have passed. The tester marks the test as failed if the results do not match.
Manual and automated test cases share some common elements, but automated test cases should include six key elements: preconditions, test steps, sync and wait, comments, debugging statements, and output statements.
Best Practice for writing effective Test Case
Follow key best practices to write effective test cases.
First, identify the purpose of the test case and determine exactly what needs to be tested.
Write the test case clearly and concisely, providing step-by-step instructions. Also, it is important to consider all possible scenarios and edge cases to ensure thorough testing.
Always review and refine your test cases periodically to maintain their quality over time.
By following these best practices for writing effective test cases, we can increase the chances of spotting defects early in the software development process, ensuring optimal performance for end users.
Benefits of writing high-quality and effective Test cases
Writing effective test cases is important because it ensures high-quality software, and well-written test cases provide multiple benefits.
Let me narrow down to some essential facts here:
Accurate Issue Identification: High-quality test cases ensure thorough testing and accurate identification of bugs.
Better Test Coverage: Test cases evaluate different aspects of the software, identifying bugs before release.
Improved Software Quality: Identifying issues early reduces repair costs and improves software reliability.
Better Collaboration: High-quality test cases help stakeholders work together, improving communication and resources.
Enhanced User Experience: Test cases improve the software’s usability, enhancing the end user’s experience.
Conclusion
Writing effective test cases is a systematic process that requires attention to detail and clarity. By following these best practices—understanding requirements, structuring test cases properly, covering various scenarios, ensuring reusability, documenting results, and regularly reviewing your work—you will create a robust testing framework that enhances software quality. Implementing these guidelines will not only streamline your testing process but also contribute significantly to delivering high-quality software products that meet user expectations.
Detail-oriented QA professional with 1–2 years of hands-on experience in manual and basic automation testing. Skilled in identifying bugs, writing test cases, executing regression and functional tests, and reporting issues using tools like JIRA. Proficient in test case design, UI testing, and basic knowledge of SQL and APIs. Known for strong communication, collaboration with cross-functional teams, and ensuring software quality through thorough validation and documentation.
In today’s digital-first world, how a customer experiences your website, app, or product can make or break your brand. People expect smooth, fast, and problem-free interactions. Customers can quickly lose interest if an app crashes or a product doesn’t perform as expected. They might even switch to a competitor. That’s why companies must invest in product quality, not just for technical reasons, but also to improve their marketing outcomes and build brand loyalty.
Ensuring product quality means making sure everything works as it should. From small features to large-scale operations, quality assurance checks that the user’s journey is smooth and reliable. When customers see that a brand delivers consistent and high-quality experiences, they are more likely to stay loyal and recommend it to others. So, let’s understand how product quality and brand loyalty go hand-in-hand.
1. Better Product = Better Customer Experience
Let’s start with a simple question: Would you continue using a product that keeps crashing or fails to perform reliably? Most people won’t. Studies show that poor user experience is one of the top reasons people stop using digital products.
A smooth, bug-free app or website—or a well-functioning physical product—shows customers that a brand is professional, reliable, and cares about their experience. And how do brands ensure that? Through rigorous quality checks and validation.
Quality assurance helps identify issues like:
Pages are not loading properly
App buttons not working
Forms not submitting
Payment gateways failing
Features behaving differently on different devices
When these issues are resolved before launch, the user has a positive first impression. A good experience often means the user will come back, make a purchase, and even recommend it to others. That’s brand loyalty in action.
2. Quality Products Protect Brand Reputation
A brand’s image is more than just a logo or advertisement—it’s also how well the product performs. If users associate a brand with unreliable apps, slow websites, or confusing interfaces, the reputation takes a hit.
Example: Sonos App Redesign Backlash (2024) In May 2024, Sonos, a premium audio brand, launched a major update to its mobile app, aiming to enhance performance and customization. However, the redesign was met with widespread criticism due to missing features and numerous bugs. Users reported issues like broken local music library management, missing sleep timers, and unresponsive controls. The backlash was significant, leading to a decline in customer trust and a drop in stock prices. Sonos acknowledged the problems and committed to regular updates to fix the issues.
This incident underscores the critical importance of thorough product testing and quality assurance before releasing updates. A well-validated product not only ensures a smooth user experience but also protects the brand’s reputation and customer loyalty.
3. Great Marketing Campaigns Need Flawless Quality
Marketers spend time and money creating exciting campaigns—ads, social media posts, emails, and offers. But what happens when customers click through and the landing page doesn’t load? Or the sign-up form crashes?
All that effort is wasted.
This is where product quality and marketing go hand-in-hand. Before launching any campaign, the end-to-end user experience must be validated:
Can the customer access the link?
Does the mobile version work correctly?
Can they complete a transaction?
Does the thank-you message show up?
High product quality ensures the campaign works as planned and gives customers a seamless experience, increasing conversions and trust.
4. Builds Trust Through Consistency
Trust is built when customers consistently receive what they expect. If a brand’s app works great one day and crashes the next, people will feel uncertain about using it again. But if the experience is reliable every time, they’ll feel comfortable sticking around.
Ongoing quality assurance efforts make this possible. Even after launch, brands must validate updates, new features, and changes to ensure nothing breaks. This shows users that the brand:
Cares about their experience
Takes feedback seriously
Works to continuously improve
Over time, this consistent performance builds strong customer loyalty.
5. Improves Retention Rates
Acquiring new customers is more expensive than keeping existing ones. One major reason customers leave is a poor user experience. If they struggle to log in, make a purchase, or navigate a product, they’ll quit—and maybe never return.
With high product quality, retention rates improve. Features work as expected. Apps load quickly. Users can complete tasks without stress. Happy users = returning users.
Ensuring product quality also means catching issues early, saving money and effort in fixing problems later, and preventing customer churn.
6. Encourages Word-of-Mouth & Reviews
Loyal customers are often your best marketers. When they have a great experience with your product, they tell others. They leave positive reviews, share on social media, and recommend your brand.
On the flip side, one bad product experience can lead to:
1-star reviews on app stores
Negative posts on social platforms
Bad word-of-mouth, which can hurt new customer growth
High product quality acts as a shield. It reduces the chances of negative feedback and increases the likelihood of glowing reviews, which is gold for marketing teams.
Conclusion
Product quality is more than a technical concern—it’s a powerful asset for marketing. When quality is prioritized, it leads to:
Fewer issues
Happier users
Positive reviews
Stronger brand image
Higher customer retention
Better ROI on marketing campaigns
In a crowded market where customers have endless choices, the brands that stand out are the ones that consistently deliver quality. And that quality comes from testing, validating, and refining your product before customers see it.
Marketers who work closely with product and quality teams can ensure every campaign, product, and user journey is optimized for success. That’s how brands earn trust, create loyalty, and grow over the long term.
Mansi is a Digital Marketing Executive with a strong interest in content strategy, SEO, and social media marketing. She is passionate about building brand presence through creative and analytical approaches. In her free time, she enjoys learning new digital trends and exploring innovative marketing tools.
Appium and Python Visual Testing – Have you ever wondered how your app’s toggle switches look on different devices? Visual elements like toggle buttons play a crucial role in the user experience, and verifying their color states is more than just cosmetic; it’s a matter of functionality, accessibility, and trust.
In this blog, we’ll explore how to verify toggle colors on real Android and iOS devices using Appium and Python visual testing: a practical guide for mobile automation testers who want to ensure their apps don’t just work, but look right too.
We’ll dive into:
Why color detection is essential in domains like e-commerce, healthcare, gaming, and automotive
Three powerful techniques for verifying toggle states:
Accessibility Identifiers
Image Comparison
Pixel-Level RGB Color Extraction
Step-by-step examples for both Android and iOS devices.
Importance of Color Detection in Visual Testing
Color detection plays a crucial role in image verification across various domains, where visual accuracy directly impacts user experience, brand integrity, and functionality. Below are some key applications:
E-commerce: Accurate color representation of products is vital for online shopping platforms. Image verification ensures product photos match real-life appearances, reducing return rates and increasing customer trust.
Advertising and Marketing: Consistent brand identity depends on precise color reproduction in ads, banners, and promotional content. Image testing helps maintain visual alignment with brand guidelines across different platforms and formats.
Gaming: Visual elements like character designs, backgrounds, and effects contribute to the immersive quality of a game. Testing ensures that color schemes, contrasts, and visuals meet design standards and enhance gameplay.
Healthcare and Medical Imaging: In medical diagnostics, accurate color detection in images like X-rays, MRIs, and pathology slides is critical. Image verification supports precise interpretation, leading to better patient outcomes.
Automotive: Vehicle interfaces and design previews rely on color-accurate visuals. Testing ensures that dashboards, infotainment systems, and design prototypes reflect real-world colors and improve user experience.
So, let’s dive into verifying toggle colors on Android and iOS apps step by step.
Set up your system for Appium and Python visual testing on real Android and iOS devices
Using Accessibility Identifiers: Utilize accessibility identifiers (e.g., accessibility_id, content-desc, or the checked attribute via XPath) to determine the toggle’s state. These identifiers provide semantic information about the element, which is more reliable than relying solely on visual appearance.
Implementation: Use driver.find_element(AppiumBy.XPATH) or similar methods to locate the toggle based on its accessibility identifier. Check the element’s properties or state attributes (if available) to determine whether it’s “checked” or “unchecked.”
Advantages: Most reliable and maintainable approach, as it relies on semantic information rather than visual appearance.
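For instance, on Android a switch exposes its state through the checked attribute as the string "true" or "false". A small helper keeps that comparison in one place (the Appium calls shown in the comments are illustrative of how the value would be obtained on a live device):

```python
def toggle_is_on(checked_attr: str) -> bool:
    """Interpret the 'checked' attribute string returned by get_attribute()."""
    # With a live Appium session you would obtain the value like:
    #   toggle = driver.find_element(AppiumBy.XPATH, "//android.widget.Switch")
    #   checked_attr = toggle.get_attribute("checked")
    return checked_attr.strip().lower() == "true"

print(toggle_is_on("true"), toggle_is_on("false"))
```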
Using Image Comparison: Capture a screenshot of the toggle in its “On” state and another in its “Off” state. Then, compare the screenshot of the actual toggle with the stored “On” and “Off” images.
Implementation: Use image comparison libraries like scikit-image or opencv-python to calculate similarity metrics (e.g., pixel-wise difference, structural similarity index). Determine the state based on the highest similarity score with the stored “On” or “Off” images. The snippet below checks whether the actual and ideal images are the same.
import imageio.v2 as imageio
import numpy as np

def images_match(ideal_image_path, actual_image_path):
    img1 = imageio.imread(ideal_image_path)
    img2 = imageio.imread(actual_image_path)
    if img1.shape != img2.shape:
        print("Both images should have the same dimensions")
        raise ValueError("Image dimensions do not match")
    # Cast to int before subtracting to avoid uint8 underflow
    diff = np.sum(np.abs(img1.astype(int) - img2.astype(int)))
    avg = diff / (img1.shape[0] * img1.shape[1] * img1.shape[2])
    percentage = (avg / 255) * 100
    return percentage == 0
In the above snippet, we use the imageio and NumPy libraries from Python to compare the images: first we check that both images have the same dimensions, then we calculate the average difference per pixel across both images.
Advantages: More robust to minor color variations compared to pixel color extraction.
Considerations: Maintaining a library of reference images for different toggle states is required.
Using Pixel Color Extraction:
The RGB (Red, Green, Blue) color model is one of the most widely used systems in digital image processing and display technologies. It is an additive color model, meaning it combines the intensity of the three primary colors (red, green, and blue) to create a broad spectrum of colors. Each color in this model is represented as a combination of red, green, and blue values, ranging from 0 to 255 for 8-bit images.
For example:
(255, 0, 0) represents pure red.
(0, 255, 0) represents pure green.
(0, 0, 255) represents pure blue.
(255, 255, 255) represents white.
(0, 0, 0) represents black.
How RGB Detection Works:
RGB detection involves extracting the Red (R), Green (G), and Blue (B) intensity values of individual pixels from digital media such as images or videos. Each pixel acts as a building block of the media, storing its color as a combination of these three values.
For pixel extraction in Python, install the Pillow package (pip install Pillow) and import it with: from PIL import Image
Load the image: image = Image.open('example.jpg')
Access the pixel at any location: rgb = image.getpixel((50, 50))
This returns the RGB value for that particular point. You can look up the color name for a given RGB value at https://www.rapidtables.com/web/color/RGB_Color.html; for example, (255, 215, 0) corresponds to gold, and (0, 0, 0) to black.
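Putting the Pillow calls together, here against an in-memory image so the snippet is self-contained (the color-name lookup table is a small illustrative subset, not a complete mapping):

```python
from PIL import Image

# Small illustrative lookup table; a real suite might use a fuller mapping
NAMED_COLORS = {
    (255, 215, 0): "gold",
    (0, 0, 0): "black",
    (255, 255, 255): "white",
}

def pixel_color_name(image, xy):
    rgb = image.getpixel(xy)[:3]  # drop the alpha channel if present
    return NAMED_COLORS.get(rgb, f"rgb{rgb}")

# Self-contained demo: a solid gold image stands in for a device screenshot
img = Image.new("RGB", (100, 100), (255, 215, 0))
print(pixel_color_name(img, (50, 50)))  # gold
```

In a real run, `img` would instead come from an Appium element screenshot cropped to the toggle region.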
For demo purposes, let’s open Android Settings → Connections → Wi-Fi and check whether the Wi-Fi toggle is turned ON or OFF.
Use the code below as a reference for color detection on a real Android device (Python visual testing).
Pre-setup for Android Device
Start Appium server using the below command, or you can use Appium GUI as well
appium -a 127.0.0.1 -p 4723
Check connected adb devices using the command below; you should see a connected device with its UDID
adb devices
Use the code below as a reference for color detection on a real iOS device (Appium visual testing).
Pre-setup for iOS Device
Start Appium server using the below command, or you can use Appium GUI as well
appium -a 127.0.0.1 -p 4723
pip install Pillow
We have to use the xcodebuild command to build the WebDriverAgent project and start testing on a real iOS device. Find the device identifier in Xcode → Window → Devices and Simulators, then run:
xcodebuild -project (path_for_WebDriverAgent.xcodeproj) -scheme WebDriverAgentRunner -destination 'platform=iOS,id=(id_of_connected_ios)' test
For demo purposes, let’s open the iPhone’s Settings → Wi-Fi settings.
Consider the code below for color detection on iOS automation
Let’s break down the code. It’s similar to the Android color-verification code, with two key differences: the capabilities are different for iOS, and the locator-finding strategy is different.
Conclusion
There are three primary methods for verifying toggle colors using Appium and Python visual testing:
1. Accessibility Identifiers:
This is the most straightforward and reliable approach. Mobile apps often include labels or attributes (like accessibility_id or content-desc) that indicate the current state of a toggle. This method requires no image processing, as it leverages metadata provided by developers—making it both efficient and robust.
2. Image Comparison:
This technique involves capturing screenshots of the toggle in both “on” and “off” states and comparing them to reference images. Tools like OpenCV or scikit-image help analyze visual similarity, accounting for minor differences due to lighting or device variations. It’s especially useful when you need to validate the UI’s visual accuracy.
3. Pixel Color Extraction:
By extracting specific RGB values from toggle regions using libraries like Pillow, this method offers precision at the pixel level. It’s ideal for verifying exact color codes, and the extracted values can be cross-referenced with tools like RapidTables for further validation. While Android and iOS may differ slightly in setup and element location, the core strategies remain consistent. Depending on your testing needs, you can use these methods individually or in combination to ensure your app displays the correct colors—ultimately contributing to a seamless and visually consistent user experience.
Junior Software Development Engineer in Test (JR. SDET) with 1 year of hands-on experience in automating and testing mobile applications using Python and Appium. Proficient in Selenium and Java, with a solid understanding of real device testing on both iOS and Android platforms. Adept at ensuring the quality and performance of applications through thorough manual and automated testing. Skilled in SQL and API testing using Postman.
Looking to simplify your UI test automation without compromising on speed or reliability?
Welcome to CodeceptJS + Puppeteer — a powerful combination that makes browser automation intuitive, maintainable, and lightning-fast. Whether you’re just stepping into test automation or shifting from clunky Selenium scripts, this CodeceptJS Puppeteer Guide will walk you through the essentials to get started with modern JavaScript-based web UI testing.
Why CodeceptJS + Puppeteer?
Beginner-Friendly: Clean, high-level syntax that’s easy to read—even for non-coders.
Stable Tests: Auto-waiting eliminates the need for flaky manual waits.
Built-in Helpers & Smart Locators: Interact with web elements effortlessly.
CI/CD Friendly: Easily integrates into DevOps pipelines.
Rich Debugging Tools: Screenshots, videos, and console logs at your fingertips.
In this blog, you’ll learn:
How to install and configure CodeceptJS with Puppeteer
Writing your first test using Page Object Model (POM) and Behavior-Driven Development (BDD)
Generating Allure Reports for beautiful test results
Tips to run, debug, and manage tests like a pro
Whether you’re testing login pages or building a complete automation framework, this guide has you covered.
Ready to build your first CodeceptJS-Puppeteer test? Let’s dive in!
1. Initial Setup
Prerequisites
Node.js installed on your system. (Follow below link to Download and Install Node.)
https://nodejs.org/
Basic knowledge of JavaScript.
Installing CodeceptJS
Run the following command to install CodeceptJS and its configuration tool:
npm install codeceptjs @codeceptjs/configure --save-dev
2. Initialize CodeceptJS
Create a New Project
Initialize a new npm project using the following command:
npm init -y
Install Puppeteer
Install Puppeteer as the default helper:
npm install codeceptjs puppeteer --save-dev
Setup CodeceptJS
Run the following command to set up CodeceptJS:
npx codeceptjs init
As shown below, follow the steps as they are; they will help you build the framework. You can choose Puppeteer, Playwright, or WebDriver—whichever you prefer. Here, I have used Puppeteer to create the framework.
This will guide you through the setup process, including selecting a test directory and a helper (e.g., Puppeteer).
3. Writing Your First Test
Example Test Case
The following example demonstrates a simple test to search “codeceptjs” on Google:
Dependencies
Ensure the following dependencies are included in your package.json:
A simple test case to perform a Google search is shown below:
Feature('google_search');
Scenario('TC-1 Google Search', ({ I }) => {
I.amOnPage('/');
I.seeElement("//textarea[@name='q']");
I.fillField("//textarea[@name='q']", "codeceptjs");
I.click("btnK");
I.wait(5);
});
4. Using Page Object Model (POM) and BDD
Now that we have seen how to create a simple test, let’s explore how to create a test in BDD using the POM approach.
CodeceptJS supports BDD through Gherkin syntax and POM for test modularity. If you want to create a feature file configuration, use this command. “npx codeceptjs gherkin:init”
The setup will be created; however, some configurations still need to be modified, as explained below. You can refer to the details provided.
After this, the following changes will be displayed in the CodeceptJS configuration file. Ensure that these changes are also reflected in your configuration file.
A Feature file in BDD is a plain-text file written in Gherkin syntax that describes application behavior through scenarios using Given-When-Then steps. Example: Orange HRM Login Test
Feature: Orange HRM

  Scenario: Verify user is able to login with valid credentials
    Given User is on login page
    When User enters username "Admin" and password "admin123"
    When User clicks on login button
    Then User verifies "Dashboard" is displayed on page
Step Definitions
A Step Definitions file in BDD maps Gherkin step definitions to executable code, linking test scenarios to automation logic. Define test steps in step_definitions/steps.js:
const { I } = inject();
const { LoginPage } = require('../Pages/LoginPage');
const login = new LoginPage();
Given('User is on login page', async () => {
await login.homepage();
});
When('User enters username {string} and password {string}', async (username, password) => {
await login.enterUsername(username);
await login.enterPassword(password);
});
When('User clicks on login button', async () => {
await login.clickLoginButton();
});
Then('User verifies {string} is displayed on page', async (text) => {
await login.verifyDashboard(text);
});
Page Object Model
A Page File represents a web page or UI component, encapsulating locators and actions to support maintainable test automation. Create a LoginPage class to encapsulate page interactions:
5. Generating Allure Reports
Run tests and generate reports:
npx codeceptjs run
npx allure generate --clean
npx allure open
6. Running Tests
To execute tests, use the following command: npx codeceptjs run
To log the steps of a feature file on the console, use the command below:
npx codeceptjs run --steps
The --verbose flag provides comprehensive information about the test execution process, including step-by-step execution logs, detailed error information, configuration details, debugging assistance, and more.
npx codeceptjs run --verbose
To target specific tests:
npx codeceptjs run <test_file>
npx codeceptjs run --grep @yourTag
Conclusion: From Clicks to Confidence with CodeceptJS & Puppeteer
In this guide, we walked through the essentials of setting up and using CodeceptJS with Puppeteer—from writing simple tests to building a modular framework using Page Object Model (POM) and Behavior-Driven Development (BDD). We also explored how to integrate Allure Reports for insightful test reporting and saw how to run and debug tests effectively.
By leveraging CodeceptJS’s high-level syntax and Puppeteer’s powerful headless automation capabilities, you can build faster, more reliable, and easier-to-maintain test suites that scale well in modern development workflows.
Whether you’re just starting your test automation journey or refining an existing framework, this stack is a fantastic choice for UI automation in JavaScript—especially when aiming for stability, readability, and speed.
Harish is an SDET with expertise in API, web, and mobile testing. He has worked on multiple web and mobile automation tools, including Cypress with JavaScript, Appium, and Selenium with Python and Java. He is very keen to learn new technologies and tools for test automation. His latest stint was at TestProject.io. He loves to read books in his spare time.
Introduction to Cypress and TypeScript Automation:
Nowadays, the TypeScript programming language is becoming popular in the field of testing and test automation. Testers should know how to automate web applications using this trending language. TypeScript integrates well with modern automation frameworks such as Playwright and Cypress, enhancing testing efficiency. In this blog, we are going to see how we can combine TypeScript and Cypress with Cucumber for a BDD approach.
TypeScript’s strong typing and enhanced code quality address the issues of brittle tests and improve overall code maintainability. Cypress, with its real-time feedback, developer-friendly API, and robust testing capabilities, helps in creating reliable and efficient test suites for web applications.
Additionally, adopting a BDD approach with tools like Cucumber enhances collaboration between development, testing, and business teams by providing a common language for writing tests in a natural language format, making test scenarios more accessible and understandable by non-technical stakeholders.
In this blog, we will build a test automation framework from scratch, so even if you have never used Cypress, TypeScript, or Cucumber, that’s not a problem. We will learn everything from scratch, and by the end, I am sure you will be able to build your own test automation framework.
Before we start building the framework and discussing the technology stack, let’s first complete the environment setup for this project. Follow the steps below sequentially, and let me know in the comments if you face any issues. I am also sharing the official website links in case you want to read more about the tools we are using.
The first thing we need to make this framework work is Node.js, so ensure you have Node.js installed on your system. The next step is to install all the required packages. How can you install them? Don’t worry; use the commands below.
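The exact package list depends on your framework; based on the tools used later in this blog, a typical setup looks like the commands below (package names are taken from the framework’s imports — adjust versions as needed):

```shell
# Initialize the project and install the framework dependencies
npm init -y
npm install --save-dev cypress typescript
npm install --save-dev @badeball/cypress-cucumber-preprocessor
npm install --save-dev @shelex/cypress-allure-plugin
npm install --save-dev @esbuild-plugins/node-modules-polyfill
npm install --save-dev multiple-cucumber-html-reporter
# Generate a default tsconfig.json to customize later
npx tsc --init
```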
So far, we have covered and installed all we need to make this automation work for us. Now, let’s move to the next step and understand the framework structure.
Framework Structure:
Let’s now understand some of the main players of this framework. As we are using the BDD approach assisted by the cucumber tool, the two most important players are the feature file and the step definition file. To make this more robust, flexible and reliable, we will include the page object model (POM). Let’s look at each file and its importance in the framework.
Feature File:
Feature files are an essential part of Behavior-Driven Development (BDD) frameworks like Cucumber. They describe the application’s expected behavior using a simple, human-readable format. These files serve as a bridge between business requirements and automation scripts, ensuring clear communication among developers, testers, and stakeholders.
Key Components of Feature Files
Feature Description:
A high-level summary of the functionality being tested.
Helps in understanding the purpose of the test.
Scenarios:
Each scenario represents a specific test case.
Follows a structured Given-When-Then format for clarity.
Scenario Outlines (Parameterized Tests):
Used when multiple test cases follow the same pattern but with different inputs.
Allows for better test coverage with minimal duplication.
Tags for Organization:
Tags like @smoke, @regression, or @critical help in organizing and running selective tests.
Makes it easier to filter and execute relevant scenarios.
Web App Automation Feature File:
Feature: Perform basic calculator operations
Background:
Given I visit calculator web page
@smoke
Scenario Outline: Verify the calculator operations for scientific calculator
When I click on number "<num1>"
And I click on operator "<Op>"
And I click on number "<num2>"
Then I see the result as "<res>"
Examples:
| num1 | Op | num2 | res |
| 6 | / | 2 | 3 |
| 3 | * | 2 | 6 |
@smoke1
Scenario: Verify the basic calculator operations with parameter
When I click on number "7"
And I click on operator "+"
And I click on number "5"
Then I see the result as "12"
API Automation Feature File:
Feature: API Feature
@api
Scenario: Verify the GET call for dummy website
When I send a 'GET' request to 'api/users?page=2' endpoint
Then I Verify that a 'GET' request to 'api/users?page=2' endpoint returns status
@api
Scenario: Verify the POST call for dummy website
When I send 'POST' request to endpoint 'api/users/2'
| name | job |
| morpheus | leader |
Then I verify the POST call
| req | endpoint | name | job | status |
| POST | api/users | morpheus | zion resident | 200 |
@api
Scenario: I send POST Request call and Verify the POST call Using Step Reusablity
When I send 'POST' request to endpoint 'api/users/2'
| req | endpoint | name | job |
| POST | api/users | morpheus | zion resident |
Then I verify the POST call
| req | endpoint | name | job | status |
| POST | api/users | morpheus | zion resident | 200 |
Step Definition File:
Step definition files act as the implementation layer for feature files. They contain the actual automation logic that executes each step in a scenario. These files ensure that feature files remain human-readable while the automation logic is managed separately.
Key Components of Step Definition Files
Mapping Steps to Code:
Each Given, When, and Then step in a feature file is linked to a function in the step definition file.
Ensures test steps execute the corresponding automation actions.
Reusability and Modularity:
Common steps can be reused across multiple scenarios.
Avoid duplication and improve maintainability.
Data Handling:
Step definitions can take parameters from feature files to execute dynamic tests.
Enhances flexibility and test coverage.
Error Handling & Assertions:
Verifies expected outcomes and reports failures accurately.
Helps in debugging test failures efficiently.
Web App Step Definition File:
import { When, Then, Given } from '@badeball/cypress-cucumber-preprocessor'
import { CalPage } from '../../../page-objects/CalPage'
const calPage = new CalPage()
Given('I visit calculator web page', () => {
calPage.visitCalPage()
cy.wait(6000)
})
Then('I see the result as {string}', (result) => {
calPage.getCalculationResult(result)
calPage.scrollToHeader()
})
When('I click on number {string}', (num1) => {
calPage.clickOnNumber(num1)
calPage.scrollToHeader()
})
When('I click on operator {string}', (Op) => {
calPage.clickOnOperator(Op)
calPage.scrollToHeader()
})
API Step Definition File:
import { Given, When, Then } from '@badeball/cypress-cucumber-preprocessor'
import { APIUtility } from '../../../../Utility/APIUtility'
const apiPage = new APIUtility()
When('I send a {string} request to {string} endpoint', (req, endpoint) => {
apiPage.getQuery(req, endpoint)
})
Then(
'I Verify that a {string} request to {string} endpoint returns status',
(req, endpoint) => {
apiPage.iVerifyGETRequest(req, endpoint)
},
)
Then('I verify that {string} request to {string} endpoint', (datatable) => {
apiPage.postQueryCreate(datatable)
})
Then('I verify the POST call', (datatable) => {
apiPage.postQueryCreate(datatable)
})
When('I send {string} request to endpoint {string}', (req, endpoint) => {
apiPage.delQueryReq(req, endpoint)
})
Then(
'I verify {string} request to endpoint {string} returns status',
(req, endpoint) => {
apiPage.delQueryReq(req, endpoint)
},
)
Page File:
Page files in test automation frameworks serve as a structured way to interact with web pages while keeping test scripts clean and maintainable. These files typically encapsulate locators and actions related to a specific page or component within the application under test.
Key Components of Page Files in Test Automation Frameworks
Navigation Methods:
Functions to visit the required page using a URL or base configuration.
Ensures tests always start from the correct application state.
Element Interaction Methods:
Functions to interact with buttons, input fields, dropdowns, and other UI elements.
Encapsulates actions like clicking, typing, or selecting options to maintain reusability.
Assertions and Validations:
Methods to verify expected outcomes, such as checking if an element is visible or a value is displayed correctly.
Helps in ensuring the application behaves as expected.
Reusability and Modularity:
Each function is designed to be reusable across multiple test cases.
Keeps automation scripts clean by avoiding redundant code.
Handling Dynamic Elements:
Includes waits, scrolling, or retries to ensure elements are available before interaction.
Reduces flakiness in tests.
Test Data Handling:
Functions to pass dynamic test data and execute actions accordingly.
API utility files are essential in automated testing as they provide reusable methods to interact with APIs. These files help testers perform API requests, validate responses, and maintain structured automation scripts.
By centralizing API interactions in a dedicated utility, we can improve test maintainability, reduce duplication, and ensure consistent validation of API responses.
Key Components of an API Utility File:
Making API Requests Efficiently:
Functions for sending GET, POST, PUT, and DELETE requests.
Uses dynamic parameters to handle different endpoints and request types.
Response Validation & Assertions:
Ensures correct HTTP status codes are returned.
Validates response bodies for expected data formats.
Logging & Debugging:
Captures API request and response details for debugging.
Provides meaningful logs to assist in troubleshooting failures.
Handling Dynamic Data:
Supports dynamic payloads using external test data sources.
Allows testing multiple scenarios without modifying the core test script.
Error Handling & Retry Mechanism:
Implements error handling to manage unexpected API failures.
Can include automatic retries for transient errors (e.g., 429 rate limiting).
Security & Authentication Handling:
Supports authentication headers (e.g., tokens, API keys).
Ensures tests adhere to security best practices like encrypting sensitive data.
Support Multiple Environments:
Currently, the base URL is fetched from Cypress.env('api_URL'), but we can extend it to support multiple environments (e.g., dev, staging, prod).
Enhance Error Handling & Retry Logic:
Implement a retry mechanism for APIs that occasionally fail due to network issues.
Improve error messages by logging API response details when failures occur.
Support Query Parameters & Headers:
Modify functions to accept optional query parameters and custom headers for better flexibility.
Improve Response Validation:
Extend validation beyond just checking the status code (e.g., validating response schema using JSON schema validation).
Use Utility Functions for Reusability:
Extract common assertions (e.g., checking response status, verifying keys in the response) into separate utility functions to avoid redundancy.
Implement Rate Limiting Controls:
Introduce a delay between API requests in case of rate-limited endpoints to prevent hitting request limits.
Better Logging & Reporting:
Enhance logging to provide detailed information about API requests and responses.
Integrate with test reporting tools to generate detailed API test reports.
Configuration Files:
Cypress.config.ts:
The Cypress configuration file (cypress.config.ts) is essential for defining the setup, plugins, and global settings for test execution. It helps in configuring test execution parameters, setting up plugins, and customizing Cypress behavior to suit the project’s needs.
This file ensures that Cypress is properly integrated with necessary preprocessor plugins (like Cucumber and Allure) while defining critical environment variables and paths.
Key Components of the Configuration File:
Importing Required Modules & Plugins:
Cypress needs additional plugins for Cucumber support and reporting.
@badeball/cypress-cucumber-preprocessor is used for running .feature files with Gherkin syntax.
@shelex/cypress-allure-plugin/writer helps in generating test execution reports using Allure.
@esbuild-plugins/node-modules-polyfill ensures compatibility with Node.js modules.
Setting Up Event Listeners & Preprocessors:
The setupNodeEvents function is responsible for handling plugins and configuring Cypress behavior dynamically.
The Cucumber preprocessor generates JSON reports and processes Gherkin-based test cases.
Browserify is used as the file preprocessor, allowing TypeScript support in tests.
Environment Variables & Custom Configurations:
api_URL: Stores the base API URL used for API testing.
screenshotsFolder: Defines the folder where Cypress will save screenshots in case of failures.
Defining E2E Testing Behavior:
setupNodeEvents: Attaches the preprocessor and other event listeners.
excludeSpecPattern: Ensures Cypress does not pick unwanted file types (*.js, *.md, *.ts).
specPattern: Specifies that Cypress should look for .feature files in cypress/e2e/.
baseUrl: Defines the website URL where tests will be executed (https://www.calculator.net/).
import { defineConfig } from 'cypress'
import { addCucumberPreprocessorPlugin } from '@badeball/cypress-cucumber-preprocessor'
import browserify from '@badeball/cypress-cucumber-preprocessor/browserify'
import allureWriter from '@shelex/cypress-allure-plugin/writer'
const {
NodeModulesPolyfillPlugin,
} = require('@esbuild-plugins/node-modules-polyfill')
async function setupNodeEvents(
on: Cypress.PluginEvents,
config: Cypress.PluginConfigOptions,
): Promise<Cypress.PluginConfigOptions> {
// This is required for the preprocessor to be able to generate JSON reports after each run, and more,
await addCucumberPreprocessorPlugin(on, config)
allureWriter(on, config)
on(
'file:preprocessor',
browserify(config, {
typescript: require.resolve('typescript'),
}),
)
// Make sure to return the config object as it might have been modified by the plugin.
return config
}
export default defineConfig({
env: {
api_URL: 'https://reqres.in/',
screenshotsFolder: 'cypress/screenshots',
},
e2e: {
// We've imported your old cypress plugins here.
// You may want to clean this up later by importing these.
setupNodeEvents,
excludeSpecPattern: ['*.js', '*.md', '*.ts'],
specPattern: 'cypress/e2e/**/*.feature',
baseUrl: 'https://www.calculator.net/',
},
})
Tsconfig.json:
The tsconfig.json file is a TypeScript configuration file that defines how TypeScript code is compiled and interpreted in a Cypress test automation framework. It ensures that Cypress and Node.js types are correctly recognized, allowing TypeScript-based test scripts to function smoothly.
Key Components of tsconfig.json:
compilerOptions (Compiler Settings)
"esModuleInterop": true
Allows interoperability between ES6 modules and CommonJS modules, enabling seamless imports.
"target": "es5"
Specifies that the compiled JavaScript should be compatible with ECMAScript 5 (older browsers and environments).
"lib": ["es5", "dom"]
Includes support for ES5 and browser-specific APIs (DOM), ensuring compatibility with Cypress test scripts.
"types": ["cypress", "node"]
Adds TypeScript definitions for Cypress and Node.js, preventing type errors in test scripts.
include (Files Included for Compilation)
**/*.ts
Ensures that all TypeScript files in the project directory are included in compilation.
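Putting the options above together, the tsconfig.json for this framework looks like:

```json
{
  "compilerOptions": {
    "esModuleInterop": true,
    "target": "es5",
    "lib": ["es5", "dom"],
    "types": ["cypress", "node"]
  },
  "include": ["**/*.ts"]
}
```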
Package.json:
The package.json file is a key component of a Cypress-based test automation framework that defines project metadata, dependencies, scripts, and configurations. It helps manage all the required libraries and tools needed for running, reporting, and processing test cases efficiently.
Key Components of package.json:
Project Metadata
"name": "spurtype" → Defines the project name.
"version": "1.0.0" → Specifies the current project version.
"description": "Cypress With TypeScript" → Describes the purpose of the project.
Scripts (Commands for Running Tests & Reports)
"scr": "node cucumber-html-report.js"
Runs a script to generate a Cucumber HTML report.
"coms": "cucumber-json-formatter --help"
Displays help information for the Cucumber JSON formatter.
"api": "./node_modules/.bin/cypress-tags run -e TAGS=@api"
Executes Cypress tests tagged as API tests (@api).
"smoke": "./node_modules/.bin/cypress-tags run -e TAGS=@smoke"
Executes smoke tests (@smoke) using Cypress.
"smoke4": "cypress run --env allure=true,TAGS=@smoke1"
Runs a specific set of smoke tests (@smoke1) while enabling Allure reporting.
Cucumber-html-report.js:
This script generates a Cucumber HTML report from JSON test results using the multiple-cucumber-html-reporter package. It extracts test execution details, including browser, platform, and environment metadata, and saves the output as an HTML file for easy visualization of test results in Cypress and TypeScript automation.
The script requires the package to process JSON reports and generate an interactive HTML report.
Configuration Options
jsonDir → Specifies the location of Cucumber-generated JSON reports.
reportPath → Sets the directory where the HTML report will be saved.
reportName → Defines a custom name for the report file.
pageTitle → Sets the title of the generated HTML report page.
displayDuration → Enables duration display for each test case execution.
openReportInBrowser → Automatically opens the HTML report after generation.
Metadata Section
Browser: Specifies the test execution browser and version.
Device: Identifies the test execution machine.
Platform: Defines the operating system used for testing.
Custom Data Section
Provides additional test details such as Project Name, Test Environment, Execution Time, and Tester Information.
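A sketch of the cucumber-html-report.js script described above is shown below. The directory paths and metadata values are assumptions — point them at your own report folders and environment. The guarded require lets the file load even where multiple-cucumber-html-reporter is not installed.

```javascript
// cucumber-html-report.js — a minimal sketch; paths and metadata values are assumptions
let reporter = null;
try {
  // npm install --save-dev multiple-cucumber-html-reporter
  reporter = require('multiple-cucumber-html-reporter');
} catch (e) {
  /* package not installed */
}

const options = {
  jsonDir: 'cypress/cucumber-json',       // where the preprocessor writes JSON results
  reportPath: 'cypress/reports/html',     // output directory for the HTML report
  reportName: 'Cypress + TypeScript Test Report',
  pageTitle: 'Calculator & API Test Results',
  displayDuration: true,                  // show per-scenario execution time
  openReportInBrowser: true,              // open the report once generated
  metadata: {
    browser: { name: 'chrome', version: '120' },
    device: 'Local test machine',
    platform: { name: 'windows', version: '11' },
  },
  customData: {
    title: 'Run Info',
    data: [
      { label: 'Project', value: 'spurtype' },
      { label: 'Test Environment', value: 'QA' },
    ],
  },
};

if (reporter) {
  reporter.generate(options);
}
```

Running `npm run scr` (per the scripts section above) would execute this file after a test run.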
Cypress-cucumber-preprocessor.json:
This JSON configuration file is primarily used to manage the Cypress Cucumber preprocessor settings. It enables JSON logging, message output, and HTML report generation, and it specifies the location of step definition files.
The stepDefinitions key specifies the directory where step definition files are located. These files contain the implementation for the Gherkin feature file steps.
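Based on the description above, a typical preprocessor configuration looks like the sketch below; the output paths and step-definition glob are assumptions — adjust them to your folder layout.

```json
{
  "json": {
    "enabled": true,
    "output": "cypress/cucumber-json/results.json"
  },
  "messages": {
    "enabled": true,
    "output": "cypress/cucumber-json/messages.ndjson"
  },
  "html": {
    "enabled": true,
    "output": "cypress/reports/cucumber-report.html"
  },
  "stepDefinitions": "cypress/e2e/**/*.{js,ts}"
}
```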
Conclusion:
Cypress and TypeScript together create a powerful and efficient framework for both web applications and API automation. By leveraging Cypress’s fast execution and robust automation capabilities alongside TypeScript’s strong typing and code scalability, we can build reliable, maintainable, and scalable test suites.
With features like Cucumber BDD integration, JSON reporting, HTML test reports, and API automation utilities, Cypress enables seamless test execution, while TypeScript enhances code quality, error handling, and developer productivity. The structured approach of defining page objects, API utilities, and configuration files ensures a well-organized framework that is both flexible and efficient.
As automation testing continues to evolve, integrating Cypress with TypeScript proves to be a future-ready solution for modern software testing needs. Whether it’s UI automation, API validation, or end-to-end testing, this dynamic combination offers speed, accuracy, and maintainability, making it an essential choice for testing high-quality web applications.