Integrating Google Lighthouse with Playwright
Picture this: Your development team just shipped a major feature update. The code passed all functional tests. QA signed off. Everything looks perfect in staging. You hit deploy with confidence.
Then the complaints start rolling in.
“The page takes forever to load.” “Images are broken on mobile.” “My browser is lagging.”
Sound familiar? According to Google, 53% of mobile users abandon sites that take longer than 3 seconds to load. Yet most teams only discover performance issues after they’ve reached production, when the damage to user experience and brand reputation is already done.
The real problem isn’t that teams don’t care about performance. It’s that performance testing is often manual, inconsistent, and disconnected from the development workflow. Performance degradation is gradual. It sneaks up on you. And by the time you notice, you’re playing catch-up instead of staying ahead.
The Gap Between Awareness and Action
Most engineering teams know they should monitor web performance. They’ve heard about Core Web Vitals, Time to Interactive, and First Contentful Paint. They understand that performance impacts SEO rankings, conversion rates, and user satisfaction.
But knowing and doing are two different things.
The challenge lies in making performance testing continuous, automated, and actionable. Manual audits are time-consuming and prone to human error. They create bottlenecks in the release pipeline. What teams need is a way to bake performance testing directly into their automation frameworks, treating performance as a first-class citizen alongside functional testing.
Enter Google Lighthouse.
What Is Google Lighthouse?
Google Lighthouse is an open-source, automated tool designed to improve the quality of web pages. Originally developed by Google’s Chrome team, Lighthouse has become the industry standard for web performance auditing, and it integrates naturally with automation tools like Playwright.
But here’s what makes Lighthouse truly powerful: it doesn’t just measure performance; it provides actionable insights.
When you run a Lighthouse audit, you get comprehensive scores across five key categories:
Performance: Load times, rendering metrics, and resource optimization
Accessibility: ARIA attributes, color contrast, semantic HTML
Best Practices: Security, modern web standards, browser compatibility
SEO: Meta tags, mobile-friendliness, structured data
Progressive Web App: Service workers, offline functionality, installability
Each category receives a score from 0 to 100, with detailed breakdowns of what’s working and what needs improvement. The tool analyzes critical metrics like:
First Contentful Paint (FCP): When the first content renders
Largest Contentful Paint (LCP): When the main content is visible
Total Blocking Time (TBT): How long the page is unresponsive
Cumulative Layout Shift (CLS): Visual stability during load
Speed Index: How quickly content is visually populated
These metrics align directly with Google’s Core Web Vitals, the signals that impact search rankings and user experience.
Why Performance Can’t Be an Afterthought
Let’s talk numbers, because performance isn’t just a technical concern; it’s a business imperative.
Amazon found that every 100ms of latency cost them 1% in sales. Pinterest increased sign-ups by 15% after reducing perceived wait time by 40%. The BBC discovered they lost an additional 10% of users for every extra second their site took to load.
The data is clear: performance directly impacts your bottom line.
But beyond revenue, there’s the SEO factor. Since 2021, Google has used Core Web Vitals as ranking signals. Sites with poor performance scores get pushed down in search results. You could have the most comprehensive content in your niche, but if your LCP is above 4 seconds, you’re losing visibility.
The question isn’t whether performance matters. The question is: how do you ensure performance doesn’t degrade as your application evolves?
The Power of Integration: Lighthouse Meets Automation
This is where the magic happens: integrating Google Lighthouse into your automation framework.
By integrating Lighthouse with Playwright, Selenium, or Cypress, you transform performance from a periodic manual check into a continuous, automated quality gate.
Here’s what this integration delivers:
1. Consistency Across Environments
Automated Lighthouse tests run in controlled environments with consistent configurations, giving you reliable, comparable data across test runs.
2. Early Detection of Performance Regressions
Instead of discovering performance issues in production, you catch them during development. A developer adds a large unoptimized image? The Lighthouse test fails before the code merges.
3. Performance Budgets and Thresholds
You can set specific performance budgets, for example “Performance score must be above 90.” If a change violates these budgets, the build fails, just like a failing functional test.
4. Comprehensive Reporting
Lighthouse generates detailed HTML and JSON reports with visual breakdowns, diagnostic information, and specific recommendations. These reports become part of your test artifacts.
How Integration Works: A High-Level Flow
You don’t need to be a performance expert to integrate Lighthouse into your automation framework. The process is straightforward and fits naturally into existing testing workflows.
Step 1: Install Lighthouse
Lighthouse is available as an npm package, making it easy to add to any Node.js-based automation project. It integrates seamlessly with popular frameworks.
Step 2: Configure Your Audits
Define what you want to test: which pages, which metrics, and what thresholds constitute a pass or fail. You can customize Lighthouse to focus on specific categories or run full audits across all five areas.
Step 3: Integrate with Your Test Suite
Add Lighthouse audits to your existing test files. Your automation framework handles navigation and setup, then hands off to Lighthouse for the performance audit. The results come back as structured data you can assert against.
Step 4: Set Performance Budgets
Define acceptable thresholds for key metrics. These become your quality gates: if performance drops below the threshold, the test fails and the pipeline stops.
Step 5: Generate and Store Reports
Configure Lighthouse to generate HTML and JSON reports. Store these as test artifacts in your CI/CD system, making them accessible for review and historical analysis.
Step 6: Integrate with CI/CD
Run Lighthouse tests as part of your continuous integration pipeline. With every pull request and every deployment, performance gets validated automatically.
The beauty of this approach is that it requires minimal changes to your existing workflow. You’re not replacing your automation framework; you’re enhancing it with performance capabilities.
Practical Implementation: Code Examples
Let’s look at how this works in practice with a real Playwright automation framework. The audit is described in a BDD feature file, and a sketch of a reusable Lighthouse runner that could sit behind these steps follows the feature:
Feature: Integrating Google Lighthouse with the Test Automation Framework
This feature leverages Google Lighthouse to evaluate the performance,
accessibility, SEO, and best practices of web pages.
@test
Scenario: Validate the Lighthouse Performance Score for the Playwright Official Page
Given I navigate to the Playwright official website
When I initiate the Lighthouse audit
And I click on the "Get started" button
And I wait for the Lighthouse report to be generated
Then I generate the Lighthouse report
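Behind these BDD steps sits a small runner utility. The framework’s exact implementation isn’t shown here, but as a rough sketch (class, method, and file names below are illustrative assumptions, not the framework’s own API), one common approach is to launch Chromium through Playwright with a remote-debugging port and point the Lighthouse CLI, installed globally via npm, at that port:

import com.microsoft.playwright.Browser;
import com.microsoft.playwright.BrowserType;
import com.microsoft.playwright.Page;
import com.microsoft.playwright.Playwright;
import org.json.JSONObject;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class LighthouseRunner {

    private static final int DEBUG_PORT = 9222;

    // Opens the page with Playwright, runs a Lighthouse audit against it via the CLI,
    // and returns the performance score on a 0-100 scale.
    public static int runAudit(String url, Path reportPath) throws Exception {
        try (Playwright playwright = Playwright.create()) {
            // Launch Chromium with a remote-debugging port so the Lighthouse CLI can attach to it
            Browser browser = playwright.chromium().launch(new BrowserType.LaunchOptions()
                    .setHeadless(true)
                    .setArgs(Arrays.asList("--remote-debugging-port=" + DEBUG_PORT)));
            Page page = browser.newPage();
            page.navigate(url);

            // Run the Lighthouse CLI (assumes "lighthouse" is installed globally via npm and on the PATH)
            Process lighthouse = new ProcessBuilder(
                    "lighthouse", url,
                    "--port=" + DEBUG_PORT,
                    "--output=json",
                    "--output-path=" + reportPath,
                    "--quiet")
                    .inheritIO()
                    .start();
            lighthouse.waitFor();
            browser.close();
        }

        // The JSON report exposes each category score as a value between 0 and 1
        JSONObject report = new JSONObject(Files.readString(reportPath));
        double score = report.getJSONObject("categories")
                .getJSONObject("performance")
                .getDouble("score");
        return (int) Math.round(score * 100);
    }
}

A step definition behind “Then I generate the Lighthouse report” can then assert the returned score against your performance budget, for example failing the scenario whenever the score drops below 90.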
Decoding Lighthouse Reports: What the Data Tells You
Lighthouse reports are information-rich, but they’re designed to be actionable, not overwhelming. Let’s break down what you get:
The Performance Score
This is your headline number: a weighted average of key performance metrics. A score of 90-100 is excellent, 50-89 needs improvement, and below 50 requires immediate attention.
Metric Breakdown
Each performance metric gets its own score and timing. You’ll see exactly how long FCP, LCP, TBT, CLS, and Speed Index took, color-coded to show if they’re in the green, orange, or red zone.
Opportunities
This section is gold. Lighthouse identifies specific optimizations that would improve performance, ranked by potential impact. “Eliminate render-blocking resources” might save 2.5 seconds. “Properly size images” could save 1.8 seconds. Each opportunity includes technical details and implementation guidance.
Diagnostics
These are additional insights that don’t directly impact the performance score but highlight areas for improvement, such as excessive DOM size, unused JavaScript, or inefficient cache policies.
Passed Audits
Don’t ignore these! They show what you’re doing right, which is valuable for understanding your performance baseline and maintaining good practices.
Accessibility and SEO Insights
Beyond performance, you get actionable feedback on accessibility issues (missing alt text, poor color contrast) and SEO problems (missing meta descriptions, unreadable font sizes on mobile).
The JSON output is equally valuable for programmatic analysis. You can extract specific metrics, track them over time, and build custom dashboards or alerts based on performance trends.
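For instance, a small helper (a sketch that assumes the standard Lighthouse JSON report structure; the file name is illustrative) can pull individual metric values out of the report so they can be tracked over time:

import org.json.JSONObject;
import java.nio.file.Files;
import java.nio.file.Paths;

public class LighthouseMetrics {
    // Reads a Lighthouse JSON report and prints key Core Web Vitals values
    public static void main(String[] args) throws Exception {
        JSONObject report = new JSONObject(Files.readString(Paths.get("lighthouse-report.json")));
        JSONObject audits = report.getJSONObject("audits");

        double lcpMs = audits.getJSONObject("largest-contentful-paint").getDouble("numericValue");
        double tbtMs = audits.getJSONObject("total-blocking-time").getDouble("numericValue");
        double cls = audits.getJSONObject("cumulative-layout-shift").getDouble("numericValue");

        System.out.printf("LCP: %.0f ms, TBT: %.0f ms, CLS: %.3f%n", lcpMs, tbtMs, cls);
    }
}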
Real-World Impact
Let’s look at practical scenarios where this integration delivers measurable value:
E-Commerce Platform
An online retailer integrated Lighthouse into their Playwright test suite, running audits on product pages and checkout flows. They set a performance budget requiring scores above 90. Within three months, they caught 14 performance regressions before production, including a third-party analytics script blocking rendering.
SaaS Platform
A B2B SaaS company added Lighthouse audits to their test suite, focusing on dashboard interfaces. They discovered their data visualization library was causing significant Total Blocking Time. The Lighthouse diagnostics pointed them to specific JavaScript bundles needing code-splitting.
Result: Reduced TBT by 60%, improving perceived responsiveness and reducing support tickets.
Content Publisher
A media company integrated Lighthouse into their deployment pipeline, auditing article pages with strict accessibility and SEO thresholds. This caught issues like missing alt text, poor heading hierarchy, and oversized media files.
Result: Improved SEO rankings, increased organic traffic by 23%, and ensured WCAG compliance.
The Competitive Advantage
Here’s what separates high-performing teams from the rest: they treat performance as a feature, not an afterthought.
By integrating Google Lighthouse with Playwright or any other automation framework, you’re building a culture of performance awareness. Developers get immediate feedback on the performance impact of their changes. Stakeholders get clear, visual reports demonstrating the business value of optimization work.
You shift from reactive firefighting to proactive prevention. Instead of scrambling to fix performance issues after users complain, you prevent them from ever reaching production.
Getting Started
You don’t need to overhaul your entire testing infrastructure. Start small:
Pick one critical user journey, such as your homepage or checkout flow
Add a single Lighthouse audit to your existing test suite
Set a baseline by running the audit and recording current scores
Define one performance budget, perhaps a performance score above 80
Integrate it into your CI/CD pipeline so it runs automatically
From there, you can expand: add more pages, tighten thresholds, incorporate additional metrics. The key is to start building that performance feedback loop.
Conclusion: Performance as a Continuous Practice
Web performance isn’t a one-time fix. It’s an ongoing commitment that requires visibility, consistency, and automation. Google Lighthouse provides the measurement and insights. Your automation framework provides the execution and integration. Together, they create a powerful system for maintaining and improving web performance at scale.
The teams that win in today’s digital landscape are those that make performance testing as routine as functional testing. They’re the ones catching regressions early, maintaining high standards, and delivering consistently fast experiences to their users.
The question is: will you be one of them?
Ready to boost your web performance? Start by integrating Google Lighthouse into your automation framework today. Your users and your bottom line will thank you.
10 QA Prompting Tips Every Tester Should Know
Here’s a scenario that plays out in QA teams everywhere:
A tester spends 45 minutes manually writing test cases for a new feature. Another tester, working on the same type of feature, finishes in 12 minutes with better coverage, clearer scenarios, and more edge cases identified.
What’s the difference? Experience isn’t the deciding factor, and tools alone don’t explain it either. The real advantage comes from how they communicate with intelligent systems using effective QA Prompting Tips.
The testing world is changing more rapidly than we realise. Today, every QA engineer interacts with AI-powered tools, whether generating test cases, validating user stories, analysing logs, or debugging complex issues. But here’s the uncomfortable truth: most testers miss out on 80% of the value simply because they don’t know how to ask the right questions. That gap is exactly what these QA prompting tips address.
That’s where prompting comes in.
Prompting isn’t about typing fancy commands or memorising templates. It’s about asking the right questions, in the right context, at the right time. It’s a skill that multiplies your testing expertise rather than replacing it.
Think of it this way: You wouldn’t write a bug report that just says “Login broken.” You’d provide steps to reproduce, expected vs. actual results, environment details, and severity. The same principle applies to prompting: specificity and structure determine the quality of what you get back.
In this article, we’ll break down 10 simple yet powerful prompting secrets that can transform your day-to-day testing from reactive to strategic, from time-consuming to efficient, and from good to exceptional.
1. Context Is Everything
If you ask something vague, you’ll get vague answers. It’s that simple.
Consider these two prompts:
❌ Bad Prompt: “Write test cases for login.”
✅ Good Prompt: “You are a QA engineer for a healthcare application that handles sensitive patient data and must comply with HIPAA regulations. Write 10 test cases for the login module, focusing on data privacy, security vulnerabilities, session management, and multi-factor authentication.”
The difference? Context transforms generic output into actionable testing artifacts.
The first prompt might give you basic username/password validation scenarios. The second gives you security-focused test cases that consider regulatory compliance, session timeout scenarios, MFA edge cases, and data encryption validation: exactly what a healthcare app needs.
Why Context Matters
When you provide real-world details, AI tools can:
Align responses with your specific domain (fintech, healthcare, e-commerce)
Key Takeaway: Always include the “where” and “why” before the “what.” Context makes your prompts intelligent, not just informative, and it is the foundation every other tip builds on.
2. Define the Role Before the Task
Before you ask for anything, define what the system should think like. This single technique can elevate responses from junior-level to expert-level instantly.
✅ Effective Role Definition: “You are a senior QA engineer with 8 years of experience in exploratory testing and API validation. Review this user story and identify potential edge cases, security vulnerabilities, and performance bottlenecks.”
By assigning a role, you’re setting the expertise level, perspective, and focus area. The response shifts from surface-level observations to nuanced, experience-driven insights.
Role Examples for Different Testing Needs
For test case generation: “You are a detail-oriented QA analyst specializing in boundary value analysis…”
For bug analysis: “You are a senior test engineer experienced in root cause analysis…”
For automation: “You are a test automation architect with expertise in framework design…”
For performance: “You are a performance testing specialist, an expert in load testing methodologies and tools.”
Key Takeaway: Assign a role first, then give the task. It fundamentally changes the quality and depth of what you receive.
3. Structure the Output
QA engineers thrive on structured tables, columns, and clear formats. So ask for it explicitly.
✅ Structured Prompt: “Generate 10 test cases for the password reset feature in a table format with columns for: Test Case ID, Test Scenario, Pre-conditions, Test Steps, Expected Result, Actual Result, and Priority (High/Medium/Low).”
This gives you something that’s immediately copy-ready for Jira, TestRail, Zephyr, SpurQuality, or any test management tool. No reformatting. No cleanup. Just actionable test documentation.
Structure Options
Depending on your need, you can request:
Tables for test cases and test data
Numbered lists for test execution steps
Bullet points for quick scenario summaries
JSON/XML for API test data
Markdown for documentation
Gherkin syntax for BDD scenarios
Key Takeaway: Structured prompts produce structured results. Define the format, and you’ll save hours of manual reformatting.
4. Add Clear Boundaries
Boundaries create focus and prevent scope creep in your results.
✅ Bounded Prompt: “Generate exactly 8 test cases for the search functionality: 3 positive scenarios, 3 negative scenarios, and 2 edge cases. Focus only on the basic search feature, excluding advanced filters.”
This approach ensures you get:
The exact quantity you need (no overwhelming lists)
The scenario mix you asked for (positive, negative, and edge cases)
Other boundaries you can set include:
Scope: “Focus only on the checkout process, not the entire cart.”
Test types: “Only functional tests, no performance scenarios”
Priority: “High and medium priority only”
Platforms: “Web application only, exclude mobile”
Key Takeaway: Constraints keep your output precise, relevant, and actionable. They prevent information overload and maintain focus.
5. Build Step by Step (Prompt Chaining)
Just as QA processes are iterative, effective prompting follows a similar pattern. Instead of asking for everything at once, break it into logical steps.
Example Prompt Chain
Step 1:
“Analyze this user story and summarize the key functional requirements in 3-4 bullet points.”
Step 2:
“Based on those requirements, create 5 high-level test scenarios covering happy path, error handling, and edge cases.”
Step 3:
“Expand the second scenario into detailed test steps with expected results.”
Step 4:
“Identify potential automation candidates from these scenarios and explain why they’re suitable for automation.”
This layered approach produces clear, logical, and well-thought-out results. Each step builds on the previous one, creating a coherent testing strategy rather than disconnected outputs.
Key Takeaway: Prompt chaining mirrors your testing mindset. It’s iterative, logical, and produces higher-quality results than single-shot prompts.
6. Use Prompts for Reviews, Not Just Creation
Don’t limit AI tools to creation tasks; leverage them as your review partner.
Review Prompt Examples
✅ Test Case Review: “Review these 10 test cases for the payment gateway. Identify any missing scenarios, redundant steps, or unclear expected results.”
✅ Bug Report Quality Check: “Analyze this bug report and suggest improvements to make it clearer for developers. Focus on reproducibility, clarity, and completeness.”
✅ Test Summary Comparison: “Compare these two test execution summary reports and highlight which one communicates results more effectively to stakeholders.”
✅ Documentation Review: “Review this test plan and identify sections that lack clarity or need more detail.”
This transforms your workflow from one-directional (you create, you review) to collaborative (AI assists in both creation and quality assurance).
Key Takeaway: Use AI as your review partner, not just your assistant. It catches what you might miss and improves overall quality.
7. Use Real Scenarios and Data
Generic prompts produce generic results. Feed real test data, actual API responses, or specific scenarios for practical insights.
✅ Real-Data Prompt: “Here’s the actual API response from our login endpoint: {"status": 200, "token": null, "message": "Success"}. Even though the status is 200 and the message is success, this is causing authentication failures. What could be the root cause, and what test scenarios should I add to catch this in the future?”
This gives you:
Specific debugging insights based on actual data
Relevant test scenarios tied to real issues
Actionable recommendations, not theoretical advice
When to Use Real Data
Debugging: Paste actual logs, error messages, or API responses
Test data generation: Provide sample data formats
Scenario validation: Share actual user workflows
Regression analysis: Include historical bug patterns
Key Takeaway: Realistic inputs produce realistic testing insights. The more specific your input, the more valuable your output.
Note: Be cautious about the data you send to an AI model; it might be used for training purposes. Always prefer a paid subscription with a clear data privacy policy.
8. Set the Quality Bar
If you want a particular tone, standard, or level of professionalism, specify it upfront.
✅ Quality-Defined Prompts:
“Write concise, ISTQB-style test scenarios for the mobile registration flow using standard testing terminology.”
“Generate a bug report following IEEE 829 standards with proper severity classification and detailed reproduction steps.”
“Create BDD scenarios in Gherkin syntax following best practices for Given-When-Then structure.”
This instantly elevates the tone, structure, and professionalism of the output. You’re not getting casual descriptions; you’re getting industry-standard documentation.
Quality Standards to Reference
ISTQB for test case terminology
IEEE 829 for test documentation
Gherkin/BDD for behaviour-driven scenarios
ISO 25010 for quality characteristics
OWASP for security testing
Key Takeaway: Define the tone and quality standard upfront. It ensures outputs align with professional testing practices.
9. Refine and Iterate
Just like debugging, your first prompt won’t be perfect. And that’s okay.
After getting an initial result, refine it with follow-up prompts:
Initial Prompt: “Generate test cases for user registration.”
Refinement Prompts:
✅ “Add data validation scenarios for email format and password strength.”
✅ “Rank these test cases by priority based on business impact.”
✅ “Include estimated effort for each test case (Small/Medium/Large).”
✅ “Add a column for automation feasibility.”
Each iteration moves you from good to great. You’re sculpting the output to match your exact needs.
Iteration Strategies
Add missing elements: “Include security test scenarios”
Adjust scope: “Remove low-priority cases and add more edge cases”
Change format: “Convert this to Gherkin syntax”
Enhance detail: “Expand test steps with more specific actions”
Key Takeaway: Refinement is where you move from good to exceptional. Don’t settle for the first output; iterate until it’s exactly what you need.
10. Ask for Prompt Feedback
Here’s a meta-technique: You can ask AI to improve your own prompts.
✅ Meta-Prompt Example: “Here’s the prompt I’m using to generate API test cases: [your prompt]. Analyze it and suggest how to make it more specific, QA-focused, and likely to produce better test scenarios.”
The system will reword, optimize, and enhance your prompt automatically. It’s like having a prompt coach.
What to Ask For
“How can I make this prompt more specific?”
“What context am I missing that would improve the output?”
“Rewrite this prompt to be more structured and clear.”
“What role definition would work best for this testing task?”
Key Takeaway: Always review and optimize your own prompts just like you’d review your test cases. Continuous improvement applies to prompting, too.
The QA Prompting Pyramid: A Framework for Mastery
Think of effective prompting as a pyramid. Each level builds on the previous one, creating a foundation for expert-level results.
| Level | Principle | Focus | Impact |
| 🧱 Base | Context | Relevance | Ensures outputs match your domain and needs |
| 🎭 Level 2 | Role Definition | Perspective | Elevates expertise level of responses |
| 📋 Level 3 | Structure | Clarity | Makes outputs immediately usable |
| 🎯 Level 4 | Constraints | Precision | Prevents scope creep and information overload |
| 🪜 Level 5 | Iteration | Refinement | Transforms good outputs into exceptional ones |
| 🧠 Apex | Self-Improvement | Mastery | Continuously optimizes your prompting skills |
Start at the base and work your way up. Master each level before moving to the next. By the time you reach the apex, prompting becomes second nature, a natural extension of your testing expertise.
Real-World Impact: How Prompting Transforms QA Work
Let’s look at practical scenarios where these techniques deliver measurable results:
Test Case Generation
A QA team at a fintech company used structured prompting to generate test cases for a new payment feature. By providing context (PCI-DSS compliance), defining roles (security-focused QA), and setting boundaries (20 test cases covering security, functionality, and edge cases), they reduced test case creation time from 3 hours to 25 minutes while improving coverage by 40%.
Bug Analysis and Root Cause Investigation
A tester struggling with an intermittent bug used real API response data in their prompt, asking for potential root causes and additional test scenarios. Within minutes, they identified a race condition that would have taken hours to debug manually.
Test Automation Strategy
An automation engineer used prompt chaining to develop a framework strategy starting with requirements analysis, moving to tool selection, then architecture design, and finally implementation priorities. The structured approach created a comprehensive automation roadmap in one afternoon.
Documentation Review
A QA lead used review prompts to analyze test plans before stakeholder presentations. The AI identified unclear sections, missing risk assessments, and inconsistent terminology: issues that would otherwise have surfaced during the actual presentation.
The Competitive Advantage: Why This Matters Now
Here’s the reality: AI won’t replace testers, but testers who know how to prompt will replace those who don’t.
This isn’t about job security; it’s about effectiveness. The QA engineers who master prompting will:
Deliver faster without sacrificing quality
Think more strategically by offloading routine tasks
Catch more issues through comprehensive scenario generation
Communicate better with clearer documentation and reports
Stay relevant as testing evolves
Prompting is becoming as fundamental to QA as writing test cases or understanding requirements. It’s not a nice-to-have skill; it’s a must-have multiplier.
Getting Started: Your First Steps
You don’t need to master all 10 techniques overnight. Start small and build momentum:
First Week: Foundation
Practice adding context to every prompt
Define roles before tasks
Track the difference in output quality
Second Week: Structure
Request structured outputs (tables, lists)
Set clear boundaries on scope and quantity
Compare structured vs. unstructured results
Third Week: Advanced
Try prompt chaining for complex tasks
Use prompts for review and feedback
Experiment with real data and scenarios
Fourth Week: Mastery
Set quality standards in your prompts
Iterate and refine outputs
Ask for feedback on your own prompts
The key is consistency. Use these techniques daily, even for small tasks. Over time, they become instinctive.
Conclusion: Prompting as a Core QA Skill
Smart prompting is quickly becoming a core competency for QA professionals. It doesn’t replace your testing expertise; it multiplies it.
When you apply these 10 techniques, you’ll notice your test cases become more comprehensive, your bug reports clearer, your scenario planning sharper, and your overall productivity significantly higher.
Remember this simple truth:
“The best testers aren’t those who work harder; they’re those who work smarter by asking better questions.”
So start today. Pick one or two of these techniques and apply them to your next testing task. Notice the difference. Refine your approach. And watch as your testing workflow transforms from reactive to strategic.
The future of QA isn’t about replacing human intelligence with artificial intelligence. It’s about augmenting human expertise with intelligent tools, and prompting is the bridge between the two.
Your Next Steps
If you found these techniques valuable:
Share this article with your QA team and start a conversation about prompting best practices
Bookmark this guide and reference it when crafting your next prompt
Try one technique today, pick the easiest one, and apply it to your current task
Drop a comment below. What’s your go-to prompt that saves you time? What challenges do you face with prompting?
Follow for more. We’ll be publishing guides on advanced prompt patterns, AI-driven test automation, and QA productivity hacks.
Your prompting journey starts with a single, well-crafted question. Make it count.
API Automation Testing Framework
In today’s fast-paced digital ecosystem, almost every modern application relies on APIs (Application Programming Interfaces) to function seamlessly. Whether it’s a social media integration pulling live updates, a payment gateway processing transactions, or a data service exchanging real-time information, APIs act as the invisible backbone that connects various systems together.
Because APIs serve as the foundation of all interconnected software, ensuring that they are reliable, secure, and high-performing is absolutely critical. Even a minor API failure can impact multiple dependent systems, causing application downtime, data mismatches, or even financial loss.
That’s where an API automation testing framework comes in. Unlike traditional UI testing, API testing validates the core business logic directly at the backend layer, which makes it faster, more stable, and capable of detecting issues early in the development cycle — even before the frontend is ready.
In this blog, we’ll walk through the process of building a complete API Automation Testing Framework using a combination of:
Java – as the main programming language
Maven – for project and dependency management
Cucumber – to implement Behavior Driven Development (BDD)
RestAssured – for simplifying RESTful API automation
Playwright – to handle browser-based token generation
The framework you’ll learn to build will follow a BDD (Behavior-Driven Development) approach, enabling test scenarios to be written in simple, human-readable language. This not only improves collaboration between developers, testers, and business analysts but also makes test cases easier to understand, maintain, and extend.
Additionally, the API automation testing framework will be CI/CD-friendly, meaning it can be seamlessly integrated into automated build pipelines for continuous testing and faster feedback.
By the end of this guide, you’ll have a scalable, reusable, and maintainable API testing framework that brings together the best of automation, reporting, and real-time token management — a complete solution for modern QA teams.
What is an API?
An API (Application Programming Interface) acts as a communication bridge between two software systems, allowing them to exchange information in a standardized way. In simpler terms, it defines how different software components should interact — through a set of rules, protocols, and endpoints.
Think of an API as a messenger that takes a request from one system, delivers it to another system, and then brings back the response. This interaction, therefore, allows applications to share data and functionality without exposing their internal logic or database structure.
Let’s take a simple example: When you open a weather application on your phone, it doesn’t store weather data itself. Instead, it sends a request to a weather server API, which processes the request and sends back a response — such as the current temperature, humidity, or forecast. This request-response cycle is what makes APIs so powerful and integral to almost every digital experience we use today.
Most modern APIs follow the REST (Representational State Transfer) architectural style. REST APIs use the HTTP protocol and are designed around a set of standardized operations, including:
| HTTP Method | Description | Example Use |
| GET | Retrieve data from the server | Fetch a list of users |
| POST | Create new data on the server | Add a new product |
| PUT | Update existing data | Edit user details |
| DELETE | Remove data | Delete a record |
The responses returned by APIs are typically in JSON (JavaScript Object Notation) format – a lightweight, human-readable, and machine-friendly data format that’s easy to parse and validate.
In essence, APIs are the digital glue that holds modern applications together — enabling smooth communication, faster integrations, and a consistent flow of information across systems.
What is API Testing?
API Testing is the process of verifying that an API functions correctly and performs as expected — ensuring that all its endpoints, parameters, and data exchanges behave according to defined business rules.
In simple terms, it’s about checking whether the backend logic of an application works properly — without needing a graphical user interface (UI). Since APIs act as the communication layer between different software components, testing them helps ensure that the entire system remains reliable, secure, and efficient.
API testing typically focuses on four main aspects:
Functionality – Does the API perform the intended operation and return the correct response for valid requests?
Reliability – Does it deliver consistent results every time, even under different inputs and conditions?
Security – Is the API protected from unauthorized access, data leaks, or token misuse?
Performance – Does it respond quickly and remain stable under heavy load or high traffic?
Unlike traditional UI testing, which validates the visual and interactive parts of an application, API testing operates directly at the business logic layer. This makes it:
Faster – Since it bypasses the UI, execution times are much shorter.
More Stable – UI changes (like a button name or layout) don’t affect API tests.
Proactive – Tests can be created and run even before the front-end is developed.
In essence, API testing ensures the heart of your application is healthy. By validating responses, performance, and security at the API level, teams can detect defects early, reduce costs, and deliver more reliable software to users.
Why is API Testing Important?
API Testing plays a vital role in modern software development because APIs form the backbone of most applications. A failure in an API can affect multiple systems and impact overall functionality.
Here’s why API testing is important:
Ensures Functionality: Verifies that endpoints return correct responses and handle errors properly.
Enhances Security: Detects vulnerabilities like unauthorized access or token misuse.
Validates Data Integrity: Confirms that data remains consistent across APIs and databases.
Improves Performance: Checks response time, stability, and behavior under load.
Detects Defects Early: Allows early testing right after backend development, saving time and cost.
Supports Continuous Integration: Easily integrates with CI/CD pipelines for automated validation.
In short, API testing ensures your system’s core logic is reliable, secure, and ready for real-world use.
Tools for Manual API Testing
Before jumping into automation, it’s essential to explore and understand APIs manually. Manual testing helps you validate endpoints, check responses, and get familiar with request structures.
Here are some popular tools used for manual API testing:
Postman: The most widely used tool for sending API requests, validating responses, and organizing test collections (https://www.postman.com/).
SoapUI: Best suited for testing both SOAP and REST APIs with advanced features like assertions and mock services.
Insomnia: A lightweight and user-friendly alternative to Postman, ideal for quick API exploration.
cURL: A command-line tool perfect for making fast API calls or testing from scripts.
Fiddler: Excellent for capturing and debugging HTTP/HTTPS traffic between client and server.
Using these tools helps testers understand API behavior, request/response formats, and possible edge cases — forming a strong foundation before moving to API automation.
Tools for API Automation Testing
After verifying APIs manually, the next step is to automate them using reliable tools and libraries. Automation helps improve test coverage, consistency, and execution speed.
Here are some popular tools used for API automation testing:
RestAssured: A powerful Java library designed specifically for testing and validating RESTful APIs.
Cucumber: Enables writing test cases in Gherkin syntax (plain English), making them easy to read and maintain.
Playwright: Automates browser interactions; in our framework, it will be used for token generation or authentication flows.
Postman + Newman: Allows you to run Postman collections directly from the command line — ideal for CI/CD integration.
JMeter: A robust tool for performance and load testing of APIs under different conditions.
In this blog, our focus will be on building a framework using RestAssured, Cucumber, and Playwright — combining functional, BDD, and authentication automation into one cohesive setup.
Framework Overview
We’ll build a Behavior-Driven API Automation Testing Framework that combines multiple tools for a complete testing solution. Here’s how each component fits in:
Cucumber – Manages the BDD layer, allowing test scenarios to be written in simple, readable feature files.
RestAssured – Handles HTTP requests and responses for validating RESTful APIs.
Playwright – Automates browser-based actions like token generation or authentication.
Maven – Manages project dependencies, builds, and plugins efficiently.
Cucumber HTML Reports – Automatically generates detailed execution reports after each run.
The framework follows a modular structure, with separate packages for step definitions, utilities, configurations, and feature files — ensuring clean organization, easy maintenance, and scalability.
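To give you a feel for that structure, a typical layout (folder and package names here are indicative, mirroring the packages used in the code later in this post) might look like this:

src/test/java/org/Spurqlabs/Core - TestContext and shared hooks
src/test/java/org/Spurqlabs/Steps - step definition classes (CommonSteps)
src/test/java/org/Spurqlabs/Utils - APIUtility, TokenManager, FrameworkConfigReader, JsonFileReader
src/test/java/org/Spurqlabs/Runner - TestRunner (TestNG + Cucumber entry point)
src/test/resources/Features - .feature files
src/test/resources/Schema - JSON schema files for response validation
src/test/resources/TestData - request bodies, query parameters, and headers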
Step 3: Creating a Feature File
In this step, we will create a feature file for the API Automation Testing Framework. A feature file consists of steps written in the Gherkin language. Because Gherkin reads like plain English, even a non-technical person can follow the flow of a test scenario. In this framework we will automate the four basic API request methods: POST, PUT, GET, and DELETE.
We can assign tags to the scenarios in the feature file to run particular test scenarios based on the requirement. The key point to notice here is that the feature file must end with the .feature extension. We will create four different scenarios, one for each API method.
Feature: All Notes API Validation
@api
Scenario Outline: Validate POST Create Notes API Response for "<scenarioName>" Scenario
When User sends "<method>" request to "<url>" with headers "<headers>" and query file "<queryFile>" and requestDataFile "<bodyFile>"
Then User verifies the response status code is <statusCode>
And User verifies the response body matches JSON schema "<schemaFile>"
Then User verifies fields in response: "<contentType>" with content type "<fields>"
Examples:
| scenarioName | method | url | headers | queryFile | bodyFile | statusCode | schemaFile | contentType | fields |
| Valid create Notes | POST | /api/v1/loan-syndications/{dealId}/investors/{investorId}/notes | NA | NA | Create_Notes_Request | 200 | NA | NA | NA |
Scenario Outline: Validate GET Notes API Response for "<scenarioName>" Scenario
When User sends "<method>" request to "<url>" with headers "<headers>" and query file "<queryFile>" and requestDataFile "<bodyFile>"
Then User verifies the response status code is <statusCode>
And User verifies the response body matches JSON schema "<schemaFile>"
Then User verifies fields in response: "<contentType>" with content type "<fields>"
Examples:
| scenarioName | method | url | headers | queryFile | bodyFile | statusCode | schemaFile | contentType | fields |
| Valid Get Notes | GET | /api/v1/loan-syndications/{dealId}/investors/{investorId}/notes | NA | NA | NA | 200 | Notes_Schema_200 | json | note=This is Note 1 |
Scenario Outline: Validate Update Notes API Response for "<scenarioName>" Scenario
When User sends "<method>" request to "<url>" with headers "<headers>" and query file "<queryFile>" and requestDataFile "<bodyFile>"
Then User verifies the response status code is <statusCode>
And User verifies the response body matches JSON schema "<schemaFile>"
Then User verifies fields in response: "<contentType>" with content type "<fields>"
Examples:
| scenarioName | method | url | headers | queryFile | bodyFile | statusCode | schemaFile | contentType | fields |
| Valid update Notes | PUT | /api/v1/loan-syndications/{dealId}/investors/{investorId}/notes/{noteId}/update-notes | NA | NA | Update_Notes_Request | 200 | NA | NA | NA |
Scenario Outline: Validate DELETE Notes API Response for "<scenarioName>" Scenario
When User sends "<method>" request to "<url>" with headers "<headers>" and query file "<queryFile>" and requestDataFile "<bodyFile>"
Then User verifies the response status code is <statusCode>
And User verifies the response body matches JSON schema "<schemaFile>"
Then User verifies fields in response: "<contentType>" with content type "<fields>"
Examples:
| scenarioName | method | url | headers | queryFile | bodyFile | statusCode | schemaFile | contentType | fields |
| Valid delete | DELETE | /api/v1/loan-syndications/{dealId}/investors/{investorId}/notes/{noteId} | NA | NA | NA | 200 | NA | NA | NA |
Step 4: Creating a Step Definition File
Unlike the UI automation framework we built in the previous blog, here we will create a single step definition file for all the feature files. In a BDD framework, step definition files map and implement the steps described in the feature files. We write the step text in the step definitions exactly as it appears in the feature file so that Cucumber knows which implementation to run for each step.
package org.Spurqlabs.Steps;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import io.restassured.response.Response;
import org.Spurqlabs.Core.TestContext;
import org.Spurqlabs.Utils.*;
import org.json.JSONArray;
import org.json.JSONObject;
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import static io.restassured.module.jsv.JsonSchemaValidator.matchesJsonSchemaInClasspath;
import static org.Spurqlabs.Utils.DealDetailsManager.replacePlaceholders;
import static org.hamcrest.Matchers.equalTo;
public class CommonSteps extends TestContext {
private Response response;
@When("User sends {string} request to {string} with headers {string} and query file {string} and requestDataFile {string}")
public void user_sends_request_to_with_query_file_and_requestDataFile (String method, String url, String headers, String queryFile, String bodyFile) throws IOException {
String jsonString = Files.readString(Paths.get(FrameworkConfigReader.getFrameworkConfig("DealDetails")), StandardCharsets.UTF_8);
JSONObject storedValues = new JSONObject(jsonString);
String fullUrl = FrameworkConfigReader.getFrameworkConfig("BaseUrl") + replacePlaceholders(url);
Map<String, String> header = new HashMap<>();
if (!"NA".equalsIgnoreCase(headers)) {
header = JsonFileReader.getHeadersFromJson(FrameworkConfigReader.getFrameworkConfig("headers") + headers + ".json");
} else {
header.put("cookie", TokenManager.getToken());
}
Map<String, String> queryParams = new HashMap<>();
if (!"NA".equalsIgnoreCase(queryFile)) {
queryParams = JsonFileReader.getQueryParamsFromJson(FrameworkConfigReader.getFrameworkConfig("Query_Parameters") + queryFile + ".json");
for (String key : queryParams.keySet()) {
String value = queryParams.get(key);
for (String storedKey : storedValues.keySet()) {
value = value.replace("{" + storedKey + "}", storedValues.getString(storedKey));
}
queryParams.put(key, value);
}
}
Object requestBody = null;
if (!"NA".equalsIgnoreCase(bodyFile)) {
String bodyTemplate = JsonFileReader.getJsonAsString(
FrameworkConfigReader.getFrameworkConfig("Request_Bodies") + bodyFile + ".json");
for (String key : storedValues.keySet()) {
String placeholder = "{" + key + "}";
if (bodyTemplate.contains(placeholder)) {
bodyTemplate = bodyTemplate.replace(placeholder, storedValues.getString(key));
}
}
requestBody = bodyTemplate;
}
response = APIUtility.sendRequest(method, fullUrl, header, queryParams, requestBody);
response.prettyPrint();
TestContextLogger.scenarioLog("API", "Request sent: " + method + " " + fullUrl);
if (scenarioName.contains("GET Notes") && response.getStatusCode() == 200) {
DealDetailsManager.put("noteId", response.path("[0].id"));
}
}
@Then("User verifies the response status code is {int}")
public void userVerifiesTheResponseStatusCodeIsStatusCode(int statusCode) {
response.then().statusCode(statusCode);
TestContextLogger.scenarioLog("API", "Response status code: " + statusCode);
}
@Then("User verifies the response body matches JSON schema {string}")
public void userVerifiesTheResponseBodyMatchesJSONSchema(String schemaFile) {
if (!"NA".equalsIgnoreCase(schemaFile)) {
String schemaPath = "Schema/" + schemaFile + ".json";
response.then().assertThat().body(matchesJsonSchemaInClasspath(schemaPath));
TestContextLogger.scenarioLog("API", "Response body matches schema");
} else {
TestContextLogger.scenarioLog("API", "Response body does not have schema to validate");
}
}
@Then("User verifies field {string} has value {string}")
public void userVerifiesFieldHasValue(String jsonPath, String expectedValue) {
response.then().body(jsonPath, equalTo(expectedValue));
TestContextLogger.scenarioLog("API", "Field " + jsonPath + " has value: " + expectedValue);
}
@Then("User verifies fields in response: {string} with content type {string}")
public void userVerifiesFieldsInResponseWithContentType(String contentType, String fields) throws IOException {
// If NA, skip verification
if ("NA".equalsIgnoreCase(contentType) || "NA".equalsIgnoreCase(fields)) {
return;
}
String responseStr = response.getBody().asString().trim();
try {
if ("text".equalsIgnoreCase(contentType)) {
// For text, verify each expected value is present in response
for (String expected : fields.split(";")) {
expected = replacePlaceholders(expected.trim());
if (!responseStr.contains(expected)) {
throw new AssertionError("Expected text not found: " + expected);
}
TestContextLogger.scenarioLog("API", "Text found: " + expected);
}
} else if ("json".equalsIgnoreCase(contentType)) {
// For json, verify key=value pairs
JSONObject jsonResponse;
if (responseStr.startsWith("[")) {
JSONArray arr = new JSONArray(responseStr);
jsonResponse = !arr.isEmpty() ? arr.getJSONObject(0) : new JSONObject();
} else {
jsonResponse = new JSONObject(responseStr);
}
for (String pair : fields.split(";")) {
if (pair.trim().isEmpty()) continue;
String[] kv = pair.split("=", 2);
if (kv.length < 2) continue;
String keyPath = kv[0].trim();
String expected = replacePlaceholders(kv[1].trim());
Object actual = JsonFileReader.getJsonValueByPath(jsonResponse, keyPath);
if (actual == null) {
throw new AssertionError("Key not found in JSON: " + keyPath);
}
if (!String.valueOf(actual).equals(String.valueOf(expected))) {
throw new AssertionError("Mismatch for " + keyPath + ": expected '" + expected + "', got '" + actual + "'");
}
TestContextLogger.scenarioLog("API", "Validated: " + keyPath + " = " + expected);
}
} else {
throw new AssertionError("Unsupported content type: " + contentType);
}
} catch (AssertionError | Exception e) {
TestContextLogger.scenarioLog("API", "Validation failed: " + e.getMessage());
throw e;
}
}
}
Step 5: Creating an API Utility Class
So far we have created a feature file and a step definition file; in this step we will create a utility file. In web automation we typically have page files that contain locators and the actions to perform on web elements, but in this framework we will create a single utility file, just like the step file. The utility file contains the generic API methods used to perform a specific action such as POST, PUT, GET, or DELETE, and it captures the request body (payload) and the response. The reason these methods live in a utility file is reusability: we can call them many times instead of creating the same method over and over again.
package org.Spurqlabs.Utils;
import io.restassured.RestAssured;
import io.restassured.http.ContentType;
import io.restassured.response.Response;
import io.restassured.specification.RequestSpecification;
import java.io.File;
import java.util.Map;
public class APIUtility {
public static Response sendRequest(String method, String url, Map<String, String> headers, Map<String, String> queryParams, Object body) {
RequestSpecification request = RestAssured.given();
if (headers != null && !headers.isEmpty()) {
request.headers(headers);
}
if (queryParams != null && !queryParams.isEmpty()) {
request.queryParams(queryParams);
}
if (body != null && !method.equalsIgnoreCase("GET")) {
if (headers == null || !headers.containsKey("Content-Type")) {
request.header("Content-Type", "application/json");
}
request.body(body);
}
switch (method.trim().toUpperCase()) {
case "GET":
return request.get(url);
case "POST":
return request.post(url);
case "PUT":
return request.put(url);
case "PATCH":
return request.patch(url);
case "DELETE":
return request.delete(url);
default:
throw new IllegalArgumentException("Unsupported HTTP method: " + method);
}
}
}
Step 6: Create a Token Generation using Playwright
In this step, we automate the process of generating authentication tokens using Playwright. Many APIs require login-based tokens (like cookies or bearer tokens), and managing them manually can be difficult — especially when they expire frequently.
The TokenManager class handles this by:
Logging into the application automatically using Playwright.
Extracting authentication cookies (OauthHMAC, OauthExpires, BearerToken).
Storing the token in a local JSON file for reuse.
Refreshing the token automatically when it expires.
This ensures that your API tests always use a valid token without manual updates, making the framework fully automated and CI/CD ready.
package org.Spurqlabs.Utils;
import java.io.*;
import java.nio.file.*;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import com.microsoft.playwright.*;
import com.microsoft.playwright.options.Cookie;
public class TokenManager {
private static final ThreadLocal<String> tokenThreadLocal = new ThreadLocal<>();
private static final ThreadLocal<Long> expiryThreadLocal = new ThreadLocal<>();
private static final String TOKEN_FILE = "token.json";
private static final long TOKEN_VALIDITY_SECONDS = 30 * 60; // 30 minutes
public static String getToken() {
String token = tokenThreadLocal.get();
Long expiry = expiryThreadLocal.get();
if (token == null || expiry == null || Instant.now().getEpochSecond() >= expiry) {
// Try to read from a file (for multi-JVM/CI)
Map<String, Object> fileToken = readTokenFromFile();
if (fileToken != null) {
token = (String) fileToken.get("token");
expiry = ((Number) fileToken.get("expiry")).longValue();
}
// If still null or expired, fetch new
if (token == null || expiry == null || Instant.now().getEpochSecond() >= expiry) {
Map<String, Object> newToken = generateAuthTokenViaBrowser();
token = (String) newToken.get("token");
expiry = (Long) newToken.get("expiry");
writeTokenToFile(token, expiry);
}
tokenThreadLocal.set(token);
expiryThreadLocal.set(expiry);
}
return token;
}
private static Map<String, Object> generateAuthTokenViaBrowser() {
String bearerToken;
long expiry = Instant.now().getEpochSecond() + TOKEN_VALIDITY_SECONDS;
int maxRetries = 2;
int attempt = 0;
Exception lastException = null;
while (attempt < maxRetries) {
try (Playwright playwright = Playwright.create()) {
Browser browser = playwright.chromium().launch(new BrowserType.LaunchOptions().setHeadless(true));
BrowserContext context = browser.newContext();
Page page = context.newPage();
// Robust wait for login page to load
page.navigate(FrameworkConfigReader.getFrameworkConfig("BaseUrl"), new Page.NavigateOptions().setTimeout(60000));
page.waitForSelector("#email", new Page.WaitForSelectorOptions().setTimeout(20000));
page.waitForSelector("#password", new Page.WaitForSelectorOptions().setTimeout(20000));
page.waitForSelector("button[type='submit']", new Page.WaitForSelectorOptions().setTimeout(20000));
// Fill a login form
page.fill("#email", FrameworkConfigReader.getFrameworkConfig("UserEmail"));
page.fill("#password", FrameworkConfigReader.getFrameworkConfig("UserPassword"));
page.waitForSelector("button[type='submit']:not([disabled])", new Page.WaitForSelectorOptions().setTimeout(10000));
page.click("button[type='submit']");
// Wait for either dashboard element or flexible URL match
boolean loggedIn;
try {
page.waitForSelector(".dashboard, .main-content, .navbar, .sidebar", new Page.WaitForSelectorOptions().setTimeout(20000));
loggedIn = true;
} catch (Exception e) {
// fallback to URL check
try {
page.waitForURL(url -> url.startsWith(FrameworkConfigReader.getFrameworkConfig("BaseUrl")), new Page.WaitForURLOptions().setTimeout(30000));
loggedIn = true;
} catch (Exception ex) {
// Both checks failed
loggedIn = false;
}
}
if (!loggedIn) {
throw new RuntimeException("Login did not complete successfully: dashboard element or expected URL not found");
}
// Extract cookies
String oauthHMAC = null;
String oauthExpires = null;
String token = null;
for (Cookie cookie : context.cookies()) {
switch (cookie.name) {
case "OauthHMAC":
oauthHMAC = cookie.name + "=" + cookie.value;
break;
case "OauthExpires":
oauthExpires = cookie.name + "=" + cookie.value;
if (cookie.expires != null && cookie.expires > 0) {
expiry = cookie.expires.longValue();
}
break;
case "BearerToken":
token = cookie.name + "=" + cookie.value;
break;
}
}
if (oauthHMAC != null && oauthExpires != null && token != null) {
bearerToken = oauthHMAC + ";" + oauthExpires + ";" + token + ";";
} else {
throw new RuntimeException("❗ One or more cookies are missing: OauthHMAC, OauthExpires, BearerToken");
}
browser.close();
Map<String, Object> map = new HashMap<>();
map.put("token", bearerToken);
map.put("expiry", expiry);
return map;
} catch (Exception e) {
lastException = e;
System.err.println("[TokenManager] Login attempt " + (attempt + 1) + " failed: " + e.getMessage());
attempt++;
try { Thread.sleep(2000); } catch (InterruptedException ignored) {}
}
}
throw new RuntimeException("Failed to generate auth token after " + maxRetries + " attempts", lastException);
}
private static void writeTokenToFile(String token, long expiry) {
try {
Map<String, Object> map = new HashMap<>();
map.put("token", token);
map.put("expiry", expiry);
String json = new Gson().toJson(map);
Files.write(Paths.get(TOKEN_FILE), json.getBytes());
} catch (IOException e) {
e.printStackTrace();
}
}
private static Map<String, Object> readTokenFromFile() {
try {
Path path = Paths.get(TOKEN_FILE);
if (!Files.exists(path)) return null;
String json = new String(Files.readAllBytes(path));
return new Gson().fromJson(json, new TypeToken<Map<String, Object>>() {}.getType());
} catch (IOException e) {
return null;
}
}
}
Step 7: Create Framework Config File
A good tester knows the use and importance of config files, and this framework uses one as well. For now, it holds the base URL, which the utility classes reuse over and over again. As you explore the framework and start automating new endpoints, you will realize that more values belong in the config file (for example, the user credentials read by the token utility above) rather than being hard-coded in tests.
The purpose of the config file is to make tests more maintainable and reusable. It also keeps the code modular and easier to understand: all configuration settings live in a single file, so updating a value updates it for every test at once. A minimal sketch of the config file and its reader is shown below.
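To make this concrete, here is a minimal sketch of what the config file and its reader might look like. The file name and location (src/test/resources/config.properties) are assumptions; the keys BaseUrl, UserEmail, and UserPassword are the ones referenced by the token utility above, and FrameworkConfigReader.getFrameworkConfig is the accessor used throughout the framework.

// config.properties (assumed location: src/test/resources/config.properties)
// BaseUrl=https://your-app.example.com
// UserEmail=qa.user@example.com
// UserPassword=********

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class FrameworkConfigReader {

    private static final Properties PROPS = new Properties();

    static {
        // Load the properties file once from the classpath so every test reuses the same values
        try (InputStream in = FrameworkConfigReader.class.getClassLoader()
                .getResourceAsStream("config.properties")) {
            if (in == null) {
                throw new IllegalStateException("config.properties not found on the classpath");
            }
            PROPS.load(in);
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Returns the value for a key such as "BaseUrl", "UserEmail" or "UserPassword"
    public static String getFrameworkConfig(String key) {
        String value = PROPS.getProperty(key);
        if (value == null) {
            throw new IllegalArgumentException("Missing config key: " + key);
        }
        return value;
    }
}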
At this stage, we create the TestRunner class, which serves as the entry point to execute all Cucumber feature files. It uses TestNG as the test executor and integrates Cucumber for running BDD-style test scenarios.
The @CucumberOptions annotation defines:
features → Location of all .feature files.
glue → Packages containing step definitions and hooks.
plugin → Reporting options like JSON and HTML reports.
After execution, Cucumber automatically generates:
Cucumber.json → For CI/CD and detailed reporting.
Cucumber.html → A user-friendly HTML report showing test results.
This setup makes it easy to run all API tests and view clean, structured reports for quick analysis.
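As a reference, here is a minimal sketch of such a runner. The feature path and glue package names are assumptions you should adjust to your project layout; the report locations match the test-output folder described above.

import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;

@CucumberOptions(
        features = "src/test/resources/features",   // location of all .feature files
        glue = {"stepdefinitions", "hooks"},         // packages containing step definitions and hooks
        plugin = {
                "pretty",
                "json:test-output/Cucumber.json",    // machine-readable report for CI/CD
                "html:test-output/Cucumber.html"     // user-friendly HTML report
        },
        monochrome = true
)
public class TestRunner extends AbstractTestNGCucumberTests {
}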
Once the framework is set up, you can execute your API automation suite directly from the command line using Maven. Maven handles compiling, running tests, and generating reports automatically.
Run All Tests –
To run all Cucumber feature files:
mvn clean test
clean → Deletes old compiled files and previous reports for a fresh run.
test → Executes all test scenarios defined in your project.
After running this command, Maven will trigger the Cucumber TestRunner, execute all scenarios, and generate reports in the test-output folder.
Run Tests by Tag –
Tags allow you to selectively run specific test scenarios or features. You can add tags like @api1, @smoke, or @regression in your .feature files to categorize tests.
Example:
@api1
Scenario: Verify POST API creates a record successfully
Given User sends "POST" request to "/api/v1/create" ...
Then User verifies the response status code is 201
To execute only scenarios with a specific tag, use:
mvn clean test -Dcucumber.filter.tags="@api1"
The framework will run only those tests that have the tag @api1.
You can combine tags for more flexibility:
@api1 or @api2 → Runs tests with either tag.
@smoke and not @wip → Runs smoke tests excluding work-in-progress scenarios.
This is especially useful when running specific test groups in CI/CD pipelines.
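For example, a nightly CI job could run only the smoke suite while skipping work-in-progress scenarios by combining the tags shown above:
mvn clean test -Dcucumber.filter.tags="@smoke and not @wip"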
View Test Reports
API Automation Testing Framework Report – After execution, Cucumber automatically generates detailed reports in the test-output directory:
Cucumber.html → User-friendly HTML report showing scenario results and logs.
Cucumber.json → JSON format report for CI/CD integrations or analytics tools.
You can open the report in your browser:
project-root/test-output/Cucumber.html
This section gives testers a clear understanding of how to run the full suite, run targeted scenarios using tags, and locate and read the generated reports.
An API automation testing framework ensures that backend services are functioning properly before the application reaches the end user. By integrating Cucumber, RestAssured, and Playwright, we have built a flexible and maintainable test framework that:
Supports BDD style scenarios.
Handles token-based authentication automatically.
Provides reusable utilities for API calls.
Generates rich HTML reports for easy analysis.
This hybrid setup helps QA engineers achieve faster feedback, maintain cleaner code, and enhance the overall quality of the software.
I am a Jr. SDET Engineer skilled in Manual and Automation Testing (UI & API). Proficient in Selenium, Cucumber, TestNG, Postman, RestAssured, Maven, SQL, GitHub, Jenkins, Java, JavaScript, HTML, and CSS. Experienced in CI/CD integration, framework design, and ensuring high-quality software delivery.
Manual Testing with Playwright MCP – Have you ever felt that a simple manual test should be less manual?
For years, quality assurance relied on pure human effort to explore, click, and record. But what if you could perform structured manual and exploratory testing, generate detailed reports, and even create test cases—all inside your Integrated Development Environment (IDE), using zero code?
I’ll tell you this: there’s a tool that can help us perform manual testing in a much more structured and easy way inside the IDE: Playwright MCP.
Section 1: End the Manual Grind – Welcome to AI-Augmented QA
The core idea is to pair a powerful AI assistant (like GitHub Copilot) with a tool that can control a real browser (Playwright MCP). This simple setup is done in only a few minutes.
The Essential Setup for Manual Testing with Playwright MCP: Detailed Steps
For this setup, you will integrate Playwright MCP as a tool that your AI agent can call directly from VS Code.
1. Prerequisites (The Basics)
VS Code installed on your system.
Node.js (LTS version recommended) installed on your machine.
2. Installing GitHub Copilot (The AI Client)
Open Extensions: In VS Code, navigate to the Extensions view (Ctrl+Shift+X or Cmd+Shift+X).
Search and Install: Search for “GitHub Copilot” and “GitHub Copilot Chat” and install both extensions.
Authentication: Follow the prompts to sign in with your GitHub account and activate your Copilot subscription.
GitHub Copilot is an AI-powered code assistant that acts almost like an AI pair programmer.
After successful installation and authentication, you will see something like the screenshot below.
3. Installing the Playwright MCP Server (The Browser Tool)
Playwright MCP (Model Context Protocol): This is the bridge that provides browser automation capabilities, enabling the AI to interact with the web page.
The most direct way to install the server and configure the agent is via the official GitHub page:
Navigate to the Source: Open your browser and search for the Playwright MCP Server official GitHub page (https://github.com/microsoft/playwright-mcp).
The One-Click Install: On the GitHub page, look for the "Install Server" button for VS Code.
Launch VS Code: Clicking this button will prompt you to open Visual Studio Code.
Final Step: Inside VS Code, select the “Install server” option from the prompt to automatically add the MCP entry to your settings.
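For reference, the MCP entry added to your settings typically looks something like the sketch below. This is based on the playwright-mcp README; the exact wrapper key and file location vary by VS Code/Copilot version, so treat it as illustrative rather than exact.

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}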
To verify successful installation and configuration, follow these steps:
Click on the "Configure Tool" icon. After clicking it, you will see the Playwright MCP tools, as shown in the image below.
Click on the "Settings" icon. This opens the Playwright MCP "Configuration (JSON)" file, where you can start, stop, and restart the server, as shown in the image below.
Once the Playwright MCP Server is successfully configured and installed, you will see output like the screenshot below.
This complete setup allows the Playwright MCP Server to act as the bridge, providing browser automation capabilities and enabling the GitHub Copilot Agent to interact with the web page using natural language.
Section 2: Phase 1: Intelligent Exploration and Reporting
The first, most crucial step is to let the AI agent, powered by the Playwright MCP, perform the exploratory testing and generate the foundational report. This immediately reduces the tester’s documentation effort.
Instead of manually performing steps, you simply give the AI Agent your test objective in natural language.
The Exploration Workflow:
Exploration Execution: The AI uses discrete Playwright MCP tools (like browser_navigate, browser_fill, and browser_click) to perform each action in a real browser session.
Report Generation: Immediately after execution, the AI generates an Exploratory Testing Report summarizing the detailed steps taken, its observations, and any issues found.
Our focus is simple: Using Playwright MCP, we reduce the repetitive tasks of a Manual Tester by automating the recording and execution of manual steps.
Execution Showcase: Exploration to Report
Input (The Prompt File for Exploration)
This prompt directs the AI to execute the manual steps and generate the initial report.
Prompt for Exploratory Testing
Exploratory Testing: (Use Playwright MCP)
Navigate to https://www.demoblaze.com/. Use Playwright MCP Compulsory for Exploring the Module <Module Name> and generate the Exploratory Testing Report in a .md file in the Manual Testing/Documentation Directory.
Output (The Generated Exploration Report)
The AI generates a structured report summarizing the execution.
Live Browser Snapshot from Playwright MCP Execution
Once the initial Exploration Report is generated, QA teams move to design specific, reusable assets based on these findings.
1. Test Case Design (based on the Exploration Report)
The Exploration Report provides the evidence needed to design formal Test Cases. The report’s observations are used to create the Expected Results column in your CSV or Test Management Tool.
The focus is now on designing reusable test cases, which can be stored in a CSV format.
These manually designed test cases form the core of your execution plan.
We need to provide the Exploratory Report as a reference when designing the test cases.
Drag and drop the Exploratory Report File as context as shown in the image below.
Input (Test Case Generation Prompt)
This prompt instructs the AI to generate formal test cases from the Exploratory Report, using the template below.
Role: Act as a QA Engineer.
Based on Exploratory report Generate the Test cases in below of Format of Test Case Design Template
=======================================
🧪 TEST CASE DESIGN TEMPLATE For CSV File
=======================================
Test Case ID – Unique identifier for the test case (e.g., TC_001)
Test Case Title / Name – Short descriptive name of what is being tested
Preconditions / Setup – Any conditions that must be met before test execution
Test Data – Input values or data required for the test
Test Steps – Detailed step-by-step instructions on how to perform the test
Expected Result – What should happen after executing the steps
Actual Result – What happened (filled after execution)
Status – Pass / Fail / Blocked (result of the execution)
Priority – Importance of the test case (High / Medium / Low)
Severity – Impact level if the test fails (Critical / Major / Minor)
Test Type – (Optional) e.g., Functional, UI, Negative, Regression, etc.
Execution Date – (Optional) When the test was executed
Executed By – (Optional) Name of the tester
Remarks / Comments – Any additional information, observations, or bugs found
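Based on this template, the header row of the resulting CSV might look roughly like the line below (a sketch; the AI may order or abbreviate the columns differently):
Test Case ID,Test Case Title / Name,Preconditions / Setup,Test Data,Test Steps,Expected Result,Actual Result,Status,Priority,Severity,Test Type,Execution Date,Executed By,Remarks / Comments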
Output (The Generated Test cases)
The AI generates structured test cases.
2. Test Plan Creation
The created test cases are organized into a formal Test Plan document, detailing the scope, environment, and execution schedule.
Input (Test Plan Generation Prompt)
This prompt instructs the AI to generate a complete Test Plan for the test cases designed above.
Role: Act as a QA Engineer.
- Use clear, professional language.
- Include examples where relevant.
- Keep the structure organized for documentation.
- Format can be plain text or Markdown.
- Assume the project is a web application with multiple modules.
Generate Test Cases in form of <Module Name>.txt in the Manual Testing/Documentation Directory.
Instructions for AI:
- Generate a complete Test Plan for a software project for our Test Cases.
- Include the following sections:
1. Test Plan ID
2. Project Name
3. Module/Feature Overview
4. Test Plan Description
5. Test Strategy (Manual, Automation, Tools)
6. Test Objectives
7. Test Deliverables
8. Testing Schedule / Milestones
9. Test Environment
10. Roles & Responsibilities
11. Risk & Mitigation
12. Entry and Exit Criteria
13. Test Case Design Approach
14. Metrics / Reporting
15. Approvals
Output (The Generated Test plan)
The AI generates a structured test plan for the designed test cases.
3. Test Cases Execution
This is where the Playwright MCP delivers the most power: executing the formal test cases designed in the previous step.
Instead of manually clicking through the steps defined in the Test Plan, the tester uses the AI agent to execute the written test case (e.g., loaded from the CSV) in the browser.
The Playwright MCP ensures the execution of those test cases is fast, documented, and accurate.
Any failures lead to immediate artifact generation (e.g., defect reports).
Input (Targeted Execution Prompt)
This prompt instructs the AI to execute the test cases attached as context and generate a Test Execution Report.
Use Playwright MCP to Navigate “https://www.demoblaze.com/” and Execute Test Cases attached in context and Generate Test Execution Report.
First, drag and drop the test case file as a reference, as shown in the image below.
Live Browser Snapshot from Playwright MCP Execution
Output (The Generated Test Execution report)
The AI generates a structured test execution report for the designed test cases.
4. Defect Reporting and Tracking
If a Test Case execution fails, the tester immediately leverages the AI Agent and Playwright MCP to generate a detailed defect report, which is a key task in manual testing.
Execution Showcase: Formal Test Case Run (with Defect Reporting)
We will now execute a Test Case step, intentionally simulating a failure to demonstrate the automated defect reporting capability.
Input (Targeted Execution Prompt for Failure)
This prompt asks the AI to execute a check and explicitly requests a defect report and a screenshot if the assertion fails.
Refer to the test cases provided in the Context and Use Playwright MCP to execute the test, and if there is any defect, then generate a detailed defect Report. Additionally, I would like a screenshot of the defect for evidence.
Output (The Generated Defect report and Screenshots as Evidence)
The AI generates a structured defect report for the failed test case, with screenshots attached as evidence.
Conclusion: Your Role is Evolving, Not Ending
Manual Testing with Playwright MCP is not about replacing the manual tester; it’s about augmenting their capabilities. It enables a smooth, documented, and low-code way to perform high-quality exploratory testing with automated execution.
Focus on Logic: Spend less time on repetitive clicks and more time on complex scenario design.
Execute Instantly: Use natural language prompts to execute tests in the browser.
Generate Instant Reports: Create structured exploratory test reports from your execution sessions.
Future-Proof Your Skills: Learn to transition seamlessly to an AI-augmented testing workflow.
It’s time to move beyond the traditional—set up your Playwright MCP today and start testing with the power of an AI-pair tester!
Automation always comes with surprises. Recently, I stumbled upon one such challenge while working on a scenario that required automating PDF download using Playwright to verify a PDF download functionality. Sounds straightforward, right? At first, I thought so too. But the web application I was dealing with had other plans.
The Unexpected Complexity
Instead of a simple file download, the application displayed the report PDF inside an iframe. Looking deeper, I noticed a blob source associated with the PDF. Initially, it felt promising—maybe I could just fetch the blob and save it. But soon, I realized the blob didn’t actually contain the full PDF file. It only represented the layout instructions, not the content itself.
Things got more interesting (and complicated) when I found out that the entire PDF was rendered inside a canvas. The content wasn’t static—it was dynamically displayed page by page. This meant I couldn’t directly extract or save the file from the DOM.
At this point, downloading the PDF programmatically felt like chasing shadows.
The Print Button Dilemma
To make matters trickier, the only straightforward option available on the page was the print button. Clicking it triggered the system’s file explorer dialog, asking me to manually pick a save location. While that works fine for an end-user, for automation purposes it was a dealbreaker.
I didn’t want my automation scripts to depend on manual interaction. The whole point of this exercise was to make the process seamless and repeatable.
Digging Deeper: A Breakthrough
After exploring multiple dead ends, I finally turned my focus back to Playwright itself. That’s when I discovered something powerful—Playwright’s built-in capability to generate PDFs directly from a page.
The key was:
Wait for the report to open in a new tab (triggered by the app after selecting “Print View”).
Bring this new page into focus and make sure all content was fully rendered.
Use Playwright’s page.pdf() function to export the page as a properly styled PDF file.
The Solution in Action
Here’s the snippet that solved it:
// Node.js modules used below for building the save path and creating the folder
const path = require("path");
const fs = require("fs");

// Wait for new tab to open and capture it
const [newPage] = await Promise.all([
context.waitForEvent("page"),
event.Click("(//span[text()='OK'])[1]", page), // triggers tab open
]);
global.secondPage = newPage;
await global.secondPage.bringToFront();
await global.secondPage.waitForLoadState("domcontentloaded");
// Use screen media for styling
await global.secondPage.emulateMedia({ media: "screen" });
// Path where you want the file saved
const downloadDir = path.resolve(__dirname, "..", "Downloads", "Reports");
if (!fs.existsSync(downloadDir)) fs.mkdirSync(downloadDir, { recursive: true });
const filePath = path.join(downloadDir, "report.pdf");
// Save as PDF
await global.secondPage.pdf({
path: filePath,
format: "A4",
printBackground: true,
margin: {
top: "1cm",
bottom: "1cm",
left: "1cm",
right: "1cm",
},
});
console.log(`✅ PDF saved to: ${filePath}`);
Key Highlights of the Implementation
Capturing the New Tab – The Print/PDF Report option opened the report in a new browser tab. Instead of losing control, we captured it with context.waitForEvent("page") and stored it in a global variable, global.secondPage. This ensured smooth access to the report tab for further processing.
Switching to Print View – The dropdown option was switched to Print View to ensure the PDF was generated in the correct layout before proceeding with the export.
Emulating Screen Media – To preserve the on-screen styling (instead of print-only styles), we used page.emulateMedia({ media: "screen" }). This allowed the generated PDF to look exactly like what users see in the browser.
Saving the PDF to a Custom Path – A custom folder structure was created dynamically using the Node.js path and fs modules. The PDFs were named systematically and stored under a dedicated folder (Downloads/Reports in the snippet above; in our project, Downloads/ImageTrend/<date>/), ensuring organized storage.
Full-Page Export with Print Background – Using Playwright's page.pdf() method, we captured all pages of the report (not just the visible one), along with background colors and styles for accurate representation.
Clean Tab Management – Once the PDF was saved, the secondary tab (global.secondPage) was closed, bringing the focus back to the original tab for processing the next incident report.
What I Learned
This challenge taught me something new: PDFs in web apps aren’t always what they seem. Sometimes they’re iframes, sometimes blob objects, and in trickier cases, dynamically rendered canvases. Trying to grab the raw file won’t always work.
But with Playwright, there’s a smarter way. By leveraging its ability to generate PDFs from a live-rendered page, I was able to bypass the iframe/blob/canvas complexity entirely and produce consistent, high-quality PDF files.
Conclusion:
What started as a simple “verify PDF download” task quickly turned into a tricky puzzle of iframes, blobs, and canvases. But the solution I found—automating PDF download using Playwright with its built-in PDF generation—was not just a fix, it was an eye-opener.
It reminded me once again that automation isn’t just about tools; it’s about understanding the problem deeply and then letting the tools do what they do best.
This was something new I learned, and I wanted to share it with all of you. Hopefully, it helps the next time you face a similar challenge.