A domain name is an online address that offers a user-friendly way to access a website. In the context of verified domains in Python, this means confirming that a domain is legitimate and active using Python programming techniques. On the internet, an IP address is a unique string of numbers (and, in IPv6, letters) used to reach a website from any device or location. Because an IP address is hard to remember and type correctly, a domain name represents it in a word-based format that is much easier for users to handle. When a user types a domain name into a browser's address bar, the browser uses the IP address it represents to access the site.
The Domain Name System (DNS) maps human-readable domain names (in URLs or email addresses) to IP addresses. A domain name is the unique identity of a website or organization. It is still possible to type an IP address into a browser to reach a website, but most people prefer an internet address made of easy-to-remember words, called domain names, for example Google or Amazon. Domain names come with different extensions, for example Amazon.in or Google.com.
A domain also serves several important purposes on the internet. Here are some key reasons why a domain is necessary:
Identification: Domain names are easier to remember than IP addresses, making it simpler to locate resources online.
Branding: A domain name is vital for building a professional online identity, reflecting the nature and purpose of a business.
Credibility: Owning a domain enhances professionalism, showing commitment to a unique online presence.
Email Address: A personalized email linked to a domain looks more professional and builds trust.
Control: Domain ownership gives you control over hosting, email management, and associated content.
SEO: A relevant, keyword-rich domain can improve search engine visibility.
Portability: Owning a domain allows you to change hosting providers while keeping the same web address, ensuring consistency.
Why do we need domain verification?
Verifying a domain name is a key step for businesses and individuals looking to establish credibility, keep control over their content, and enhance their presence on digital platforms.
Let's understand this with an example:
Verifying your domain with Facebook ensures that only rightful parties can edit link previews for your content.
This lets you manage editing permissions over links and content and prevents misuse of your domain. It covers both organic and paid content.
These verified editing permissions ensure that only trusted employees and partners represent your brand.
Domain Verification Techniques:
Domain verification is a crucial step to make sure your domain is active and not expired. When a domain is verified, users are automatically added to the Universal Directory, so they don’t have to wait for personal approval to log in. This process helps confirm that the domain is legitimate and prevents issues related to fake or misused domains. These are some techniques through which we can verify our domain.
WHOIS Lookup
Requests & Sockets
DNS Verification
To find verified domains using Python, you can employ any of the approaches listed below.
1) WHOIS Lookup:
Use the WHOIS module in Python to perform a WHOIS lookup on a domain. This method provides information about the domain registration, including the registrar’s details and registration date.
Install the whois module using pip install python-whois.
import whois

def check_domain(domain):
    try:
        # Attempt to retrieve registration information for the given domain
        # using the 'whois' library.
        domain_info = whois.whois(domain)
        # Check if the domain status is 'ok' (verified). Note that some
        # registries return a list of status strings rather than a single value.
        if domain_info.status == 'ok':
            print(f"{domain} is a verified domain.")
        else:
            print(f"{domain} is not a verified domain.")
    # Handle exceptions raised by the 'whois' library (PywhoisError).
    except whois.parser.PywhoisError:
        print(f"Error checking {domain}.")

check_domain("google.com")
2) Requests & Sockets:
Use Python's socket module to check whether a domain resolves and accepts connections. The socket module ships with Python's standard library, so no extra installation is needed; the requests library, if you also want HTTP-level checks, can be installed with pip install requests.
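The original code listing for this approach was not preserved here, so below is a minimal sketch of the function the next paragraph describes; the function name verify_domain and the five-second timeout are illustrative:

import socket

def verify_domain(hostname):
    try:
        # Resolve the hostname to an IP address.
        ip_address = socket.gethostbyname(hostname)
        # Open a TCP connection to that address on port 80 to confirm
        # the host is actually reachable.
        with socket.create_connection((ip_address, 80), timeout=5):
            return True
    except OSError:
        # Covers resolution failures (socket.gaierror) and connection errors.
        return False

print(verify_domain("google.net"))  # True for a valid, reachable domain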
Here we pass the hostname as a parameter: socket.gethostbyname(hostname) gives us the IP address for the host, and socket.create_connection((ip_address, 80)) then opens a TCP connection to that address on port 80. When we pass a hostname or domain name with the correct extension to this function, for example "google.net", it returns True; if the hostname/domain is incorrect, it returns False.
To verify a domain in Python, you can use various approaches depending on the type of verification required. Here is another common method: DNS verification.
DNS Verification:
DNS verification involves checking if a specific DNS record exists for the domain. For example, you might check for a TXT record with a specific value.
import dns.resolver

def verify_dns(domain, record_type, expected_value):
    try:
        # dns.resolver.resolve attempts to resolve the specified DNS record
        # type for the given domain.
        answers = dns.resolver.resolve(domain, record_type)
        for rdata in answers:
            # TXT records come back wrapped in double quotes, so strip them
            # before comparing against the expected value.
            if rdata.to_text().strip('"') == expected_value:
                return True
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        # Domain does not exist, or it has no record of the requested type.
        pass
    return False

domain = "google.com"
record_type = "TXT"
expected_value = "v=spf1 include:_spf.google.com ~all"
This is a valid example of the above function: with the domain "google.com", the function returns True when the record type is "TXT" and the expected value matches Google's SPF TXT record. If no match is found, or if the domain does not exist (raising an NXDOMAIN exception), it returns False.
A domain name is a crucial component of your online identity, providing a way for people to find and remember your website or online services. Whether for personal use, business, or any other online endeavor, having a domain name is an essential part of establishing a presence on the internet.
Each approach serves a distinct purpose in verifying a domain's legitimacy, so choose the verification method based on your specific use case and requirements. DNS verification is often used to prove domain ownership, while a WHOIS lookup provides essential registration details.
Jyotsna is a Jr. SDET with expertise in manual and automation testing for both web and mobile. She has worked with Python, Selenium, MySQL, BDD, Git, and HTML & CSS. She loves to explore new technologies and products that shape future technologies.
What is a Computer System Validation Process (CSV)?
Computer System Validation or CSV is also called software validation. CSV is a documented process that tests, validates, and formally documents regulated computer-based systems, ensuring these systems operate reliably and perform their intended functions consistently, accurately, securely, and traceably across various industries.
Computer System Validation Process is a critical process to ensure data integrity, product quality, and compliance with regulations.
Why Do We Need Computer System Validation Process?
Validation is essential to maintaining the quality of your products. To protect your computer systems from damage, shutdowns, distorted research results, product and sample loss, unstable conditions, and other potential negative outcomes, you must perform CSV proactively.
Timely and wise treatment of failures in computer systems is essential, as they can cause manufacturing facilities to shut down, lead to financial losses, result in company downsizing, and even jeopardize lives in healthcare systems.
So, the Computer System Validation process is necessary in light of the following key points:
Regulatory Compliance: CSV ensures compliance with regulations such as Good Manufacturing Practices (GMP), Good Clinical Practices (GCP), and Good Laboratory Practices (GLP). By validating systems, organisations adhere to industry standards and legal requirements.
Risk Mitigation: By validating systems, organisations reduce the risk of errors, data loss, and system failures. QA professionals play a vital role in identifying and mitigating risks during the validation process.
Data Integrity: CSV safeguards data accuracy, completeness, and consistency. In regulated industries, reliable data is essential for decision-making, patient safety, and product quality.
Patient Safety: In healthcare, validated systems are critical for patient safety. From electronic health records to medical devices, ensuring system reliability is critical.
How to implement the Computer System Validation (CSV) Process?
You can consider your computer system validation when you start a new product or upgrade an existing product. Here are the key phases that you will encounter in the Computer System Validation process:
Planning: Establishing a project plan outlining the validation approach, resources, and timelines. Define the scope of validation, identify stakeholders, and create a validation plan. This step lays the groundwork for the entire process.
Requirements Gathering: Documenting user requirements and translating them into functional specifications and technical specifications.
Design and Development: Creating detailed design and technical specifications. Develop or configure the system according to the specifications. This step involves coding, configuration, and customization.
Testing: Executing installation, operational, and performance qualification tests. Conduct various tests to verify the system’s functionality, performance, and security. Types of testing include unit testing, integration testing, and user acceptance testing.
Documentation: Create comprehensive documentation, including validation protocols, test scripts, and user manuals. Proper documentation is essential for compliance.
Operation: Once validated, you can put the system into operation. Regular maintenance and periodic reviews are necessary to ensure ongoing compliance.
Approaches to Computer System Validation(CSV):
As discussed, CSV involves several steps, including planning, specification, programming, testing, documentation, and operation. Perform each step correctly, as each one is important. CSV can be approached in various ways:
Risk-Based Approach: Prioritize validation efforts based on risk assessment. Identify critical functionalities and focus validation efforts accordingly. This approach involves critical thinking; evaluating hardware, software, personnel, and documentation; and generating data that translates into knowledge about the system.
Life Cycle Approach: This approach breaks the process down into the life cycle phases of a computer system (concept, development, testing, production, and maintenance) and validates throughout them. This supports continuous compliance and quality.
Scripted Testing: This approach can be robust or limited. Robust scripted testing includes evidence of repeatability, traceability to requirements, and auditability. Limited scripted testing is a hybrid approach that scales scripted and unscripted testing according to the risk of the system.
“V”- Model Approach: Align validation activities with development phases. The ‘V’ model emphasizes traceability between requirements, design and testing.
Process-Based Approach: Validate based on the system's purpose and the processes it serves. First, one needs to understand how the system interacts with users, data, and other systems.
GAMP (Good Automated Manufacturing Practice) Categories: Classify systems based on complexity. It provides guidance on validation strategies for different categories of software and hardware.
Documentation Requirements:
Here are the essential documents for CSV during its different phases:
Validation Planning:
Project Plan: Document outlining the approach, resources, timeline, and responsibilities for CSV.
User Requirements Specification (URS):
User Requirements Document: Defines what a system must do from the user's perspective. The system owner, end users, and quality assurance write it early in the validation process, before the system is created. The URS essentially serves as a blueprint for developers, engineers, and other stakeholders involved in the design, development, and validation of the system or product.
Functional Specification (FS):
Functional Requirements: A detailed description of system functions; this document describes how a system or component works and what functions it must perform. Developers use Functional Specifications (FSs) before, during, and after a project as a guideline and reference point while writing code.
Design Qualification (DQ):
A detailed description of the system architecture, database schema, hardware components, software modules, interfaces, and any algorithms or logic used in the system.
Functional Design Specification (FDS): Detailed description of how the system will meet the URS.
Technical Design Specification (TDS): Technical details of hardware, software, and interfaces
Configuration Specification (CS):
Specifies hardware, software, and network configuration settings, and how these settings address the requirements in the URS.
Installation Qualifications (IQ):
Installation Qualification Protocol: Document verifying that the system is installed correctly.
Operational Qualification (OQ):
Operational Qualification Protocol: Document verifying that the system functions as intended in its operational environment and is fit to be deployed to consumers.
Performance Qualification (PQ):
Performance Qualification Protocol: Document verifying that the system consistently performs according to predefined specifications under simulated real-world conditions.
Risk Scenarios:
Identification and evaluation of potential risks associated with the system and its use, along with mitigation strategies.
Standard Operating Procedures (SOPs):
SOP Document: A set of step-by-step instructions for system use, maintenance, backup, security, and disaster recovery.
Change Control:
Change control refers to the systematic process of managing any modifications or adjustments made to a project, system, product, or service. It ensures that all proposed changes undergo structured evaluation, approval, implementation, impact assessment, and documentation.
Training Records:
Documentation of training provided to personnel on system operation and maintenance.
Audit Trails:
An audit trail is a sequential record of activities that have affected a device, procedure, event, or operation. It can be a set of records, a destination, or a source of records. Audit trails can include date and time stamps and can capture almost any type of work activity or process, whether automated or manual.
Periodic Review:
Scheduled reviews of the system to ensure continued compliance and performance. Periodic review ensures that your procedures are aligned with the latest regulations and standards, reducing the risk of noncompliance, and can help identify areas where your procedures fall short of the regulations.
Validation Summary Report (VSR):
Validation Summary Report: Consolidates all validation activities performed and the results obtained. It is a key document that demonstrates the system meets its intended use and complies with regulations and standards. It also provides evidence of the system's quality and reliability and records any deviations or issues encountered during the validation process.
It provides a conclusion on whether the system meets predefined acceptance criteria.
Traceability Matrix (TM):
Links validation documentation (URS, FRS, DS, IQ, OQ, PQ) to requirements, test scripts, and results.
Also known as a Requirements Traceability Matrix (RTM) or Cross Reference Matrix (CRM).
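For illustration, each row of a traceability matrix links one requirement to the specifications and tests that cover it; the entries below are hypothetical:

| URS Requirement | Functional Spec | Test Script | Result |
| URS-001: User login | FS-010 | OQ-TC-014 | Pass |
| URS-002: Audit trail capture | FS-022 | OQ-TC-031 | Pass |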
By following these processes and documentation requirements, organizations can ensure that their computer systems are validated to operate effectively, reliably, and in compliance with regulatory requirements.
Conclusion
The Computer System Validation (CSV) process is essential for ensuring that computer systems in regulated industries work correctly and meet safety standards. By following a structured validation process, organizations can protect data integrity, improve product quality, and reduce the risk of system failures.
Moreover, with ongoing validation and regular reviews, companies can stay compliant with regulations and adapt to new challenges. Ultimately, investing in a solid Computer System Validation approach not only enhances system reliability but also shows a commitment to quality and safety for users and stakeholders alike.
Trupti is a Sr. SDET at SpurQLabs with 9 years of overall experience, mainly in .NET web application development, UI test automation, and manual testing. She has hands-on experience testing web applications with Selenium, SpecFlow, and Playwright BDD in C#.
Right Test Automation Tools – Automation testing is becoming increasingly essential for accelerating release cycles and enhancing software quality. While it can save significant time and effort, the success of automation largely depends on choosing the right tool for the job. Rather than opting for the most popular option, it's crucial to select a tool that aligns with your specific project needs.
Here’s a simple breakdown of the key factors to consider for choosing the Right Test Automation Tools.
Start by asking: What does my project really need?
1. Understand Your Project Requirements
Before anything, get a clear picture of what your project needs in terms of testing.
Application Type: Are you testing a web, mobile, or desktop app? Some tools focus on one, while others handle multiple platforms.
For example:
Web apps may also need cross-browser testing or UI/Usability checks.
Mobile apps might require testing across Android, iOS, and tablets; will you use real devices or emulators?
Type & Level of Testing: What kind of testing does your project demand — whether it’s functional, non-functional, regression, or integration?
Functional Testing: Make sure the tool supports the platforms and technologies your app uses (e.g., APIs, databases).
Non-Functional Testing: You'll also want a tool that can handle performance testing, load testing, and security testing.
Regression Testing: Consider a tool that simplifies updating test scripts as the application evolves.
Technology Stack: The tool should work well with the technology your application is built on.
Example:
Ensure it supports programming languages your team knows (Java, Python, C#) and integrates smoothly with your CI/CD pipelines (Jenkins, GitLab).
If your app uses Angular, Protractor was historically a good fit (though it has since been deprecated).
2. Mind Your Budget
Automation tools come with various costs, and it’s important to budget wisely.
Learning Time: If the tool is easy to learn, your team can become productive faster, saving both time and money.
Efficiency: Tools that make it quick and simple to create and maintain test cases will save resources in the long run.
Human Resources: Consider AI-based or low-code/no-code tools that reduce the need for manual intervention and specialized skills, which can lower costs.
Maintenance Costs: Don't forget long-term costs for upgrades, support, and maintenance throughout the project.
Open-Source vs Paid: Open-source tools can help reduce costs upfront, while paid tools often offer advanced features, support, and flexible pricing. Some offer free trials or team subscriptions to give you a chance to evaluate before committing.
3. Consider Your Team’s Skill Set
The tool you choose should match your team’s skill set.
Beginner Team: If your team is new to automation, opt for low-code or codeless tools that are user-friendly and quick to adopt.
Advanced Team: If your team is well experienced, go for a tool with more customization options to take full advantage of their expertise.
The ease of adoption directly impacts your team’s productivity and the overall success of your automation efforts.
4. Scalability and Maintenance
Automation isn’t a one-time activity. Over time, your test cases will need updates.
Test Case Maintenance: As your app evolves, old test cases may no longer find bugs (“pesticide paradox”). Look for tools that make it easy to update and maintain test scripts.
Self-Healing Abilities: Some tools can automatically adapt to minor changes in your application, reducing the need for constant script updates.
Customization: Choose tools that allow users to customize their tests based on their skills and project needs, so both beginners and experts can work effectively.
5. Integration with Test Case Management, Defect Management and Version Control Systems
The right tool should integrate smoothly with your Test Case Management, defect management and version control systems.
Test Case Management: Ensure the tool integrates with test case management tools so that tests can be marked as automated, execution reports can be generated, and so on.
Defect Reporting: Ensure the tool can easily track and report bugs.
Version Control: Some tools let you track changes over time, so you can compare previous and current versions. This can be crucial for debugging and maintaining test integrity.
6. Collaboration and Communication
Collaboration between teams is key for successful automation. Look for tools with features that improve teamwork.
Automated Notifications: Some tools notify team members of updates or executions in real time, keeping everyone on the same page.
Cross-Department Collaboration: Tools with shared dashboards or collaborative features can improve team coordination.
7. Robust Reporting Mechanism
Detailed reports are a must! You’ll want to quickly identify problem areas and track progress.
Step-by-Step Logs: Look for tools that provide step-by-step logs, screenshots, video recordings, and error logs.
Graphical visualizations: Visual reports provide an instant overview of testing results, helping you identify issues faster.
8. AI Integration
AI-driven tools can significantly enhance automation by:
Auto-generating code: Reducing the time needed for script creation.
Improving test coverage: By generating various combinations of test data and scenarios.
Self-healing: Automatically adjusting test scripts when application elements change, reducing maintenance effort.
Conclusion
Selecting the right automation tool is more than just picking the most popular option. By understanding your project requirements, budget, team skills, and long-term scalability needs, you can make an informed choice. The right tool will not only fit your technical needs but also help your team work more efficiently and deliver higher-quality products faster.
Manisha is a Lead SDET at SpurQLabs with 3.5 years of overall experience in UI test automation, mobile test automation, manual testing, database testing, API testing, and CI/CD. She has proven expertise in creating and maintaining test automation frameworks for mobile, web, and REST APIs in Java, C#, Python, and JavaScript.
Building a solenoid control system with a Raspberry Pi to automate screen touch means using the Raspberry Pi as the main controller for IoT Solenoid Touch Control. This system uses relays to control solenoids based on user commands, allowing for automated and accurate touchscreen actions. The Raspberry Pi is perfect for this because it’s easy to program and can handle the timing and order of solenoid movements, making touchscreen automation smooth and efficient. Additionally, this IoT Solenoid Touch Control system is useful in IoT (Internet of Things) applications, enabling remote control and monitoring, and enhancing the versatility and functionality of the setup.
Components Required:
Raspberry Pi (Any model with GPIO pins):
In our system, the Raspberry Pi acts as the master unit, automating screen touches with solenoids and providing a central control hub for hardware interactions. Its ability to seamlessly establish SSH connections and dispatch commands makes it highly efficient in integrating with our framework.
Key benefits include:
Effective Solenoid Control: The Raspberry Pi oversees and monitors solenoid operations, ensuring precise and responsive automation.
Remote Connectivity: With internet access and the ability to connect to other devices, the Raspberry Pi enables remote control and monitoring, enhancing flexibility and convenience.
Command Validation and Routing: Upon receiving commands, the Raspberry Pi validates them and directs them to the appropriate hardware or slave units. For instance, it can forward a command to check the status of a smart lock, process the response, and relay the information back to the framework.
Solenoid Holder (to fix the solenoid in place):
A solenoid holder is crucial for ensuring the stability, protection, and efficiency of a solenoid control system. It simplifies installation and maintenance while improving the overall performance and extending the solenoid’s lifespan.
In this particular setup, the solenoid holders are custom-manufactured to meet the specific requirements of my system. Different screen setups may require differently designed holders.
Incorporating a solenoid holder in your Raspberry Pi touchscreen control system results in a more robust, reliable, and user-friendly solution.
Solenoid (Voltage matching your power supply):
Integrating solenoids into a Raspberry Pi touchscreen setup offers an effective method for adding mechanical interactivity and automating screen touches. To ensure optimal performance, it’s essential to choose a solenoid with the right voltage, current rating, and size for your specific application.
Whether you’re automating tasks, enhancing user experience, or implementing security features, solenoids play a vital role in achieving your project goals. With careful integration and precise control, they enable you to create a dynamic and responsive system.
Relay Module (Matching solenoid voltage and current rating):
A relay module acts as a switch controlled by the Raspberry Pi, enabling safe and isolated control of higher-power solenoids. To ensure reliable operation, choose a relay that can handle the solenoid’s current requirements.
Relay modules simplify complex wiring by providing clear connection points for your Raspberry Pi, power supply, and the devices you wish to control. These modules often come with multiple relays (e.g., 1, 2, 4, or 8 channels), allowing independent control of several devices.
Key terminals include:
COM (Common): The common terminal of the relay switch, typically connected to the power supply unit you want to switch.
NO (Normally Open): Disconnected from the COM terminal by default. When the relay is activated, the NO terminal connects to COM, completing the circuit for your device.
NC (Normally Closed): Connected to COM in the unactivated state. When the relay activates, the connection between NC and COM breaks.
Touchscreen display:
Touchscreens are like interactive windows on our devices. Imagine a smooth surface that reacts to your fingertip. This is the magic of touchscreens. They use hidden sensors to detect your touch and tell the device where you pressed. This lets you tap icons, swipe through menus, or even draw pictures – all directly on the screen. No more hunting for tiny buttons, just a natural and intuitive way to control your smartphones, tablets, and many other devices.
Breadboard and Jumper Wires:
Breadboard and jumper wires act as your temporary electronics workbench. They let you connect components without soldering, allowing for easy prototyping and testing. You can push wires into the breadboard’s holes to create circuits, making modifications and troubleshooting a breeze before finalizing the connections.
Voltage level Converter:
In our project, the voltage level converter plays a critical role in ensuring communication between the Raspberry Pi and the relay module. The relay module, like some other devices, needs a specific voltage (5V) to understand and respond to commands. However, the Raspberry Pi’s GPIO pins speak a different voltage language – they can only output signals up to 3.3V.
Directly connecting the relay module to the Raspberry Pi’s GPIO pin wouldn’t work. The lower voltage wouldn’t be enough to activate the relay, causing malfunctions. Here’s where the voltage level converter comes in. It acts as a translator, boosting the Raspberry Pi’s 3.3V signal to the 5V required by the relay module. This ensures clear and compatible communication between the two devices, allowing them to work together seamlessly.
Power Supply (Separate for Raspberry Pi and Solenoid):
We need two separate power supplies for safe and reliable operation. A 5V 2A power supply powers the Raspberry Pi, providing the lower voltage the Pi needs to function. A separate 24V 10A switching mode power supply (SMPS) powers the solenoid; this higher voltage and current capacity are necessary for the solenoid's operation. Using separate power supplies isolates the Raspberry Pi's delicate circuitry from the potentially higher power fluctuations of the solenoid, ensuring safety and proper operation of both. Each power supply is chosen to meet the specific requirements of its component: 5V for the Pi and a higher voltage/current for the solenoid.
Circuit Diagram:
Power Supply Connections:
Connect the Raspberry Pi power supply to the Raspberry Pi.
Connect the positive terminal of the separate power supply to one side of the solenoid.
Connect the negative terminal of the separate power supply to the common terminal of the relay.
Relay Module Connections:
Connect the Vcc pin of the relay module to the 5V pin of the Raspberry Pi.
Connect the GND pin of the relay module to the GND pin of the Raspberry Pi.
Connect a chosen GPIO pin from the Raspberry Pi (like GPIO 18) to the IN terminal of the relay module. This pin will be controlled by your Python code.
Connect one side of the solenoid to the Normally Open (NO) terminal of the relay module. This means the solenoid circuit is only complete when the relay is activated.
Connecting the Raspberry Pi to the Level Converter:
Connect a GPIO pin from the Raspberry Pi (e.g., GPIO17) to one of the LV channels (e.g., LV1) on the level converter.
Connecting the Level Converter to the Relay Module:
Connect the corresponding high-voltage (HV) pin (e.g., HV1) on the level converter to the IN1 pin of the relay module.
Connect the HV pin on the level converter to the VCC pin of the relay module (typically 5V).
Connect the GND pin on the HV side of the level converter to the GND pin of the relay module.
Powering the Relay Module:
Ensure the relay module is connected to a 5V power supply. This can be done using the 5V pin from the Raspberry Pi or a separate 5V power supply if needed. Connect this to the VCC pin of the relay module.
Ensure the GND of the relay module is connected to the GND of the Raspberry Pi to have a common ground.
Connecting the Relay Module to the Solenoid and 24V Power Supply:
Connect the NO (normally open) terminal of the relay to one terminal of the solenoid.
Connect the COM (common) terminal of the relay to the negative terminal of the 24V power supply.
Connect the other terminal of the solenoid to the positive terminal of the 24V power supply.
Software Setup:
Raspberry Pi Setup:
Let’s make setting up our Raspberry Pi with Raspbian OS, connecting it to Wi-Fi, and enabling VNC feel as straightforward as baking a fresh batch of cookies. Here’s a step-by-step guide:
1. Install Raspbian OS Using Raspberry Pi Imager:
Download Raspberry Pi Imager:
Install the Imager on our computer—it’s like the secret ingredient for our Raspberry Pi recipe.
Prepare Our Micro-SD Card:
Insert our micro-SD card into our computer.
Open Raspberry Pi Imager.
Choose the Raspberry Pi OS version you want (usually the latest one).
Select our SD card. Click “Write” and let the magic happen. This process might take a few minutes.
Connect Our Raspberry Pi via LAN Cable:
Plug one end of an ethernet cable into our Raspberry Pi’s Ethernet port.
Connect the other end to our router (the one with the internet connection).
Power Up Our Raspberry Pi:
Insert the micro-SD card into our Raspberry Pi.
Connect the power supply to our Pi.
Wait for it to boot up like a sleepy bear waking from hibernation.
Configure Wi-Fi and Enable VNC:
Find Our Raspberry Pi’s IP Address:
On our Raspberry Pi, open a terminal (you can find it in the menu or use the shortcut Ctrl+Alt+T).
Type hostname -I and press Enter. This will reveal our Pi’s IP address.
Access Our Router’s Admin Interface:
Open a web browser and enter our router’s IP address (usually something like 192.168.1.1) in the address bar.
Log in using our router’s credentials (check the manual or the back of our router for the default username and password)
Assign a Static IP to Our Raspberry Pi:
Look for the DHCP settings or LAN settings section.
Add a new static IP entry for our Raspberry Pi using the IP address you found earlier. Save the changes.
Enable VNC on Our Raspberry Pi:
On our Raspberry Pi, open the terminal again.
Type sudo raspi-config and press Enter.
Navigate to Interfacing Options > VNC and enable it.
Exit the configuration tool.
Access Our Raspberry Pi Remotely via VNC:
On our computer (not the Raspberry Pi), download a VNC viewer application (like RealVNC Viewer).
Open the viewer and enter our Raspberry Pi’s IP address.
When prompted, enter the password you set during VNC setup on our Pi.
2. Install Python Libraries:
Use the Raspberry Pi terminal to install the necessary Python libraries. For the code below you'll need RPi.GPIO; the time module ships with Python's standard library.
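A typical installation on Raspberry Pi OS looks like the following (RPi.GPIO often comes preinstalled; these commands are the usual way to add it if not):

sudo apt-get update
sudo apt-get install python3-rpi.gpio
# or, equivalently, via pip:
pip3 install RPi.GPIO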
3. Python Code Development:
Write Python code to:
Activate the corresponding GPIO pin based on the touched button to control the relay.
Python code:
import RPi.GPIO as GPIO
import time
# GPIO pin numbers for the relays
relay_pins = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
def setup():
GPIO.setmode(GPIO.BCM) # Use BCM GPIO numbering
for pin in relay_pins:
GPIO.setup(pin, GPIO.OUT) # Set each pin as an output
GPIO.output(pin, GPIO.HIGH) # Initialise all relays to off (assuming active low)
def activate_solenoid(solenoid_number, duration=1):
if 1 <= solenoid_number <= 12:
pin = relay_pins[solenoid_number - 1]
GPIO.output(pin, GPIO.LOW) # Turn on the relay (assuming active low)
time.sleep(duration) # Keep the solenoid activated for the specified duration
GPIO.output(pin, GPIO.HIGH) # Turn off the relay
def cleanup():
GPIO.cleanup()
def get_user_input():
while True:
try:
user_input = input("Enter the solenoid number to activate (1-12), or 'q' to quit: ")
if user_input.lower() == 'q':
break
solenoid_number = int(user_input)
if 1 <= solenoid_number <= 12:
activate_solenoid(solenoid_number)
else:
print("Please enter a number between 1 and 12.")
except ValueError:
print("Invalid input. Please enter a number between 1 and 12, or 'q' to quit.")
if __name__ == "__main__":
try:
setup()
get_user_input()
except KeyboardInterrupt:
print("Program terminated")
finally:
cleanup()
Additional Considerations:
Flyback Diode: Adding a flyback diode across the solenoid protects the circuit from voltage spikes when the relay switches.
Status LEDs: LEDs connected to the GPIO pins can visually indicate relay and solenoid activation.
Security Measures: Consider password protection or other security features to control solenoid activation, especially for critical applications.
Putting it all Together:
Assemble the circuit on a breadboard, following the connection guidelines.
Load your written Python code onto the Raspberry Pi (already flashed with Raspberry Pi OS).
Design and implement the touchscreen interface using your chosen framework.
Test the system thoroughly to ensure proper functionality and safety.
Remember:
Always prioritize safety while working with electronics. Double-check connections and voltage ratings before powering on.
Conclusion
In conclusion, building a solenoid control system using a Raspberry Pi for IoT-based automated screen touch demonstrates a seamless integration of hardware and software to achieve precise and automated touchscreen interactions. The Raspberry Pi’s versatility and ease of programming make it an ideal choice for controlling solenoids and managing relay operations in IoT Solenoid Touch Control systems. This system not only enhances the efficiency and accuracy of automated touch actions but also expands its potential through IoT capabilities, allowing for remote control and monitoring. By leveraging the power of the Internet of Things, the IoT Solenoid Touch Control project opens up new possibilities for automation and control in various applications, from user interface testing to interactive installations.
As a Software Development Engineer in Test (SDET), I specialize in developing automation scripts for mobile applications with integrated hardware for both Android and iOS devices. In addition to my software expertise, I have designed and implemented PCB layouts and hardware systems for integrating various components such as sensors, relays, Arduino Mega, and Raspberry Pi 4. I programmed the Raspberry Pi 4 and Arduino Mega using C/C++ and Python to control connected devices. I developed communication protocols, including UART, I2C, and SPI, for real-time data transmission and also implemented SSH communication to interface between the hardware and testing framework.
Preclinical trials play a critical role in the pharmaceutical industry, focusing on ensuring a new drug's safety and efficacy before testing it in humans. As part of this process, preclinical software testing has emerged as an essential element in modern drug development. It ensures systems for managing, analyzing, and reporting preclinical data function correctly, securely, and comply with industry standards.
Preclinical trials are the foundational steps in the drug development process. Laboratories and researchers conduct these experiments on animals to gather crucial data on a drug’s safety, efficacy, and pharmacological properties before testing it on humans.
In the complex, regulated world of drug development, preclinical trials form the foundation for pharmaceutical advancements. These trials are the first step in bringing a new drug from the lab to the patient’s bedside.
Why are preclinical trials crucial?
Safety: Identifying potential side effects and toxicities early on protects human volunteers in clinical trials.
Efficacy: Evaluating a drug’s effectiveness in treating a specific disease or condition.
Dosage: Determining the optimal dosage for human use.
Pharmacokinetics and Pharmacodynamics: Understanding how a drug is absorbed, distributed, metabolized, and excreted, and how it exerts its therapeutic effects.
Regulatory Approval: Regulatory bodies, like the FDA, mandate thorough preclinical testing before approving a drug’s progression to human clinical trials. This ensures that only drugs with a reasonable safety profile move forward.
Risk Reduction: Preclinical trials identify issues early, reducing the risk of failure in costly later stages like clinical trials.
Definition and Role of Preclinical Trials
Preclinical trials are the phase of drug development that occurs before clinical trials (testing in humans) can begin. They involve a series of laboratory tests and animal studies designed to provide detailed information on a drug's safety, pharmacokinetics, and pharmacodynamics. These trials are crucial for identifying potential issues early, ensuring that only the most promising drug candidates proceed to human testing.
Safety Evaluation and Toxic Effect Identification
Primary Objective: The foremost goal of preclinical trials is to assess the safety profile of a new drug candidate. Before any new drug can be tested in humans, it must be evaluated for potential toxic effects in animals. This includes identifying any adverse reactions that could occur.
Toxicology Studies: These studies aim to find a drug's potential toxicity, identify affected organs, and determine harmful dosage levels. Understanding these parameters is critical to ensuring that the drug is safe enough to move forward into human trials.
Testing in Animal Models
Proof of Concept: Preclinical trials help establish whether a drug is effective in treating the intended condition. Researchers conduct in vitro and in vivo experiments to determine if the drug achieves the desired therapeutic effects.
Mechanism of Action: These trials also help in understanding the mechanism by which the drug works, providing insights into its potential effectiveness and helping to refine the drug’s design and formulation.
Pharmacokinetics and Pharmacodynamics Analysis
Drug Behavior: Preclinical studies examine how a drug is absorbed, distributed, metabolized, and excreted in the body (pharmacokinetics). They also investigate the drug’s biological effects and its mechanisms (pharmacodynamics).
Dose Optimization: Understanding these properties is crucial for determining the appropriate dosing regimen for human trials, ensuring that the drug reaches the necessary therapeutic levels without causing toxicity.
Regulatory Compliance and Approval Requirements
Compliance: Regulatory agencies like the FDA, EMA, and other national health authorities mandate preclinical testing before any new drug can proceed to clinical trials. These trials must adhere to Good Laboratory Practice (GLP) standards, ensuring that the studies are scientifically valid and ethically conducted.
Data Submission: The data generated from preclinical trials are submitted to regulatory bodies as part of an Investigational New Drug (IND) application, which is required to obtain approval to commence human clinical trials.
Ethical Considerations and Alternatives to Animal Testing
Patient Protection: Protecting human volunteers from unnecessary risks is a paramount ethical obligation. Preclinical trials serve to ensure that only drug candidates with a reasonable safety and efficacy profile are tested in humans, thereby safeguarding participant health and well-being.
Alternatives to Animal Testing: There is growing interest in alternative methods, such as in vitro testing using cell cultures, computer modeling, and organ-on-a-chip technologies, which can reduce the need for animal testing and provide additional insights.
Future Advancements in Preclinical Research
Technological Innovations: Advances in biotechnology, such as CRISPR gene editing, high-throughput screening, and artificial intelligence, are poised to revolutionize preclinical research. These technologies can enhance the precision and efficiency of preclinical studies, leading to more accurate predictions of human responses.
Personalized Medicine: The future of preclinical trials also lies in personalized medicine, where drugs are tailored to the genetic makeup of individual patients. This approach can improve the safety and efficacy of treatments, making preclinical trials more relevant and predictive.
Summary of Significance and Impact
Preclinical trials are a vital step in the drug development pipeline, ensuring that new pharmaceuticals are safe, effective, and ready for human testing. By rigorously evaluating potential drugs in these early stages, the pharmaceutical industry not only complies with regulatory standards but also upholds its commitment to patient safety and innovation. Understanding the importance of preclinical trials provides valuable insights into the meticulous and challenging process of developing new therapies that can significantly improve patient outcomes and quality of life.
Role of Preclinical Software Testing in Trials:
Software plays a significant role in preclinical trials, especially in the analysis and management of data. Here’s how software testing is associated with preclinical trials:
Data Management and Analysis: Software is used to manage the vast amount of data generated during preclinical trials. This includes data from various experiments, toxicology studies, and efficacy tests. Software testing ensures that these systems function correctly and handle data accurately.
Simulation and Modeling: Computational models and simulations are often used in preclinical studies to predict how a drug might behave in a biological system. Testing these software models ensures that they are reliable and produce valid predictions.
Regulatory Compliance: Software used in preclinical trials must comply with regulations such as Good Laboratory Practices (GLP). Testing ensures that the software meets these regulatory requirements, which is crucial for the acceptance of trial results by regulatory bodies.
Integration with Laboratory Equipment: Software often controls or interacts with laboratory equipment used in preclinical trials. Thoroughly testing this software is essential to ensure accurate data collection and reliable results.
When it comes to FDA approval, the testing process for drugs and associated systems, including preclinical software testing, involves several critical aspects.
1. Data Integrity and Accuracy:
Testing Focus: As a manual tester, the goal is to ensure that all data entered and stored in the system maintains its integrity and remains free from corruption or unintended changes. This involves testing scenarios related to data entry, storage, modification, and retrieval, verifying that the system accurately processes and displays the data.
Testing Strategy: Testers should manually verify that data cleaning processes work as expected, identifying and flagging any inconsistencies or errors. They must also confirm that the system correctly implements validation rules, ensuring data accuracy.
2. Compliance with Good Laboratory Practices (GLP):
Testing Focus: Testing involves verifying that the software adheres to the standards set by GLP. This includes checking that the system correctly captures changes made to data in the audit trails and retains the data as per GLP regulations.
Testing Strategy: Manual testers should create, modify, and delete data to ensure that they accurately record all activities in the audit trails. Testers must also verify that the system follows data retention policies and ensures data is available for the required retention period.
3. Electronic Records and Signatures:
Testing Focus: Test the functionality of electronic records and signatures to ensure they meet the FDA’s 21 CFR Part 11 requirements, which govern the use of electronic documentation in place of paper records.
Testing Strategy: Testers must verify the accuracy and security of electronic records, ensuring they can create, store, and retrieve them without error. They should test electronic signatures to confirm they are secure, traceable, and properly linked to the corresponding record.
4. Validation of Computational Models:
Testing Focus: Validating computational models manually, as part of preclinical software testing, involves ensuring that the outputs generated are accurate and consistent with expected results, especially when dealing with predictive models in drug trials.
Testing Strategy: A tester should manually verify model predictions by comparing results with known experimental data and run tests to identify any sensitivity in the models to input variations.
5. Risk Management:
Testing Focus: In a manual testing environment, identifying and mitigating risks is essential. Testers must test for potential risks like system crashes, data breaches, or calculation errors and implement appropriate responses.
Testing Strategy: Use risk-based testing to identify high-priority areas that could present the greatest risks to the system. Manual testers must ensure that risk mitigation strategies (like data backup and failover systems) function as intended.
6. Regulatory Submissions:
Testing Focus: Manual testing ensures the system compiles data accurately for regulatory submission, maintaining compliance and preventing errors.
Testing Strategy: Testers must manually ensure submission packages include correctly formatted documents and data, verifying completeness and regulatory compliance. They must ensure the system presents the data in a clear and compliant format.
These aspects collectively ensure that manual testing plays a critical role in delivering reliable, accurate, and FDA-compliant software systems. Each testing step ensures quality control, identifies risks, and verifies software behavior matches real-world expectations.
Conclusion:
In the pharmaceutical world, preclinical trials are essential for ensuring drug safety and effectiveness. Preclinical software testing ensures system validation, guaranteeing data accuracy and reliability in trials, playing a crucial behind-the-scenes role. This work helps pave the way for successful drug development, making testers key players in advancing medical innovation.
A Jr. SDET proficient in manual, automation, and API testing. My expertise extends to technologies such as Selenium, Playwright, Postman, SQL, GitLab, and Java. I am keen to learn new technologies and tools for test automation.
Behavior Driven Development (BDD) is a process that promotes collaboration between developers, testers, and stakeholders by writing test cases in simple, plain language. BDD Automation Frameworks like Cucumber use Gherkin to make test scenarios easily understandable and link them to automated tests.
In this guide, we’ll show you how to create a BDD Automation Framework using Java and Playwright. Playwright is a powerful browser automation tool, and when combined with Java and Cucumber, it creates a solid BDD testing framework.
Introduction to BDD Automation Framework:
Automation testing means testing software with modern tools and technologies, using developed scripts to cut testing time. It involves test case execution, data validation, and result reporting.
Why Playwright over Selenium?
Playwright is an open-source automation library (originally built for Node.js, with official bindings for Java, Python, and .NET) that enables efficient end-to-end (E2E) testing of web applications. Playwright generally offers better execution speed than Selenium, and it provides features such as cross-browser support, multi-platform support, headless and headful modes, an async/await API, and integration with testing frameworks.
What is BDD Automation Framework?
A BDD framework is an agile approach to testing software in which testers write test cases in simple language so that non-technical people can also understand the flow. It enhances collaboration between the technical team and the business team. We use the Gherkin language to write feature files, making them easily readable by everyone.
Prerequisites for BDD Automation Framework:
1. Install JDK
Install a Java environment compatible with your system.
First, choose the appropriate JDK version, and then click on the download link for the Windows version.
Run the Installer:
Once the download is complete, run the installer.
To begin, follow the installation instructions, then accept the license agreement, and finally choose the installation directory.
Set Environment Variables:
Open the Control Panel and go to System and Security > System > Advanced system settings.
Click on “Environment Variables”.
Under “System Variables,” click on “New” and add a variable named JAVA_HOME with the path to the JDK installation directory (e.g., C:\Program Files\Java\jdk-15).
Find the “Path” variable in the “System Variables” section, click on “Edit,” and add a new entry with the path to the bin directory inside the JDK installation directory (e.g., C:\Program Files\Java\jdk-15\bin).
Verify Installation:
Open a Command Prompt and check if Java is installed correctly by typing `java -version` and `javac -version`.
2. Install Maven
Download Maven: From the Apache Maven download page, click on the link to download the binary zip archive (e.g., apache-maven-3.x.y-bin.zip).
Extract the Archive:
Extract the downloaded zip file to a suitable directory (e.g., C:\Program Files\Apache\maven).
Set Environment Variables:
Open the Control Panel and go to System and Security > System > Advanced system settings.
Click on “Environment Variables”.
Under “System Variables”, click on “New” and add a variable named MAVEN_HOME with the path to the Maven installation directory (e.g., C:\Program Files\Apache\maven\apache-maven-3.x.y).
Find the “Path” variable in the “System Variables” section, click on “Edit”, and add a new entry with the path to the bin directory inside the Maven installation directory (e.g., C:\Program Files\Apache\maven\apache-maven-3.x.y\bin).
Verify Installation:
To check if Maven is installed correctly, open a Command Prompt and type `mvn -version`.
Java Development Kit (JDK): Ensure you have the JDK installed and properly configured.
Maven or Gradle: Depending on your preference, you'll need Maven or Gradle to manage your project dependencies.
Steps to Install Cucumber with Maven
Create a Maven Project:
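If you create the project from the command line rather than through an IDE, Maven's quickstart archetype is one way to do it; the groupId below is illustrative, while the artifactId matches the CalculatorBDD project described later:

mvn archetype:generate -DgroupId=com.example -DartifactId=CalculatorBDD -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false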
Update pom.xml File:
Open the pom.xml file in your project.
This Maven POM file (pom.xml) defines project metadata, dependencies on external libraries (Cucumber, Selenium, Playwright), and Maven build properties. It provides the necessary configuration for managing dependencies, compiling Java source code, and integrating with Cucumber, TestNG, Selenium, and Playwright frameworks to support automated testing and development of the CalculatorBDD project.
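As a sketch, the dependency section of such a pom.xml might look like the following; the version numbers are illustrative, so check Maven Central for current releases:

<dependencies>
    <dependency>
        <groupId>io.cucumber</groupId>
        <artifactId>cucumber-java</artifactId>
        <version>7.14.0</version>
    </dependency>
    <dependency>
        <groupId>io.cucumber</groupId>
        <artifactId>cucumber-testng</artifactId>
        <version>7.14.0</version>
    </dependency>
    <dependency>
        <groupId>org.seleniumhq.selenium</groupId>
        <artifactId>selenium-java</artifactId>
        <version>4.21.0</version>
    </dependency>
    <dependency>
        <groupId>com.microsoft.playwright</groupId>
        <artifactId>playwright</artifactId>
        <version>1.44.0</version>
    </dependency>
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <version>7.10.2</version>
    </dependency>
</dependencies>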
Before starting the project on the BDD Automation Framework:
Create a new Maven project in your IDE.
Add the dependencies to the pom.xml file.
Create the folder structure following the steps given below:
When we created the new project for the executable jar file, we could see the simple folder structure provided by Maven.
SRC Folder: The src folder is the parent folder of a project, and it includes the main and test folders. In the QA environment we generally use the test folder, while the main folder is reserved for the development team; the created JAR contains all the files inside the src folder.
Test Folder: Inside the test folder, the java and resources folders are available.
Java Folder: This folder primarily contains the Java classes where the actual code is present.
Resources Folder: The Resources folder contains the resources file, test data file, and document files.
Pom.xml: In this file we manage the dependencies and plugins required for automation (a sample layout of the whole structure follows below).
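Putting that together, the resulting layout looks roughly like this (package names are taken from the code samples below; your structure may vary):

CalculatorBDD/
  pom.xml
  src/
    test/
      java/
        core/        (TestContext, Hooks)
        pages/       (CalculatorPage)
        steps/       (CalculatorSteps)
      resources/
        features/    (Calculator.feature)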
Now that our project structure is ready, we can start with the BDD framework:
1. Feature file:
Here we describe the scenario in the Gherkin language, which is designed to be easily understandable by non-technical stakeholders as well as executable by automation tools like Cucumber. Each scenario is written in a structured manner using the keywords "Given", "When", and "Then". In Calculator.feature we have written our functional testing steps.
@Basic
Feature: Verify Calculator Operations
Scenario Outline: Verify arithmetic operations on two numbers
#Given line states that the User is on the Calculator home page and Calculator page is displayed.
Given I am on Calculator page
#When step describes an action that User enters/clicks on a number
When I enter number <number>
#And step indicates clicking on a specific operator (like addition, subtraction, etc.) on the calculator
And I click on operator '<operator>'
#And Step follows the operator click by entering another number into the calculator.
And I enter number <number1>
#Then is the verification step where the test checks if the result displayed
Then I verify the result as <expectedResult>
Examples:
| number | operator | number1 | expectedResult |
| 5 | + | 2 | 7 |
| 9 | - | 3 | 6 |
| 6 | * | 4 | 24 |
| 2 | / | 2 | 1 |
2. Step Def File:
The step definition file serves as the bridge between the feature file and the actual method implementations in the page file. CalculatorSteps is a step definition file that maps the feature file steps to the page file and its functional implementation.
package steps;

import core.TestContext;
import io.cucumber.java.en.And;
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.testng.Assert;
import pages.CalculatorPage;

import java.io.IOException;

public class CalculatorSteps extends TestContext {

    public CalculatorSteps() {
        // The constructor initializes a new instance of CalculatorPage,
        // so CalculatorSteps can interact with the calculator web page
        // through the methods of the CalculatorPage class
        calculatorPage = new CalculatorPage();
    }

    // Calls the CalculatorPage method that is in sync with the Given step in the feature file
    @Given("I am on Calculator page")
    public void iAmOnCalculatorPage() throws IOException {
        calculatorPage.iAmOnCalculatorPage();
    }

    // Calls the page method that is in sync with the When step in the feature file
    @When("I enter number {int}")
    public void iEnterNumber(int number) {
        calculatorPage.iEnterNumber(number);
    }

    // Calls the page method that is in sync with the And step in the feature file
    @And("I click on operator {string}")
    public void iClickOnOperator(String operator) {
        calculatorPage.iClickOnOperator(operator);
    }

    // Calls the page method that is in sync with the Then step and asserts the result
    @Then("I verify the result as {int}")
    public void iVerifyTheResultAs(int expectedResult) {
        String actualResult = calculatorPage.iVerifyTheResultAs();
        Assert.assertEquals(actualResult, String.valueOf(expectedResult));
    }
}
3. Page File:
The page file contains the actual implementation of the methods called from the step definition file. Here we keep all the methods and web page elements in one place, ensuring easy access and organization; it is basically the Page Object Model (POM) structure. Since we are performing arithmetic operations on a calculator web application, we created one method to click a number and another to click an operator, minimizing and reusing code as much as possible.
package pages;

import core.TestContext;
import utilities.ConfigUtil;

import java.io.IOException;

public class CalculatorPage extends TestContext {

    // Navigates to the calculator application using the base_url from the config file
    public void iAmOnCalculatorPage() throws IOException {
        page.navigate(ConfigUtil.getPropertyValue("base_url"));
    }

    // Clicks the button for the given digit
    public void iEnterNumber(int number) {
        page.locator("//span[@onclick='r(" + number + ")']").click();
    }

    // Clicks the button for the given operator (+, -, *, /)
    public void iClickOnOperator(String operator) {
        page.locator("//span[@onclick=\"r('" + operator + "')\"]").click();
    }

    // Clicks '=' and returns the displayed result text
    public String iVerifyTheResultAs() {
        page.locator("//span[@onclick=\"r('=')\"]").click();
        return page.locator("//div[@id='sciOutPut']").innerText().trim();
    }

    public void tearDown() {
        page.close();
    }
}
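CalculatorPage reads the application URL through ConfigUtil, which this post does not show. A minimal sketch, assuming a config.properties file under src/test/resources with a base_url entry, might look like this:

package utilities;

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ConfigUtil {

    // Loads config.properties from the classpath and returns the value for the given key
    public static String getPropertyValue(String key) throws IOException {
        Properties properties = new Properties();
        try (InputStream input = ConfigUtil.class.getClassLoader().getResourceAsStream("config.properties")) {
            if (input == null) {
                throw new IOException("config.properties not found on the classpath");
            }
            properties.load(input);
        }
        return properties.getProperty(key);
    }
}

The config.properties file would then contain a single entry of the form base_url=<URL of the calculator application>.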
4. Hooks:
Hooks are setup and teardown methods that are written separately in a configuration class. In the Hooks file we declare the @Before and @After annotations; these mark steps to be performed before and after each scenario of the feature file. Here we open the web browser in the @Before method and close it in the @After method. These special functions allow testers to execute code at specific points during the run.
package core;

import io.cucumber.java.After;
import io.cucumber.java.Before;
import io.cucumber.java.Scenario;
import utilities.WebUtil;

// The Hooks class inherits the shared fields of the TestContext class
public class Hooks extends TestContext {

    @Before // denotes that this method is executed before each scenario
    public void beforeScenario(Scenario scenario) {
        page = WebUtil.initBrowser(); // initializes the browser session
    }

    @After // denotes that this method is executed after each scenario
    public void afterScenario() {
        WebUtil.tearDownPW(); // performs cleanup tasks such as closing the browser page
    }
}
5. TestContext:
The TestContext class holds the shared instances and variables required for test execution: the Playwright Page instance, the page object instance, and the Browser. Centralizing them here improves code reusability, organization, and maintainability.
package core;

import com.microsoft.playwright.Browser;
import com.microsoft.playwright.Page;
import pages.CalculatorPage;

// TestContext acts as a container that stores the shared instances for the test framework
public class TestContext {

    // Playwright's Page object, which controls a specific browser tab or page
    public static Page page;

    // An instance of the CalculatorPage object, representing the page object model (POM)
    public static CalculatorPage calculatorPage;

    // Playwright's Browser instance, which represents the entire browser
    public static Browser browser;
}
6. TestRunner:
The Test Runner is responsible for discovering test cases, executing them, and reporting the results back; it provides the infrastructure needed to execute the tests and manage the testing workflow. It also links the feature files with the step definition files.
package core;

import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;
import org.testng.annotations.DataProvider;

@CucumberOptions(
        features = "src/test/java/features", // the path where the Cucumber feature files are located
        glue = {"steps", "core"}) // tells Cucumber where to find the step definitions and hooks
// Extends AbstractTestNGCucumberTests, a base class provided by Cucumber to run the tests with TestNG
public class TestRunner extends AbstractTestNGCucumberTests {

    @DataProvider // allows running multiple Cucumber scenarios as separate tests in TestNG
    @Override
    public Object[][] scenarios() {
        // Calls the parent class method to return all the Cucumber scenarios
        // in an array format for TestNG to run
        return super.scenarios();
    }
}
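With the runner in place, the scenarios can be executed directly from the IDE as a TestNG test, or from the command line with mvn test, since Maven Surefire's default include patterns should pick up a class named TestRunner.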
7. WebUtil:
WebUtil is the class in which the browser instance is created and Playwright is initialized. The code for launching the web browser page and for closing the browser instance is written here. WebUtil extends TestContext, so all the properties of TestContext are available to it.
package utilities;

import com.microsoft.playwright.BrowserType;
import com.microsoft.playwright.Page;
import com.microsoft.playwright.Playwright;
import core.TestContext;

public class WebUtil extends TestContext {

    // Initializes a browser session using Playwright's Chromium browser
    public static Page initBrowser() {
        Playwright playwright = Playwright.create(); // creates an instance of Playwright
        browser = playwright.chromium().launch(new BrowserType.LaunchOptions().setHeadless(false));
        page = browser.newPage(); // creates a new page/tab within the launched browser
        return page;
    }

    // Called to close the current page/tab
    public static void tearDownPW() {
        page.close();
    }
}
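Note that tearDownPW() closes only the current page. In a fuller implementation you would typically also keep references to the Browser and Playwright instances and close them after the run, so that no browser processes are left behind.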
8. Pom.xml:
This is the important file through which we download all the dependencies required for test execution. It also contains the project information and the configuration Maven needs to build the project, such as the dependencies, build directory, source directory, test source directory, plugins, and goals (see the sketch under "Update pom.xml File" above).
In this blog, we’ve discussed using the Java Playwright framework with Cucumber for BDD. Playwright offers fast, cross-browser testing and easy parallel execution, making it a great alternative to Selenium. Paired with Cucumber, it helps teams write clear, automated tests. Playwright’s debugging tools and test isolation also reduce test issues and maintenance, making it ideal for building reliable test suites for faster, higher-quality software delivery.