Complete Manual Software Testing Tutorial with Real-time Tools
1. Introduction to Manual Testing
Manual testing is a type of software testing where testers manually execute test cases without using any automation tools. The goal of manual testing is to identify bugs, defects, and inconsistencies in the software application to ensure it meets user requirements and functions as expected.
Why Manual Testing is Essential:
- Human Insight: Captures subjective aspects like usability, user experience (UX), and aesthetic appeal.
- Exploratory Testing: Allows for ad-hoc and creative testing, uncovering issues that automated scripts might miss.
- Cost-Effective for Small Projects: For small applications or projects with frequently changing requirements, setting up automation might be overkill.
- Early Feedback: Provides quick feedback during initial development phases.
2. SDLC and STLC
Software Development Life Cycle (SDLC)
SDLC is a process followed for a software project within a software organization. It consists of a detailed plan describing how to develop, maintain, replace, and enhance specific software.
Phases typically include: Requirement Gathering, Design, Implementation/Coding, Testing, Deployment, and Maintenance.
Software Testing Life Cycle (STLC)
STLC is a sequence of activities performed by the testing team to ensure the quality of the software. It's an integral part of the SDLC.
- Requirement Analysis: Understanding the functional and non-functional requirements.
- Test Planning: Defining the scope, objectives, strategy, and resources for testing.
- Test Case Development: Designing detailed test cases based on requirements.
- Test Environment Setup: Preparing the necessary hardware and software to execute tests.
- Test Execution: Running the test cases and logging defects.
- Test Cycle Closure: Evaluating test completion criteria, reporting, and sign-off.
3. Types of Manual Testing
Manual testing encompasses various techniques and types to cover different aspects of software quality.
Functional Testing:
- Unit Testing: Testing individual components/modules. (Often done by developers, but testers may review)
- Integration Testing: Testing interactions between integrated units.
- System Testing: Testing the complete integrated system against requirements.
- User Acceptance Testing (UAT): Testing by end-users to verify the system meets business needs.
- Regression Testing: Re-testing existing features to ensure new changes haven't broken them.
- Sanity Testing: A quick, narrow check after a minor change or bug fix to confirm the affected functionality works as intended.
- Smoke Testing: A quick, broad test of the most critical functions to confirm a new build is stable enough for detailed testing.
Non-Functional Testing (Can have manual aspects):
- Usability Testing: Evaluating the ease of use and user-friendliness.
- Performance Testing: Checking speed, responsiveness, and stability under workload. (Often automated, but initial checks can be manual).
- Security Testing: Identifying vulnerabilities. (Often specialized, but basic checks are manual).
- Compatibility Testing: Checking functionality across different browsers, OS, devices.
- Localization Testing: Testing for specific locales or regions (language, currency, etc.).
Exploratory Testing:
Simultaneous learning, test design, and test execution. Testers explore the application without pre-defined test cases, using their knowledge and intuition.
4. Test Case Management
Effective test case management is crucial for organized and repeatable manual testing.
What is a Test Case?
A set of conditions, inputs, and expected results under which a tester determines whether an application or software system is working correctly.
Key Elements of a Test Case:
- Test Case ID: Unique identifier.
- Test Suite ID/Name: Group the test case belongs to.
- Module/Feature: Specific part of the application being tested.
- Test Priority: High, Medium, Low (based on criticality).
- Pre-conditions: Steps or state required before executing the test.
- Test Steps: Detailed instructions to perform the test.
- Test Data: Specific input data required.
- Expected Result: The anticipated outcome if the feature works correctly.
- Actual Result: The observed outcome after execution.
- Status: Pass, Fail, Blocked, Not Run.
- Post-conditions: Cleanup or state after the test.
- Tester Name: Who executed the test.
- Execution Date: When the test was executed.
- Bug ID (if failed): Reference to the reported defect.
Test Case Example: User Login with Valid Credentials
Test Case ID: TC_LOGIN_001
Test Suite: Authentication
Module: User Management
Test Priority: High
Pre-conditions: User "testuser" exists with password "password123". Application is accessible.
Test Steps:
1. Navigate to the login page.
2. Enter "testuser" in the Username field.
3. Enter "password123" in the Password field.
4. Click the "Login" button.
Test Data: Username: testuser, Password: password123
Expected Result: User is successfully logged in and redirected to the dashboard. Welcome message "Welcome, testuser!" is displayed.
Actual Result: [To be filled during execution]
Status: [To be filled during execution]
Tester Name: [To be filled during execution]
Execution Date: [To be filled during execution]
Bug ID: [To be filled if failed]
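If test cases are also kept as structured data (for a spreadsheet export or import into a tool), the same record can be sketched as a simple object. This is only an illustration; the field names below are not any particular tool's import schema.
// Illustrative test case record; field names are hypothetical, not a tool-specific schema.
const testCase = {
  id: "TC_LOGIN_001",
  suite: "Authentication",
  module: "User Management",
  priority: "High",
  preconditions: 'User "testuser" exists with password "password123"; application is accessible.',
  steps: [
    "Navigate to the login page.",
    'Enter "testuser" in the Username field.',
    'Enter "password123" in the Password field.',
    'Click the "Login" button.'
  ],
  testData: { username: "testuser", password: "password123" },
  expectedResult: 'User is logged in, redirected to the dashboard, and sees "Welcome, testuser!".',
  actualResult: null, // filled in during execution
  status: "Not Run",  // Pass, Fail, Blocked, Not Run
  bugId: null         // set only if the test fails
};
console.log(`${testCase.id} [${testCase.status}] ${testCase.module}`);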
Traceability Matrix:
A document that links requirements to test cases. It ensures that every requirement is covered by at least one test case, helping track testing completeness.
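A minimal sketch of such a mapping, with hypothetical requirement IDs, plus a quick check for requirements that have no covering test case:
// Illustrative requirement-to-test-case mapping; requirement IDs are hypothetical.
const traceabilityMatrix = {
  "REQ-AUTH-01 Valid login": ["TC_LOGIN_001"],
  "REQ-AUTH-02 Invalid login rejected": ["TC_LOGIN_002", "TC_LOGIN_003"],
  "REQ-AUTH-03 Password reset email": [] // coverage gap: no test case yet
};
// Flag requirements that are not covered by any test case.
for (const [requirement, testCases] of Object.entries(traceabilityMatrix)) {
  if (testCases.length === 0) {
    console.warn("No test coverage for:", requirement);
  }
}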
5. Bug Tracking and Reporting
Finding bugs is only half the battle; effectively reporting them is key to getting them fixed.
What is a Bug?
A defect or error in the software that causes it to behave incorrectly or produce unexpected results.
Key Elements of a Bug Report:
- Bug ID: Unique identifier.
- Title/Summary: Concise description of the bug.
- Module/Feature: Where the bug was found.
- Severity: How serious is the impact? (e.g., Critical, Major, Minor, Cosmetic)
- Priority: How urgent is it to fix? (e.g., High, Medium, Low)
- Reporter: Who found and reported the bug.
- Date Reported: When the bug was reported.
- Steps to Reproduce: Clear, numbered steps to replicate the bug.
- Actual Result: What happened when the steps were followed.
- Expected Result: What should have happened.
- Environment: Browser, OS, device, application version.
- Attachments: Screenshots, video recordings, log files.
- Status: New, Open, Assigned, Fixed, Retest, Closed, Reopened, Deferred, Duplicate.
- Assigned To: The developer responsible for fixing it.
Bug Report Example: Login button is disabled after entering valid credentials
Bug ID: BUG_AUTH_005
Title/Summary: Login button is disabled after entering valid credentials
Module: User Management
Severity: Major
Priority: High
Reporter: John Doe
Date Reported: 2025-07-18
Steps to Reproduce:
1. Navigate to the login page (e.g., https://yourapp.com/login).
2. Enter valid username "testuser" in the Username field.
3. Enter valid password "password123" in the Password field.
4. Observe the state of the "Login" button.
Actual Result: The "Login" button remains disabled even after both valid username and password are entered. Cannot proceed to login.
Expected Result: The "Login" button should become enabled after both valid username and password are entered, allowing the user to click it and proceed.
Environment: Chrome v126.0, Windows 10, Application Version: 1.0.0-beta
Attachments: screenshot_login_button_disabled.png
Status: New
Assigned To: Unassigned
6. Real-time Tools for Manual Testing
While manual testing doesn't use automation scripts, it relies heavily on tools for managing the testing process, reporting, and communication.
A. Test Management Tools:
Used to plan, organize, and track test cases, test cycles, and test execution.
- Jira (with plugins like Zephyr Scale/Squad, Xray):
- Overview: Jira is a widely used issue tracking and project management tool. Test management plugins extend it into a robust test management solution.
- Key Features: Test case creation, organization into test cycles, execution tracking, linking test cases to requirements and bugs, dashboards, reporting.
- How to use for Manual Testing:
- Create "Test" issue types for test cases.
- Write detailed test steps and expected results within each test issue.
- Organize tests into "Test Cycles" for specific releases or features.
- Execute tests by updating the status (Pass/Fail/Blocked) and adding actual results/screenshots.
- Directly create linked "Bug" issues from failed test cases.
- TestRail:
- Overview: A dedicated, web-based test case management tool.
- Key Features: Centralized test case repository, customizable test plans and runs, rich reporting, integration with bug trackers (Jira, Bugzilla, etc.).
- How to use for Manual Testing:
- Design and store test cases in a structured hierarchy.
- Create test runs by selecting relevant test cases for a specific testing phase.
- Testers execute tests, marking status, adding comments, and attaching evidence (see the API sketch after this tool list).
- Report bugs directly from TestRail to integrated bug tracking systems.
- Tricentis qTest:
- Overview: A comprehensive test management solution, particularly strong for enterprise environments.
- Key Features: Requirement traceability, test planning, execution, defect management, reporting, automation integration.
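To show how manual execution results can flow into a test management tool, the sketch below records a result for one test case through TestRail's REST API v2. The host, run and case IDs, and credentials are placeholders, and the endpoint path and status IDs should be confirmed against your instance's API documentation; treat this as a sketch, not a definitive integration.
// Sketch: record a manual test result in TestRail via its REST API v2 (run from Node 18+ or a small script).
// Host, runId, caseId, and credentials are placeholders.
const host = "https://yourcompany.testrail.io";
const runId = 12;   // hypothetical test run ID
const caseId = 345; // hypothetical test case ID
const auth = btoa("tester@yourcompany.com:API_KEY"); // basic auth: email + API key
fetch(`${host}/index.php?/api/v2/add_result_for_case/${runId}/${caseId}`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": `Basic ${auth}`
  },
  body: JSON.stringify({
    status_id: 5, // assumed default status IDs: 1 = Passed, 5 = Failed
    comment: "Login button stays disabled after entering valid credentials."
  })
})
  .then(response => response.json())
  .then(result => console.log("Result recorded:", result))
  .catch(error => console.error("Failed to record result:", error));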
B. Bug Tracking / Defect Management Tools:
Used to log, track, prioritize, and manage the lifecycle of defects.
- Jira:
- Overview: Primarily an issue tracker, excellent for bug management.
- Key Features: Customizable workflows, fields, robust searching (JQL), dashboards, reporting, notifications.
- How to use for Manual Testing:
- Create "Bug" issue types with detailed descriptions (steps to reproduce, actual/expected results).
- Attach screenshots/videos.
- Assign bugs to developers.
- Track bug status through its lifecycle (New, Open, In Progress, Fixed, Reopened, Closed).
- Collaborate with developers via comments.
- Bugzilla:
- Overview: An open-source web-based bug tracking system.
- Key Features: Comprehensive bug reporting, search capabilities, email notifications, reporting.
- Asana / Trello / Monday.com (Lightweight options):
- Overview: General project management tools that can be adapted for simple bug tracking, especially for smaller teams or agile sprints.
- Key Features: Task boards, checklists, attachments, comments, due dates.
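To make bug filing concrete, the sketch below creates a Bug issue for the example defect from section 5 through Jira's REST API (v2). The site URL, project key, and credentials are placeholders, and the available fields depend on your Jira project's configuration; treat it as a sketch rather than a drop-in script.
// Sketch: create a Bug issue in Jira via its REST API v2 (run from Node 18+ or a small script).
// Site URL, project key, and credentials are placeholders.
const jiraSite = "https://yourcompany.atlassian.net";
const auth = btoa("reporter@yourcompany.com:API_TOKEN"); // basic auth: email + API token
fetch(`${jiraSite}/rest/api/2/issue`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": `Basic ${auth}`
  },
  body: JSON.stringify({
    fields: {
      project: { key: "APP" }, // hypothetical project key
      issuetype: { name: "Bug" },
      summary: "Login button is disabled after entering valid credentials",
      description: "Steps to reproduce:\n1. Navigate to the login page.\n2. Enter valid username and password.\n3. Observe the Login button.\n\nActual: button stays disabled.\nExpected: button becomes enabled.",
      priority: { name: "High" }
    }
  })
})
  .then(response => response.json())
  .then(issue => console.log("Created bug:", issue.key))
  .catch(error => console.error("Failed to create bug:", error));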
C. Communication and Collaboration Tools:
Essential for team coordination, discussing issues, and sharing information.
- Slack / Microsoft Teams:
- Overview: Chat-based collaboration platforms.
- Key Features: Channels for different topics/projects, direct messaging, file sharing, integrations with other tools (Jira, GitHub).
- How to use for Manual Testing: Quick discussions with developers about bugs, sharing urgent findings, daily stand-ups, asking for clarification on requirements (a webhook sketch follows this tool list).
- Confluence / Google Docs:
- Overview: Document collaboration platforms.
- Key Features: Creating and sharing test plans, strategy documents, meeting notes, knowledge base.
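As one example of such an integration, the sketch below posts a short bug notification to a Slack channel through an incoming webhook. The webhook URL is a placeholder you would generate in your Slack workspace; run it from Node 18+ or a backend script, since browsers may block cross-origin webhook calls.
// Sketch: notify a Slack channel about a new bug via an incoming webhook (URL is a placeholder).
const webhookUrl = "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX";
fetch(webhookUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    text: "BUG_AUTH_005 (High): Login button is disabled after entering valid credentials - Chrome v126, Windows 10"
  })
})
  .then(response => console.log("Slack notification status:", response.status))
  .catch(error => console.error("Failed to notify Slack:", error));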
D. Screenshot and Screen Recording Tools:
Crucial for providing clear evidence of bugs.
- Snipping Tool (Windows) / Screenshot (macOS) / Greenshot (Windows) / Lightshot:
- Overview: Built-in or third-party tools for capturing screenshots.
- Key Features: Select specific areas, add annotations (arrows, text boxes), copy to clipboard, save as image.
- Loom / OBS Studio / ShareX:
- Overview: Tools for recording screen activity.
- Key Features: Record entire screen or specific windows, record audio, basic editing, direct upload/sharing.
- How to use for Manual Testing: Record steps to reproduce complex bugs, demonstrate user flows, provide context for UX issues.
E. Browser Developer Tools:
Built-in tools in web browsers (Chrome DevTools, Firefox Developer Tools) are indispensable for web application testing.
- Elements Tab: Inspect HTML and CSS, modify styles on the fly.
- Console Tab: View JavaScript errors, log messages, execute JS commands.
- Network Tab: Monitor network requests (status codes, response times, payloads). Essential for API testing within the browser.
- Application Tab: Inspect local storage, session storage, cookies.
- Performance Tab: Analyze page load and rendering performance.
- Responsive Design Mode: Test how the website looks and behaves on different screen sizes and device types.
// Example use in Browser DevTools (Console Tab)
console.log("Checking if element exists:", document.getElementById("loginButton"));
// Example in Network Tab: Check an API response
// After performing an action that triggers an API call (e.g., clicking Login),
// go to the Network tab, click on the API request, then the 'Preview' or 'Response' tab
// to see the JSON/XML response. Check status codes such as 200 (OK), 400 (Bad Request), 500 (Server Error).
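The same kind of API check can be scripted directly from the Console. A minimal sketch, assuming a hypothetical /api/login endpoint on the application under test:
// Sketch: call a hypothetical login endpoint from the Console and inspect the response.
fetch("/api/login", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ username: "testuser", password: "password123" })
})
  .then(response => {
    console.log("Status:", response.status); // expect 200 for valid credentials
    return response.json();
  })
  .then(data => console.log("Response body:", data))
  .catch(error => console.error("Request failed:", error));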
7. Best Practices in Manual Testing
- Understand Requirements Thoroughly: Before testing, ensure a deep understanding of what the software is supposed to do.
- Design Comprehensive Test Cases: Write clear, concise, and complete test cases covering all scenarios (positive, negative, boundary).
- Prioritize Tests: Focus on critical functionalities and high-risk areas first.
- Document Everything: Maintain clear records of test cases, execution results, and bug reports.
- Reproduce Bugs Reliably: Ensure reported bugs can be consistently reproduced before assigning them.
- Provide Clear Bug Reports: A well-written bug report saves significant time for developers.
- Regression Test Diligently: Regularly re-test existing features to catch unintended side effects of new code.
- Think Outside the Box (Exploratory Testing): Don't just follow test cases; explore the application for unexpected behavior.
- Collaborate Effectively: Maintain open communication with developers, business analysts, and other stakeholders.
- Continuous Learning: Stay updated with new testing techniques, tools, and industry trends.
8. Career in Manual Testing
Manual testing serves as a foundational role in software quality assurance. While automation is growing, manual testers remain critical for areas requiring human judgment and intuition.
Skills for a Manual Tester:
- Strong analytical and problem-solving skills.
- Attention to detail.
- Good communication (written and verbal).
- Knowledge of SDLC and STLC.
- Ability to design and execute test cases.
- Proficiency in bug reporting.
- Understanding of different testing methodologies (Agile, Waterfall).
- Basic SQL knowledge (for backend data validation).
- Familiarity with various operating systems and browsers.
- Experience with test management and bug tracking tools.
Future Trends:
The role of a manual tester is evolving. Testers are increasingly expected to have a broader understanding of the development process, be proficient with various tools, and even contribute to automation efforts (e.g., by writing pseudo-code or understanding basic scripting concepts).
Manual testing will always be relevant, especially for areas like usability, user experience, and exploratory testing where human intuition is irreplaceable.
Key Takeaway: Manual testing is the cornerstone of software quality, ensuring that applications are not just functional, but also user-friendly and reliable. Mastering the art of test case design, effective bug reporting, and leveraging real-time tools like Jira and browser developer tools are crucial for any manual tester.