Top Manual Testing Interview Questions and Answers

Are you getting ready for a manual testing interview or looking to find a talented manual tester? You've come to the right place. We've put together a helpful list of interview questions, covering everything from basic ideas to more in-depth topics, to help you prepare and feel more confident. Whether you're preparing for an interview or screening potential candidates, these questions will point you in the right direction.


Basic

Software testing is the process of evaluating a software application to make sure it works as intended, meets user needs, and is free of defects. It helps teams catch issues before the product launches, improving quality, reliability, and user satisfaction. Ultimately, testing reduces risk and cost while delivering a better experience to users.

For example, before launching a banking app, testers check if money transfers happen smoothly and securely.
 

Quality Assurance focuses on planning and setting up the right processes to prevent defects from happening. Quality Control comes in after the product is built and checks for any mistakes before delivery. Software testing happens during development, where the team runs the software to find and fix bugs.
For example, the QA team creates guidelines, the QC team reviews the final product for errors, and testers check the software as it is being developed to catch issues early.
 

The Software Development Life Cycle (SDLC) is a sequential set of steps that guides development teams through planning, building, testing, and launching software applications. It divides development into stages such as planning, analysis, design, development, testing, deployment, and maintenance. Following the SDLC helps teams meet deadlines and budgets, ensures the software meets user needs, improves quality, and reduces risk.
Say you’re making an online store. You’d probably start by asking what features shoppers expect, then plan the site layout, build the pages and shopping cart, test for broken links or payment issues, and finally launch it for users.
 

The Software Testing Life Cycle (STLC) is the step-by-step process testers follow, starting from reviewing project requirements all the way to wrapping up after testing. It usually begins with understanding what needs testing, then moves on to planning, creating test cases, running the tests, reporting any bugs, and finally closing the test phase.

For example, if you’re testing a login feature, you’d first write down different test scenarios, check how the system responds, and note any login errors you find.
 

Verification focuses on making sure the product is being built the correct way. For example, this could involve reviewing design documents or checking the code for errors. Validation, on the other hand, is about making sure we are building the right product. This usually involves testing the actual software to confirm it meets user needs and expectations.

When writing tests, it is important to know the difference between a test scenario and a test case. A test scenario gives a broad idea of what needs to be tested, like checking the login feature. A test case goes deeper and provides specific steps to follow. For example, enter a valid username and password, click the login button, and then check if the dashboard appears.
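If that login test case were automated, it might look like the pytest sketch below; login() and its return values are hypothetical stand-ins for the real application.

```python
# A minimal sketch of the login test case above.
# login() is a hypothetical stand-in for the real application.
def login(username: str, password: str) -> str:
    valid = {"demo_user": "correct_password"}
    return "dashboard" if valid.get(username) == password else "login_error"

def test_valid_login_shows_dashboard():
    # Steps: enter a valid username and password, click login.
    result = login("demo_user", "correct_password")
    # Expected result: the dashboard appears.
    assert result == "dashboard"
```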

In software testing, keeping good documentation makes the entire process smoother. It helps testers stay clear on what needs to be tested, how the tests are done, and what the outcomes are. This makes it easier for the whole team to stay on the same page. Common examples of test documentation include test plans, test cases, and bug reports.

Black box testing is a software testing approach in which a tester verifies and validates the functionality of an application without any knowledge of the underlying code or internal structure. It evaluates the user-level functionality of the application (what it does rather than how it works).

Unlike black box testing, white box testing (also called clear box testing or structural testing) examines an application with full knowledge of its internal code and architecture, exercising its paths, conditions, and logical structures.

For instance, black box testing a calculator app means entering numbers and checking that the result is correct, while white box testing means reviewing the logic behind how the calculator performs the addition.
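To make the white box idea concrete, here is a tiny sketch with a made-up function (not from the article): because we can see the code has two branches, we know we need a test for each.

```python
# White-box view: the code shows two branches, so we test both.
def absolute_value(n: int) -> int:
    if n < 0:
        return -n   # branch 1: negative input
    return n        # branch 2: zero or positive input

def test_negative_branch():
    assert absolute_value(-5) == 5

def test_non_negative_branch():
    assert absolute_value(7) == 7
```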

Priority and severity are terms used in bug tracking to classify and triage software issues. Severity refers to how seriously the defect affects the application, while priority defines how urgently it needs fixing from a business or user perspective. In short, severity is about impact; priority is about urgency.

Example:
If a small feature crashes but hardly anyone uses it, that’s high severity but low priority. On the other hand, a typo on the homepage might be low severity but high priority since it affects customer perception.

Smoke Testing is a quick round of checks to make sure the main features of an app work after a new build, like making sure the app opens and its key pages load properly.
Sanity Testing, on the other hand, focuses on specific fixes. For instance, after fixing a login bug, testers would recheck just that area to confirm the issue is resolved.
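A smoke pass can even be captured as a small automated check. The sketch below is illustrative only; page_loads() is a stand-in for however your app actually serves pages.

```python
import pytest

# Stand-in for the application under test; a real smoke check
# would hit the running app or its API.
def page_loads(path: str) -> bool:
    available = {"/", "/login", "/products", "/checkout"}
    return path in available

@pytest.mark.parametrize("path", ["/", "/login", "/products", "/checkout"])
def test_key_page_loads(path):
    # Smoke check: every critical page should load after a new build.
    assert page_loads(path)
```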
 


Intermediate

When testing software, we go through different layers.
Unit Testing checks one specific function or block of code, for example, making sure the cart total in a shopping app is calculated correctly (see the sketch after this list).
Integration Testing ensures that modules work together, such as verifying that the cart connects properly with the payment system.
Then comes System Testing, where the whole app is tested end-to-end, including login, product search, checkout, and so on.
Finally, there's User Acceptance Testing (UAT). This is when real users try the app and give feedback, a crucial step before going live.
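To make the unit-testing level concrete, here is a minimal pytest sketch; cart_total() is a made-up stand-in for the real function under test.

```python
# Stand-in for the real cart-total function under test.
def cart_total(prices: list[float]) -> float:
    return round(sum(prices), 2)

def test_cart_total_sums_item_prices():
    assert cart_total([19.99, 5.01]) == 25.00

def test_cart_total_is_zero_for_empty_cart():
    assert cart_total([]) == 0.0
```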

Defect cascading in software testing refers to one defect triggering more defects later in development or testing. If the first bug is missed, related ones can ripple through the system, making them difficult to trace and fix. Catching defects early is the best way to prevent the cascade.

For example, say there's a miscalculation in the billing module. If that isn’t fixed, the invoice shows the wrong amount, and eventually, even financial reports pull incorrect totals from it.
 

Exploratory testing is a technique in which testers explore the software freely, without pre-written test cases, to find defects that scripted tests might miss. Use exploratory testing when requirements are incomplete or rapid feedback is needed.

For example, you might open a brand-new mobile app and start tapping through different features just to see what breaks or behaves oddly, with no fixed plan, guided only by intuition and experience.
 

Boundary Value Analysis (BVA) is a black-box test technique that targets the edges of an input range, where defects are most likely to occur. Instead of testing every value, testers check the minimum, the maximum, and the values just below and just above those boundaries. This keeps the number of test cases small while still uncovering the defects that tend to hide at input boundaries.

Let’s say a form accepts ages from 18 to 60. To apply BVA, you’d test with 17 and 61 (just outside the range), and 18 and 60 (right on the edges). This helps catch bugs that often show up near boundaries.
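Automated, those boundary picks might look like this; the age validator is a made-up stand-in for the form being tested.

```python
import pytest

# Stand-in for the form's age validation: accepts 18-60 inclusive.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# BVA tests the boundaries and the values just outside them.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (60, True),   # upper boundary
    (61, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected
```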
 

Equivalence Partitioning is a black-box testing technique in which input data is divided into groups (partitions) that are expected to behave the same way, typically valid and invalid ranges. Rather than testing every input, testers pick one representative value from each partition, saving time while still achieving good coverage. The underlying assumption is that if one value in a partition works (or fails), the rest of the partition will behave the same.

For example, if a system accepts ages from 18 to 60, we might test one value from the valid range, like 25 or 30, and one from outside it, like 15 or 65. The idea is that any value in each group should behave similarly.
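In code form, equivalence partitioning takes one representative value per partition; the validator below is the same kind of stand-in used in the BVA sketch above.

```python
import pytest

# Stand-in validator: accepts ages 18-60 inclusive.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# One representative value per partition is enough.
@pytest.mark.parametrize("age, expected", [
    (15, False),  # invalid partition: below 18
    (30, True),   # valid partition: 18-60
    (65, False),  # invalid partition: above 60
])
def test_age_partitions(age, expected):
    assert is_valid_age(age) == expected
```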
 

Manual testing can get extremely tedious in large projects, especially when we're checking across multiple browsers and devices. It's not just time-consuming; it also leaves room for human error, particularly when tests have to be repeated frequently without automation in place. And when deadlines are tight, it's almost impossible to cover every test scenario thoroughly.

When testing, start with what matters most to the user and the business. For instance, in an online payment platform, you'd want to test the entire payment flow thoroughly, including card processing, error handling, and transaction confirmation. User interface details like button colors or font spacing can be the next priority. Prioritizing this way ensures you catch serious issues before users run into them.

It's important to ensure that every requirement is tested properly. One way to do this is by using a Requirement Traceability Matrix (RTM), which links each requirement to specific test cases. Take the login module, for instance: you'd typically write separate test cases for valid and invalid logins, as well as for password recovery. Reviewing and updating your test scenarios regularly helps ensure complete coverage, especially when requirements change over time.
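At its simplest, an RTM is just a mapping from requirements to test cases. The sketch below uses hypothetical IDs to show how a coverage gap becomes visible.

```python
# A minimal RTM sketch; requirement and test-case IDs are hypothetical.
rtm = {
    "REQ-001 valid login":       ["TC-101"],
    "REQ-002 invalid login":     ["TC-102", "TC-103"],
    "REQ-003 password recovery": [],  # no tests linked yet: a gap
}

# Any requirement without linked test cases is a coverage gap.
gaps = [req for req, cases in rtm.items() if not cases]
print("Coverage gaps:", gaps or "none")
```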

When you find a bug, make sure to document the steps to reproduce it. It’s even better if you can attach a quick video or screenshot to help the developer see exactly what went wrong. For example, if the login fails even when valid credentials are used, include the error message or relevant logs. Don’t hesitate to check the requirement doc for expected behavior, and if things remain unclear, loop in the lead during your discussion.

When requirements are unclear, it's important to ask questions early. Talk to the product owner or check how similar features were handled in the past. For instance, if there's no mention of the allowed length for login fields, you can either reference existing login modules or get clarity directly from the right person in the team or company. It also helps to document any assumptions made so everyone can see them, especially when reviewing the test report later.

Advanced

The defect life cycle outlines what happens to a bug from the moment it's found until it's either fixed or closed. Typically, it starts as New, gets assigned to a developer, moves to Open, and then goes through stages like Fixed, Retest, Verified, and finally Closed. If the issue isn't resolved during testing, it might get reopened.
For example, say a user reports that valid login credentials aren’t working. The tester logs the bug, the developer works on it, and once fixed, the tester retests it. If everything looks good, the bug is marked as closed. If not, it goes back for another fix.
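One way to picture the defect life cycle is as a set of allowed status transitions. The sketch below follows the typical flow described above; exact state names vary by team and bug tracker.

```python
# Typical defect-status transitions (names vary by bug tracker).
TRANSITIONS = {
    "New":      ["Open"],
    "Open":     ["Fixed"],
    "Fixed":    ["Retest"],
    "Retest":   ["Verified", "Reopened"],
    "Reopened": ["Fixed"],
    "Verified": ["Closed"],
    "Closed":   [],
}

def can_move(current: str, target: str) -> bool:
    return target in TRANSITIONS.get(current, [])

assert can_move("Retest", "Reopened")  # a failed retest reopens the bug
assert not can_move("New", "Closed")   # a new bug isn't closed directly
```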
 

Risk-Based Testing (RBT) is a QA approach that focuses testing effort on the features with the greatest risk of failure and business impact. Testers identify the riskiest areas, test them first and most thoroughly, and allocate resources according to risk, keeping coverage both efficient and effective.
For example, in an e-commerce platform, payment processing is a high-risk area. If it fails, users can’t complete purchases. So that module would be tested early and more thoroughly than, say, a product filter.
 

Whenever a change is made, especially in a critical area like checkout, it's important to go back and rerun the tests that previously passed. This is regression testing, and it catches unintended side effects. For instance, if the checkout page is updated, you should double-check related features like cart functionality, product selection, and payment flow. If anything breaks, log it right away so it doesn't go unnoticed.
 

When a feature isn’t clearly defined, like a new search filter, it helps to start by asking the team for clarity. That could mean talking to a developer, a business analyst, or the product owner directly. You can also look at how similar filters worked in the past or refer to any user stories or mockups available. If something’s still unclear, make sure to note any assumptions you’re making so there’s a record for review later.

In agile methodology, testing doesn’t have to wait until development is done. It’s more effective to start testing alongside development, especially for features being built in the same sprint. For example, in a two-week sprint, you might begin writing test cases as soon as the user stories are defined, and then execute them once each feature is ready. Prioritizing high-risk areas and staying in sync with developers through daily check-ins helps catch issues early and reduces the gap between testing and development.

When a critical issue shows up in production, say, the payment gateway stops working, it's important to act fast. Log the bug, notify the team, and release a fix as soon as possible. Once the site or feature is stable, take time to figure out what caused the failure and how it went undetected. It's also a good idea to expand your regression suite so similar issues are caught earlier next time.

Track important metrics like the number of defects, how many test cases have been run, the pass or fail rate, and how serious the bugs are. Send out short QA updates every day or week, depending on how fast the deliverables are moving.
Use clear visuals like simple charts or dashboards to make the data easier to understand.

Example: During a sprint, if you run 50 test cases and 40 pass, 5 fail, and 5 are blocked, with 3 serious bugs still open, make sure that is highlighted in your report so the team can act quickly.
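The arithmetic behind such a report is simple enough to sanity-check in a few lines:

```python
# Working through the sprint example above.
executed, passed, failed, blocked = 50, 40, 5, 5
assert passed + failed + blocked == executed

pass_rate = passed / executed * 100   # 40 / 50 = 80%
fail_rate = failed / executed * 100   # 5 / 50 = 10%
print(f"Pass rate {pass_rate:.0f}%, fail rate {fail_rate:.0f}%, blocked {blocked}")
```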

When testing a payment system, make sure you consider what the user might go through, not just the successful path, but also what happens if the network drops, the payment fails, or the user accidentally pays twice. Cover full flows end-to-end.

When you're short on time, say you're informed just a couple of days before release, it's best to focus on core flows like login, payment, and checkout. Also, make sure everyone on the team knows their responsibilities and track progress daily to avoid missing anything critical.
 

Sometimes bugs keep popping up even after a fix. That's where Root Cause Analysis (RCA) helps. It's about figuring out what actually caused the issue, not just patching the obvious symptom.

First, try to reproduce the bug. Then look at what changed recently, maybe in the code or how the feature is used. Talk with the dev or product team if needed. Once you spot the cause, fix it properly and tweak your test cases too.

Example: Say a discount isn’t working during checkout. You dig in and find the promo rule wasn’t updated after a pricing change. Fix that and add a test so it doesn’t occur again.
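That final step, adding a test, might look like the minimal sketch below; the 10% promo rule and apply_discount() are made up for illustration.

```python
# Stand-in for the checkout discount logic; the 10% promo is hypothetical.
def apply_discount(total: float, promo_active: bool) -> float:
    return round(total * 0.9, 2) if promo_active else total

def test_active_promo_discounts_checkout_total():
    # Pins the behavior that broke before, so any regression is caught early.
    assert apply_discount(100.0, promo_active=True) == 90.0
```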

We've compiled a list of important manual testing questions, from fundamental topics like the SDLC, test cases, and defect tracking, to more advanced areas such as risk-based testing, regression, and root cause analysis. If you're looking to find the right manual testing professional for your team, WAC can assist you in hiring experienced manual testing engineers. And if you're searching for a new opportunity, be sure to visit our careers page to see the latest job openings.

