UAT Best Practices: How to Run User Acceptance Testing That Actually Works
A practical guide to planning, executing, and closing UAT cycles without the usual chaos
- What UAT Is (And What It Is Not)
- Defining Clear Acceptance Criteria
- Designing UAT Test Cases
- Setting Up the UAT Environment
- Managing Stakeholders During UAT
What UAT Is (And What It Is Not)
User Acceptance Testing is the final validation phase before software goes live. It answers one question: does this product meet the business requirements we agreed on?
UAT is not a substitute for QA testing. By the time a build reaches UAT, it should already be functionally stable, reasonably free of bugs, and tested against technical specifications. UAT is not the place to discover that the login form throws a 500 error — that should have been caught in QA.
What UAT is designed to catch:
- Misunderstood requirements: The feature works exactly as specified, but the specification was wrong — it does not match what the business actually needs.
- Missing workflows: The individual features all work, but the end-to-end business process has a gap. Example: the checkout flow works, but there is no way for a customer to add a purchase order number, which 30% of B2B customers require.
- Usability issues: The feature is technically correct but confusing or inefficient for the people who will use it daily.
- Data and integration issues: The feature works with test data but behaves unexpectedly with real-world data volumes, formats, or edge cases from connected systems.
UAT participants are typically business stakeholders, product owners, subject-matter experts, or actual end users — not developers or QA engineers. The testers' value comes from their domain knowledge, not their technical skills.
Defining Clear Acceptance Criteria
UAT fails most often because acceptance criteria were never clearly defined. If nobody wrote down what "done" looks like, every stakeholder will have a different opinion, and sign-off becomes a negotiation instead of a verification.
What good acceptance criteria look like:
Use the Given/When/Then format for clarity and testability:
- Given a logged-in user with items in their cart, When they click "Checkout" and complete the payment form with a valid credit card, Then an order confirmation page displays with the order number, a confirmation email is sent within 60 seconds, and the order appears in the admin dashboard with status "Paid."
Compare this to vague criteria like "checkout should work" — which is untestable because "work" is undefined.
Acceptance criteria should be:
- Specific: State exact expected behavior, not general outcomes
- Measurable: Include numbers where relevant (response time under 3 seconds, email sent within 60 seconds)
- Agreed upon: Signed off by the product owner or client before development begins, not defined retroactively during UAT
- Independent: Each criterion can be tested on its own without depending on other criteria being true
Template for acceptance criteria documentation:
For each feature or user story, document:
- Feature name and description
- Acceptance criteria (3-8 per feature, using Given/When/Then)
- Out of scope: explicitly state what this feature does not do, to prevent scope creep during UAT
- Dependencies: other features, integrations, or data that must be in place
- Test data requirements: what data the UAT tester will need to execute the tests
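One lightweight way to keep this template consistent across features is a structured record. The sketch below is illustrative only — the field names and the example feature are assumptions, not a standard; adapt them to your own tracker or wiki:

```python
# A sketch of the documentation template as a structured record.
# Field names and example values are illustrative, not a standard.
feature_spec = {
    "name": "Checkout with credit card",
    "description": "Logged-in users can pay for their cart by card.",
    "acceptance_criteria": [
        {
            "given": "a logged-in user with items in their cart",
            "when": 'they complete the payment form with a valid card and click "Place Order"',
            "then": "an order confirmation page displays with the order number",
        },
        # 3-8 criteria per feature, one dict each
    ],
    "out_of_scope": ["PayPal payments", "gift cards"],
    "dependencies": ["payment gateway sandbox", "email service test inbox"],
    "test_data": ["test account credentials", "Stripe test card numbers"],
}

# Quick sanity check that every criterion is fully specified:
for criterion in feature_spec["acceptance_criteria"]:
    assert criterion.keys() >= {"given", "when", "then"}, f"Incomplete: {criterion}"
```

Keeping the record machine-readable makes it easy to generate a UAT checklist or a coverage report from the same source the team signed off on.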
Write acceptance criteria during sprint planning or requirements gathering, not during UAT. If you are writing acceptance criteria for the first time during UAT, you are too late — you are now in a negotiation about scope, not a test.
Designing UAT Test Cases
UAT test cases should be written in plain language that a non-technical stakeholder can follow. If your test case includes instructions like "clear the browser cache" or "inspect the network tab," it is too technical for UAT.
UAT test case structure:
- Test ID: A unique identifier (e.g., UAT-CHECKOUT-001)
- Feature: The feature or user story being tested
- Preconditions: What must be true before the test starts (user is logged in, cart has 2+ items, test credit card number is available)
- Steps: Numbered, step-by-step instructions a non-technical person can follow
- Expected result: What should happen after each significant step and at the end of the test
- Actual result: Filled in by the tester during execution
- Status: Pass / Fail / Blocked
- Notes: Free text for observations, screenshots, or concerns
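As a sketch, the structure above can be captured in a small record type that also renders the numbered, plain-language checklist handed to testers. Field names here are assumptions for illustration, not a standard:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class UatTestCase:
    """One row of the UAT test case structure described above (illustrative)."""
    test_id: str                          # e.g. "UAT-CHECKOUT-001"
    feature: str
    preconditions: List[str]
    steps: List[str]                      # plain language, one action per step
    expected_result: str
    actual_result: Optional[str] = None   # filled in by the tester
    status: str = "Not Run"               # Pass / Fail / Blocked
    notes: str = ""

    def as_checklist(self) -> str:
        """Render numbered, non-technical instructions for the tester."""
        lines = [f"{self.test_id}: {self.feature}"]
        lines += [f"Precondition: {p}" for p in self.preconditions]
        lines += [f"{i}. {step}" for i, step in enumerate(self.steps, 1)]
        lines.append(f"Expected: {self.expected_result}")
        return "\n".join(lines)
```

Numbering the steps at render time, rather than in the data, means reordering or inserting a step never leaves stale numbers in the document.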
Example test case:
UAT-CHECKOUT-001: Complete a purchase with a credit card
Preconditions: You are logged in as testuser@example.com. Your cart contains at least one item.
1. Navigate to the shopping cart page
2. Verify the cart shows the correct items, quantities, and prices
3. Click "Proceed to Checkout"
4. On the shipping page, enter a valid US address and click "Continue"
5. On the payment page, enter test card number 4242 4242 4242 4242, any future expiration date, and any 3-digit CVC
6. Click "Place Order"
7. Verify: an order confirmation page appears with an order number
8. Check your email inbox: a confirmation email should arrive within 60 seconds, as the acceptance criterion specifies
9. Open the confirmation email and verify the order details match
Tips for effective test cases:
- Write 15-30 test cases for a typical feature release; more for large projects
- Cover both happy paths and critical alternate paths (what happens if the credit card is declined?)
- Do not try to cover every edge case — that is QA's job. UAT should focus on real-world business scenarios
- Group test cases into logical suites: "Checkout flow," "Account management," "Reporting," etc.
- Include at least one end-to-end scenario that crosses multiple features (e.g., register → browse → add to cart → checkout → view order history)
Setting Up the UAT Environment
The UAT environment is where your testers will work. Getting it wrong creates noise — bugs that are environment problems, not product problems — and erodes tester confidence.
UAT environment requirements:
- Mirrors production: Same server configuration, same CDN, same third-party integrations (payment gateway in sandbox mode, email service connected to a test inbox, etc.)
- Stable: Do not deploy to the UAT environment while testing is in progress unless absolutely necessary — and if you must, notify all testers. Ideally, freeze deployments for the UAT window.
- Isolated: UAT testers should not share the environment with automated tests, developers debugging, or QA testers running their own cycles simultaneously. Crosstalk creates confusion.
- Pre-populated with realistic data: Empty dashboards and blank databases make it impossible to test reporting, search, filtering, and pagination. Load the environment with realistic (but anonymized) data before UAT begins.
- Accessible: Testers should be able to access the environment without VPN configuration, SSH tunnels, or other technical setup. If the environment requires special access, provide clear instructions and verify access for every tester before UAT begins.
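On the "realistic but anonymized" point, one common trick is deterministic pseudonymization: hash the real value so personally identifiable information disappears, but the same customer still maps to the same fake address in every seeded table. A minimal sketch, with an assumed placeholder domain:

```python
import hashlib

def pseudonymize_email(real_email: str, domain: str = "uat-test.example.com") -> str:
    """Replace a real address with a stable, anonymized stand-in.

    Hashing the lowercased address makes the mapping deterministic, so
    foreign-key style references between seeded tables stay consistent
    while the original PII is removed. The domain is a placeholder.
    """
    digest = hashlib.sha256(real_email.lower().encode()).hexdigest()[:10]
    return f"user-{digest}@{domain}"

# The same input always yields the same pseudonym:
assert pseudonymize_email("Jane@Corp.com") == pseudonymize_email("jane@corp.com")
```

Routing all pseudonymized addresses to a single catch-all test domain also guarantees that no seeded record can accidentally email a real customer.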
Test account preparation:
Create dedicated test accounts for each UAT tester with the following ready to go:
- Correct role and permissions for the scenarios they will test
- Pre-populated data where needed (past orders, existing projects, sample content)
- Credentials documented in a shared, secure location (not in a Slack message that will scroll away)
- Test payment credentials (Stripe test cards, PayPal sandbox accounts, etc.)
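A provisioning script can tie these pieces together so no tester starts with a half-configured account. The sketch below only assembles the account record — the role names, email pattern, and suite assignments are hypothetical, and in practice you would create the account through your application's own admin API or seed script:

```python
import secrets

# Hypothetical mapping of UAT roles to the test suites they should run.
ROLE_SCENARIOS = {
    "store_manager": ["Reporting", "Refunds"],
    "customer": ["Checkout flow", "Order history"],
}

def provision_test_account(tester_name: str, role: str) -> dict:
    """Build one UAT account record (illustrative shape, not a real API)."""
    if role not in ROLE_SCENARIOS:
        raise ValueError(f"Unknown UAT role: {role}")
    slug = tester_name.lower().replace(" ", ".")
    return {
        "login": f"uat+{slug}@example.com",
        # Generated once, then stored in a password manager -- never in chat.
        "password": secrets.token_urlsafe(12),
        "role": role,
        "assigned_suites": ROLE_SCENARIOS[role],
    }
```

Rejecting unknown roles up front catches the classic mistake of a tester receiving an account that cannot reach the screens their test cases cover.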
Common environment mistakes:
- UAT environment is running a different code version than what was QA-tested
- Third-party services are in production mode instead of sandbox mode (resulting in real charges, real emails, real API calls)
- SSL certificate issues that cause browser warnings and confuse non-technical testers
- Environment is behind a corporate VPN that external client testers cannot access
Managing Stakeholders During UAT
UAT is as much a communication exercise as a testing exercise. Stakeholders who feel uninformed, rushed, or ignored will delay sign-off, escalate minor issues, or — worst case — lose confidence in the project entirely.
Before UAT begins:
- Hold a UAT kickoff meeting (30-60 minutes). Walk testers through what has changed, what they should focus on, and how to report issues. Show them the environment and verify they can log in.
- Distribute a UAT guide document that includes: timeline, environment URL, credentials, test case list, how to report bugs, who to contact for help, and the sign-off process.
- Set clear expectations about timeline: "UAT runs from Monday March 18 to Friday March 22. We need your results by end of day Friday." Open-ended UAT drags on indefinitely.
- Define what is a blocker vs. what is not. Agree upfront: a cosmetic issue (button color is slightly off) will be logged but will not block launch. A broken checkout flow will block launch.
During UAT:
- Send a daily status update: X test cases executed, Y passed, Z failed, N blocked. Keep it factual and brief.
- Triage reported issues promptly. Nothing frustrates a tester more than reporting a bug and hearing nothing for three days.
- Distinguish between defects (the software does not match the agreed acceptance criteria), change requests (the stakeholder wants something different from what was agreed), and environment issues (the bug is in the test setup, not the product). Track them separately.
- If a tester is stuck or confused, help them immediately. UAT time is expensive — the tester is often a senior business stakeholder whose calendar is packed. Do not waste their time with a broken test account.
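The daily status line is trivial to generate from whatever tracks test-case results. A minimal sketch, assuming results live in a simple status map:

```python
from collections import Counter

def daily_status(results: dict) -> str:
    """Summarize statuses into the one-line daily update.

    `results` maps test ID -> status: "Pass", "Fail", "Blocked", or "Not Run".
    """
    counts = Counter(results.values())
    executed = counts["Pass"] + counts["Fail"]
    return (f"{executed} of {len(results)} test cases executed: "
            f"{counts['Pass']} passed, {counts['Fail']} failed, "
            f"{counts['Blocked']} blocked.")

print(daily_status({
    "UAT-CHECKOUT-001": "Pass",
    "UAT-CHECKOUT-002": "Fail",
    "UAT-REPORTS-001": "Blocked",
    "UAT-REPORTS-002": "Not Run",
}))
# -> 2 of 4 test cases executed: 1 passed, 1 failed, 1 blocked.
```

Automating this removes any excuse to skip the update on a busy day, which is exactly when stakeholders most want to hear how testing is going.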
Handling scope creep:
UAT inevitably surfaces requests like "while we are at it, can we also add..." This is natural — seeing the software in near-final form triggers new ideas. Handle it with a consistent process:
- Thank the stakeholder for the feedback
- Log it as a change request, not a defect
- Confirm it is out of scope for the current release
- Add it to the product backlog for prioritization in a future sprint
This respects the stakeholder's input without derailing the current release.
Defect Tracking and Triage
How you handle defects during UAT determines whether the process runs smoothly or devolves into chaos. The goal is to capture enough information to reproduce and fix each issue, prioritize effectively, and communicate resolution status back to the reporter.
What a good UAT bug report includes:
- Summary: One sentence describing the problem ("Checkout fails with error when shipping address has an apartment number")
- Steps to reproduce: Exact sequence a developer can follow to see the bug
- Expected behavior: What should have happened
- Actual behavior: What actually happened
- Screenshot or screen recording: Visual evidence of the issue
- Browser and device: Captured automatically if using a visual feedback tool like Marker.io
- Severity: Critical (blocks business process), Major (significant issue with workaround), Minor (cosmetic or low-impact), Trivial (nitpick)
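Making severity an ordered type lets the triage meeting sort worst-first mechanically. A sketch of the report shape above (field names are illustrative; map them onto your tracker's fields):

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import List

class Severity(IntEnum):
    """Higher value = more severe, so reports sort naturally for triage."""
    TRIVIAL = 1
    MINOR = 2
    MAJOR = 3
    CRITICAL = 4

@dataclass
class BugReport:
    summary: str
    steps_to_reproduce: List[str]
    expected: str
    actual: str
    severity: Severity
    browser: str = "unknown"  # ideally captured automatically by the feedback tool

reports = [
    BugReport("Button color off-brand", ["Open home page"],
              "Brand blue button", "Teal button", Severity.MINOR),
    BugReport("Checkout fails for addresses with apartment numbers",
              ["Add item", "Checkout", "Enter address with apt. number"],
              "Order placed", "Error page shown", Severity.CRITICAL),
]
reports.sort(key=lambda r: r.severity, reverse=True)  # triage worst-first
```

An `IntEnum` also prevents free-text severities ("kinda bad") from creeping into reports, which keeps the triage queue sortable.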
Reducing friction for non-technical testers:
Non-technical UAT testers will not voluntarily open browser dev tools or write detailed reproduction steps. This is where visual feedback tools earn their value. Tools like Marker.io, BugHerd, and Userback let testers click a button, annotate a screenshot, type a description, and submit — while the tool automatically captures the URL, browser version, viewport size, console errors, and network metadata. This dramatically improves bug report quality without burdening the tester.
Triage process:
Run a daily triage meeting (15-30 minutes) during the UAT window with the project manager, tech lead, and QA lead:
- Review new defects reported since last triage
- Categorize each as: defect, change request, environment issue, duplicate, or cannot reproduce
- Assign severity and priority
- Assign to a developer for resolution
- Update the UAT tester on the status of their previously reported issues
Resolution statuses:
- Open: Reported but not yet triaged
- Confirmed: Triaged and accepted as a valid defect
- In Progress: Developer is working on a fix
- Fixed — Ready for Retest: Fix deployed to UAT environment, ready for the original reporter to verify
- Verified: Tester confirmed the fix works
- Closed: Issue resolved
- Deferred: Valid issue, but will not be fixed in this release (with documented justification)
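These statuses form a small state machine, and encoding the allowed transitions catches workflow mistakes (a defect jumping from Open straight to Verified, say). This is one plausible encoding of the workflow above — adapt the transitions to your tracker's states:

```python
# Allowed transitions between the resolution statuses described above.
TRANSITIONS = {
    "Open":        {"Confirmed", "Closed"},   # closed directly if duplicate / cannot reproduce
    "Confirmed":   {"In Progress", "Deferred"},
    "In Progress": {"Fixed - Ready for Retest"},
    "Fixed - Ready for Retest": {"Verified", "Confirmed"},  # back to Confirmed if retest fails
    "Verified":    {"Closed"},
    "Deferred":    set(),                     # revisited in a future release
    "Closed":      set(),
}

def move(status: str, new_status: str) -> str:
    """Validate a status change against the workflow before applying it."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"Cannot move defect from {status!r} to {new_status!r}")
    return new_status
```

Note the retest loop: a failed retest sends the defect back to Confirmed rather than leaving it half-fixed, and only the Verified path reaches Closed.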
The Sign-Off Process
Sign-off is the formal acknowledgment that the software meets the agreed acceptance criteria and is approved for production deployment. Without a clear sign-off process, UAT becomes an indefinite loop of "one more thing."
Sign-off criteria should be defined before UAT begins:
- All critical and major defects are resolved and verified
- All test cases have been executed (100% coverage of the agreed test suite)
- Pass rate meets the agreed threshold (typically 95%+ of test cases passing)
- All deferred issues are documented with justification and a plan for resolution
- Any known issues going to production are documented and accepted by the product owner
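Because the criteria are binary, the readiness check itself can be a few lines. A sketch using the commonly cited 95% threshold — agree on your own numbers before UAT starts:

```python
def ready_for_signoff(total: int, passed: int, failed: int,
                      open_serious_defects: int,
                      pass_threshold: float = 0.95) -> bool:
    """Binary check of the exit criteria listed above (threshold is an example)."""
    executed = passed + failed
    if executed < total:            # 100% of the agreed test suite must be executed
        return False
    if open_serious_defects:        # all critical/major defects resolved and verified
        return False
    return passed / total >= pass_threshold

# 29 of 30 passing (96.7%), everything executed, no serious defects open:
assert ready_for_signoff(total=30, passed=29, failed=1, open_serious_defects=0)
```

The point is not the code but the shape: if the answer ever depends on a judgment call at sign-off time, the exit criteria were not defined tightly enough.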
Sign-off document template:
- Project name and release version
- UAT period (start date — end date)
- Summary of test execution: total test cases, passed, failed, blocked, deferred
- List of resolved defects
- List of deferred defects with justification
- List of known issues going to production
- Sign-off statement: "I confirm that the software meets the agreed acceptance criteria for [release name] and approve deployment to production."
- Signature lines for each required stakeholder
Who signs off?
Typically the product owner or client project lead. For large projects, you may need sign-off from multiple stakeholders representing different business areas. Define the required signatories before UAT begins — adding new signatories mid-process delays everything.
What if sign-off is refused?
If a stakeholder refuses to sign off, you need to understand why:
- If there are unresolved critical defects: fix them, retest, and request sign-off again
- If the stakeholder wants features that were not in the original scope: escalate to the project sponsor, document the change request, and negotiate timeline separately
- If the stakeholder is uncomfortable but cannot articulate specific issues: schedule a walkthrough session where they show you their concerns in real time
Common UAT Pitfalls and How to Avoid Them
After you run UAT on dozens of projects, the same patterns emerge. These are the most common ways UAT goes wrong and what to do about each.
1. UAT starts with no acceptance criteria
Symptom: Testers are guessing what to test. Feedback is a mix of legitimate bugs, design preferences, and feature requests. There is no objective way to determine pass/fail.
Fix: Go back and write acceptance criteria, even retroactively. It is late, but better than continuing without them. In future projects, mandate that acceptance criteria are approved before development begins.
2. The UAT environment is broken or unrealistic
Symptom: Testers report bugs that are actually environment issues. Time is wasted reproducing and triaging false positives. Testers lose confidence and disengage.
Fix: Dedicate time to verify the UAT environment end-to-end before inviting testers. Run through the key test cases yourself first. Fix environment issues before opening the floodgates.
3. Testers are too busy to actually test
Symptom: The UAT window passes with 20% of test cases executed. Testers say they "did not have time" because it was not in their calendar or priorities.
Fix: Get UAT time committed on testers' calendars during project planning, not one week before launch. Set specific daily time blocks (e.g., 2 hours per day for 5 days) rather than asking for "some time this week." Send daily reminders with progress stats.
4. Scope creep disguised as defects
Symptom: The defect list keeps growing even after fixes are deployed, because stakeholders keep adding new requirements.
Fix: Categorize every reported issue as defect, change request, or enhancement during triage. Only defects (deviations from agreed acceptance criteria) can block sign-off. Change requests go to the backlog.
5. No definition of "done"
Symptom: UAT has been running for three weeks. Nobody knows when it will end. The launch date keeps slipping.
Fix: Define exit criteria before UAT starts. Time-box the UAT window. Make sign-off criteria explicit and binary — either the criteria are met or they are not.
6. Bug reports lack detail
Symptom: Developers cannot reproduce issues from UAT reports. Back-and-forth communication doubles the resolution time.
Fix: Provide testers with a visual feedback tool that captures context automatically. Train testers on what a good bug report looks like — even a 5-minute walkthrough during the kickoff meeting helps.
UAT Timeline Template
Here is a realistic timeline for a UAT cycle on a medium-complexity web project (10-30 features, 5-10 UAT testers).
Week -2: Preparation
- Finalize acceptance criteria for all features (should already exist from sprint planning)
- Write UAT test cases (15-30 test cases covering critical paths)
- Prepare UAT environment and test accounts
- Distribute UAT guide to testers
- Schedule kickoff meeting and tester calendar blocks
Week -1: QA verification and environment validation
- Internal team runs through all UAT test cases to verify they are executable and the environment is stable
- Fix any blocking issues found during dry run
- Confirm all testers have access and credentials work
Day 1: Kickoff
- 30-60 minute kickoff meeting: walk through changes, demonstrate key features, explain reporting process
- Testers begin executing test cases
- QA lead available for support throughout the day
Days 2-4: Test execution
- Testers work through assigned test cases at their own pace
- Daily 15-minute triage standup to review new defects
- Developers fix critical and major defects as they are triaged
- Daily status email to all stakeholders
Day 5: Retest and wrap-up
- Testers verify fixed defects
- Final triage of remaining issues: resolve, defer, or reclassify
- Compile sign-off document
Day 6: Sign-off
- Present UAT results to stakeholders
- Obtain formal sign-off
- Schedule production deployment
Adjust this timeline based on project size: a small project (3-5 features, 2-3 testers) might compress testing to 3 days, while a large project (50+ features, 20+ testers) might need 2-3 weeks.
Frequently Asked Questions
Who should perform UAT testing?
UAT should be performed by people who represent the end users of the software — typically business stakeholders, product owners, subject-matter experts, or actual end users. They should have domain knowledge of the business processes the software supports. Developers and QA engineers should not perform UAT because they lack the fresh perspective and domain-specific expectations that make UAT valuable.
How many UAT testers do you need?
For most web projects, 3-8 UAT testers is ideal. Fewer than 3 risks missing important perspectives. More than 10 creates coordination overhead that often outweighs the additional coverage. Choose testers who represent different user roles, business units, or usage patterns. Quality of testers matters more than quantity — one engaged stakeholder who knows the business process deeply is worth five testers who click around randomly.
What happens if UAT finds too many bugs?
If UAT surfaces a high volume of defects, it usually indicates that QA testing was insufficient or that requirements were poorly defined. Stop UAT, fix the critical issues, run another QA cycle, and then restart UAT on the stabilized build. Continuing UAT on a broken build wastes testers' time and erodes their confidence. This is painful but less painful than launching a product that fails in production.
Can UAT be automated?
The execution of UAT test cases can be partially automated using tools like Selenium, Playwright, or Cypress, but the judgment aspect of UAT — does this make sense for our business? — cannot be automated. Automated end-to-end tests are better classified as regression tests or integration tests. True UAT requires human evaluation by someone who understands the business context.
How do you handle UAT for agile or continuous delivery teams?
In agile teams, UAT can happen within each sprint rather than as a separate phase. The product owner reviews and accepts completed stories during the sprint, using the acceptance criteria defined during planning. For continuous delivery, consider a staged rollout: deploy to a subset of users (canary deployment or feature flags), gather feedback, and iterate. The key principle remains the same — someone with business authority verifies the software meets requirements before full release.
Resources and Further Reading
- ISTQB Foundation Level Syllabus - the international standard for software testing certification, including UAT processes
- Atlassian: User Acceptance Testing Guide - Atlassian's overview of testing types, including UAT in the context of continuous delivery
- Marker.io - Visual Feedback for Websites - visual bug reporting tool that captures annotated screenshots with browser metadata, ideal for UAT testers
- TestRail - Test Case Management - test management platform for organizing UAT test cases, tracking execution, and generating reports