
The Complete Website QA Checklist (2026)

Everything your team needs to check before shipping

Last updated: 2026-03-17
Contents
  • Why You Need a Structured QA Checklist
  • 1. Functional Testing
  • 2. Visual and UI Testing
  • 3. Forms and Input Handling
  • 4. Responsive Design and Cross-Browser Testing
  • 5. Accessibility Testing (Baseline)
  • 6. Performance Testing
  • 7. SEO and Metadata Checks
  • 8. Security Checks (QA-Level)
  • 9. Content and Copy Review
  • 10. Running the Checklist: Process Tips
  • Frequently Asked Questions

Why You Need a Structured QA Checklist

Every web team has shipped something broken. A form that silently fails. A page that looks fine on your laptop but collapses on an iPhone SE. A checkout flow that works in Chrome but throws a JavaScript error in Safari. These are not edge cases — they are the predictable result of testing without a system.

A structured QA checklist does three things that ad-hoc testing cannot:

  • Consistency: Every release gets the same baseline scrutiny, regardless of who is testing or how much time pressure the team is under.
  • Coverage: Categories of defects that are easy to forget — accessibility, SEO metadata, security headers — get checked every time.
  • Accountability: When something does slip through, the team can identify whether the checklist missed it (a process gap) or whether the checklist was skipped (a discipline gap). Both are fixable.

This checklist is organized into ten categories. Not every item applies to every project — a marketing landing page has different needs than a SaaS dashboard — but the categories themselves are universal. Use this as a starting template and adapt it to your team's stack and risk profile.

1. Functional Testing

Functional testing verifies that features do what they are supposed to do. This is the most intuitive category of QA — clicking buttons, submitting forms, navigating between pages — but it is also where teams most often cut corners by testing only the happy path.

Core checkpoints:

  • All navigation links resolve to the correct destination (no 404s, no incorrect routes)
  • All buttons trigger their intended action (submit, cancel, delete, toggle, etc.)
  • All forms submit successfully with valid data and return appropriate confirmation
  • All forms reject invalid data with clear, specific error messages (not just "Invalid input")
  • Required fields are enforced — test by leaving each required field blank individually
  • File upload fields accept the correct file types and reject others; test with oversized files
  • Search functionality returns relevant results and handles empty queries gracefully
  • Pagination works correctly — first page, last page, middle pages, and edge cases like page 0 or page 99999
  • Sorting and filtering controls produce correct results and persist across pagination
  • Authentication flows work end-to-end: registration, login, logout, password reset, email verification
  • Role-based access control is enforced — verify that a regular user cannot access admin routes by manually entering the URL
  • Third-party integrations (payment gateways, CRMs, analytics events) fire correctly
  • Session expiration and timeout behavior works as intended
  • Deep links and bookmarked URLs resolve correctly, including those with query parameters or hash fragments

Tip: For each feature, test at least three scenarios: the happy path, one invalid input, and one boundary condition (e.g., maximum character length, zero items in a cart, the 10,001st record).
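The three-scenario pattern can be made concrete with a small sketch. The validate_quantity function and its 1-99 limit below are hypothetical, standing in for whatever validation rule your form actually enforces:

```python
def validate_quantity(raw: str, max_qty: int = 99) -> tuple[bool, str]:
    """Validate a hypothetical cart quantity field: returns (ok, message)."""
    if not raw.strip().isdigit():
        return False, "Quantity must be a whole number"    # invalid input
    qty = int(raw)
    if qty < 1:
        return False, "Quantity must be at least 1"        # boundary: zero items
    if qty > max_qty:
        return False, f"Quantity cannot exceed {max_qty}"  # boundary: over the max
    return True, "OK"

# The three scenarios from the tip above:
cases = [
    ("3", True),     # happy path
    ("abc", False),  # invalid input
    ("100", False),  # boundary condition (just over the maximum)
]
for raw, expected_ok in cases:
    ok, _ = validate_quantity(raw)
    assert ok is expected_ok
```

The same table-of-cases shape transfers directly to a parametrized test in whatever framework your team uses.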

2. Visual and UI Testing

Visual bugs are among the most commonly reported issues in UAT because stakeholders notice them immediately. A misaligned button or a truncated heading undermines confidence in the entire product, even if the underlying logic is flawless.

Core checkpoints:

  • Typography is consistent: font families, sizes, weights, and line heights match the design system
  • Color values match specifications — check both light and dark modes if applicable
  • Spacing and alignment follow the established grid; look for inconsistent padding and margins
  • Images and icons render at the correct size and resolution; verify retina/2x assets load on high-DPI screens
  • No content is clipped, overlapping, or hidden behind other elements
  • Hover, focus, active, and disabled states are visually distinct and match design specs
  • Loading states (spinners, skeleton screens, progress bars) display during async operations
  • Empty states are designed — not just blank white space — for lists, search results, dashboards, and inboxes
  • Error states are visually clear: form validation errors, failed API calls, 404 and 500 pages
  • Animations and transitions are smooth (no jank), purposeful, and respect prefers-reduced-motion
  • Favicon, Open Graph images, and social sharing previews render correctly
  • Print stylesheet exists and produces usable output for pages users are likely to print (invoices, articles, receipts)

Tool tip: Visual regression testing tools like Percy, Chromatic, or BackstopJS can automate screenshot comparisons between builds. They do not replace human review, but they catch unintended changes in components that nobody thought to manually check.

3. Forms and Input Handling

Forms are where users hand you their data and their trust. A broken form is not just a bug — it is a lost lead, a failed transaction, or a support ticket. Test forms more thoroughly than you think is necessary.

Core checkpoints:

  • Tab order follows a logical sequence through the form (left to right, top to bottom, grouped logically)
  • Auto-complete attributes are set correctly (autocomplete="email", autocomplete="tel", etc.) so browsers and password managers can assist users
  • Input masks and formatting helpers (phone numbers, credit cards, dates) work correctly and do not fight the user's input
  • Character limits are enforced on both the client and the server — test by bypassing client-side validation via browser dev tools
  • Pasting into fields works correctly, including fields with input masks
  • Multi-step forms preserve data when navigating back to a previous step
  • Multi-step forms handle browser back button behavior gracefully
  • Form submission is protected against double-clicks (disable the button after first click or use idempotency keys)
  • CAPTCHA or anti-bot measures work correctly and do not block legitimate users
  • File upload fields show progress indicators for large files
  • Dropdown menus with long lists are searchable or otherwise usable (a dropdown with 200 countries and no search is hostile)
  • Date pickers handle timezone differences, internationalized date formats, and edge dates (Feb 29, Dec 31)
  • Conditional form fields (show field B when field A has value X) toggle correctly and do not submit hidden field values
  • Success confirmation is clear: what happened, what happens next, and what action the user can take

Common miss: Many teams test forms in isolation but not in context. Test form submission from the actual page where it appears, with real-ish data, including international characters (accents, CJK characters, right-to-left scripts) and special characters that could cause escaping issues (&, <, ", ').
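The escaping concern above can be demonstrated with Python's stdlib: user-supplied text must be escaped before being reflected into HTML, and correct escaping must not mangle international characters. This is a minimal sketch of the check, not a full sanitizer, and render_greeting is an illustrative function:

```python
from html import escape

def render_greeting(name: str) -> str:
    """Reflect user input into HTML safely by escaping it first."""
    return f"<p>Hello, {escape(name, quote=True)}!</p>"

# Special characters that cause escaping issues are neutralized...
assert render_greeting("<script>alert(1)</script>") == \
    "<p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>"
# ...while international characters pass through untouched.
assert render_greeting("Zoë 山田") == "<p>Hello, Zoë 山田!</p>"
```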

4. Responsive Design and Cross-Browser Testing

Responsive testing is not just "make the window smaller." It is verifying that your application is usable on the devices your actual users use, in the browsers they actually run. Start with analytics data to identify your top device/browser combinations, and test those first.

Core checkpoints:

  • Test at standard breakpoints: 320px (small phone), 375px (iPhone), 414px (large phone), 768px (tablet portrait), 1024px (tablet landscape / small laptop), 1280px (laptop), 1440px (desktop), 1920px (large desktop)
  • Navigation is usable on all breakpoints — hamburger menus open/close, submenus are reachable, active states are visible
  • Touch targets are at least 44x44 CSS pixels on mobile (Apple's HIG recommendation; note that WCAG 2.5.8 requires only 24x24 at Level AA, while 2.5.5 sets 44x44 at Level AAA)
  • No horizontal scrolling occurs at any standard viewport width (unless explicitly designed, e.g., data tables with horizontal scroll)
  • Text remains readable without zooming on mobile — minimum 16px for body text is a safe baseline
  • Images scale correctly and do not overflow their containers
  • Tables either scroll horizontally within a container or reflow into a stacked layout on small screens
  • Modals, popovers, and tooltips are fully visible and dismissible on small screens
  • Fixed or sticky elements (headers, CTAs, chat widgets) do not obscure content on small viewports
  • Test in real browsers, not just Chrome DevTools device emulation: Safari on iOS (WebKit rendering differences are real), Chrome on Android, Firefox, Edge
  • Test with both portrait and landscape orientations on tablets
  • Verify that viewport meta tag is set correctly: <meta name="viewport" content="width=device-width, initial-scale=1">
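The touch-target rule above can be checked mechanically once you have element bounding boxes, for example exported from DevTools or a browser-automation run. The data shape here (a list of dicts with selector, width, height) is an assumption for illustration:

```python
MIN_TARGET = 44  # CSS pixels, per Apple's HIG

def undersized_targets(elements: list[dict], minimum: int = MIN_TARGET) -> list[str]:
    """Return selectors of interactive elements smaller than minimum x minimum."""
    return [
        el["selector"]
        for el in elements
        if el["width"] < minimum or el["height"] < minimum
    ]

boxes = [
    {"selector": "button.buy", "width": 48, "height": 48},
    {"selector": "a.footer-link", "width": 60, "height": 18},  # too short to tap
]
assert undersized_targets(boxes) == ["a.footer-link"]
```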

Browser-specific issues to watch for:

  • Safari: position: sticky inside overflow containers, date input rendering, 100vh measuring the viewport behind the collapsing address bar (consider the dvh/svh units), Web Push notification differences
  • Firefox: Slightly different form element rendering, scrollbar styling limitations
  • Edge: Generally matches Chrome (both Chromium-based), but test if you use any Chromium-specific APIs
  • Samsung Internet: Non-trivial market share on Android; test if your analytics show significant Samsung traffic

Use a service like BrowserStack or LambdaTest for device coverage you cannot maintain in-house.

5. Accessibility Testing (Baseline)

This section covers baseline accessibility checks that every QA team should perform. For a deep dive into WCAG compliance, see our WCAG 2.2 Compliance Guide.

Core checkpoints:

  • All images have alt attributes — descriptive for informative images, empty (alt="") for decorative images
  • The page has exactly one <h1>, and heading levels do not skip (no jumping from <h2> to <h4>)
  • All form inputs have associated <label> elements (using for/id pairing, not just placeholder text)
  • Color contrast ratios meet WCAG AA minimums: 4.5:1 for normal text, 3:1 for large text (at least 24px regular or roughly 18.7px bold)
  • Interactive elements are reachable and operable via keyboard alone — tab through the entire page and verify every control
  • Focus indicators are visible on all interactive elements (do not use outline: none without a replacement)
  • ARIA attributes are used correctly — aria-label, aria-expanded, aria-hidden — and are not used as a substitute for semantic HTML
  • Page landmarks are present: <header>, <nav>, <main>, <footer>
  • Skip navigation link exists and works (hidden visually but accessible to screen readers and keyboard users)
  • Dynamic content changes (notifications, live regions, error messages) are announced to screen readers using aria-live
  • Video content has captions; audio content has transcripts

Quick automated scan: Run axe DevTools or the Lighthouse accessibility audit on every page. These tools catch approximately 30-40% of accessibility issues automatically. The remaining 60-70% require manual testing — keyboard navigation, screen reader testing, and cognitive review.
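Some of the structural checks above lend themselves to scripting. As a sketch, this stdlib-only checker enforces the one-<h1>, no-skipped-levels rules; it complements tools like axe, it does not replace them:

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Collect heading levels (h1-h6) in document order."""
    def __init__(self):
        super().__init__()
        self.levels: list[int] = []

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.levels.append(int(tag[1]))

def heading_problems(html: str) -> list[str]:
    """Return human-readable problems with the page's heading hierarchy."""
    audit = HeadingAudit()
    audit.feed(html)
    problems = []
    if audit.levels.count(1) != 1:
        problems.append(f"expected exactly one <h1>, found {audit.levels.count(1)}")
    for prev, cur in zip(audit.levels, audit.levels[1:]):
        if cur > prev + 1:  # e.g. jumping from <h2> straight to <h4>
            problems.append(f"heading skips from <h{prev}> to <h{cur}>")
    return problems

assert heading_problems("<h1>A</h1><h2>B</h2><h3>C</h3>") == []
assert heading_problems("<h1>A</h1><h4>B</h4>") == ["heading skips from <h1> to <h4>"]
```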

Screen reader testing (even a quick pass helps):

  • macOS: VoiceOver (built in, activate with Cmd + F5)
  • Windows: NVDA (free) or JAWS
  • Mobile: VoiceOver on iOS, TalkBack on Android

6. Performance Testing

Performance is a feature. A page that takes 6 seconds to become interactive on a mid-range phone over a 4G connection will lose users regardless of how polished the design is. Performance testing should be a standard part of every QA cycle, not a one-time audit.

Core checkpoints:

  • Run Lighthouse performance audit on key pages: homepage, product/landing pages, checkout, dashboard. Target scores above 90 on desktop and above 75 on mobile.
  • Measure Core Web Vitals and verify they meet Google's "good" thresholds at the 75th percentile: Largest Contentful Paint (LCP) ≤ 2.5 s, Interaction to Next Paint (INP) ≤ 200 ms, Cumulative Layout Shift (CLS) ≤ 0.1
  • Images are optimized: served in modern formats (WebP or AVIF), appropriately sized (not serving a 2000px image in a 400px container), and lazy-loaded below the fold
  • Fonts are optimized: use font-display: swap or font-display: optional, subset fonts to required character ranges, preload critical fonts
  • JavaScript bundle size is reasonable — audit with your bundler's analysis tool (webpack-bundle-analyzer, Vite's rollup-plugin-visualizer). Watch for accidental inclusion of large dependencies
  • CSS is not render-blocking unnecessarily — critical CSS is inlined or loaded early, non-critical CSS is deferred
  • HTTP caching headers are set correctly: static assets have long cache lifetimes with content-hashed filenames; HTML responses have short or no-cache policies
  • Gzip or Brotli compression is enabled for text-based assets (HTML, CSS, JS, JSON, SVG)
  • No excessive or waterfall network requests — check the Network tab for sequential chains that could be parallelized
  • Third-party scripts (analytics, chat widgets, ad tags) are loaded asynchronously and do not block rendering
  • Test on a throttled connection: use Chrome DevTools network throttling (e.g. the Slow 3G preset) to simulate real-world conditions
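Google's published "good" thresholds for Core Web Vitals (LCP ≤ 2.5 s, INP ≤ 200 ms, CLS ≤ 0.1, measured at the 75th percentile) can be encoded as a simple gate; the metric dict shape below is an illustrative assumption about how your RUM or Lighthouse data arrives:

```python
# Google's "good" thresholds for Core Web Vitals (75th percentile).
THRESHOLDS = {"lcp_s": 2.5, "inp_ms": 200, "cls": 0.1}

def cwv_failures(metrics: dict) -> list[str]:
    """Return the names of any Core Web Vitals that miss the 'good' threshold.

    A missing metric is treated as a failure, so incomplete data cannot pass.
    """
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, float("inf")) > limit]

assert cwv_failures({"lcp_s": 1.9, "inp_ms": 120, "cls": 0.05}) == []
assert cwv_failures({"lcp_s": 4.1, "inp_ms": 120, "cls": 0.05}) == ["lcp_s"]
```

Wired into CI, a non-empty return value would fail the build before a regression reaches users.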

Tool recommendations:

  • Lab testing: Lighthouse, WebPageTest, Chrome DevTools Performance panel
  • Field data: Google Search Console Core Web Vitals report, Chrome UX Report (CrUX), real user monitoring (RUM) via your analytics platform
  • Continuous monitoring: SpeedCurve, Calibre, or Lighthouse CI integrated into your deployment pipeline

7. SEO and Metadata Checks

QA teams are often the last line of defense against SEO regressions. A missing canonical tag or an accidentally noindexed page can undo months of marketing effort. These checks take minutes and prevent expensive mistakes.

Core checkpoints:

  • Every page has a unique, descriptive <title> tag (50-60 characters)
  • Every page has a unique <meta name="description"> tag (120-160 characters)
  • Canonical tags (<link rel="canonical">) are present and point to the correct URL
  • Open Graph tags are set for social sharing: og:title, og:description, og:image, og:url
  • Twitter Card tags are set: twitter:card, twitter:title, twitter:description, twitter:image
  • robots.txt is accessible and not accidentally blocking important paths
  • sitemap.xml is present, accessible, and includes all important pages
  • No important pages have <meta name="robots" content="noindex"> in production (common mistake when copying from staging)
  • Heading hierarchy is logical: one <h1> per page that contains the primary keyword
  • Internal links use descriptive anchor text (not "click here")
  • Structured data (JSON-LD) is valid — test with Google's Rich Results Test tool
  • Hreflang tags are correct for multi-language sites
  • 301 redirects are in place for changed URLs; no redirect chains or loops
  • Images have descriptive filenames and alt text that includes relevant keywords naturally
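The title and description length rules above are easy to automate. A minimal sketch, using the 50-60 and 120-160 character ranges from this checklist (treat the ranges as guidelines, not hard limits):

```python
# Recommended length ranges from the checklist above (characters).
LIMITS = {"title": (50, 60), "description": (120, 160)}

def metadata_warnings(title: str, description: str) -> list[str]:
    """Warn when a page title or meta description falls outside the range."""
    warnings = []
    for field, text in (("title", title), ("description", description)):
        low, high = LIMITS[field]
        if not low <= len(text) <= high:
            warnings.append(f"{field} is {len(text)} chars (recommended {low}-{high})")
    return warnings

assert metadata_warnings("Home", "Welcome") == [
    "title is 4 chars (recommended 50-60)",
    "description is 7 chars (recommended 120-160)",
]
```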

Common staging-to-production mistakes:

  • Staging noindex tags left in production templates
  • Canonical tags pointing to the staging domain
  • Sitemap referencing staging URLs
  • Analytics tracking code missing or pointing to a test property
  • Base URL hardcoded to staging in JavaScript or API calls

8. Security Checks (QA-Level)

QA teams are not penetration testers, but there are security fundamentals that should be verified during every release cycle. These checks catch common misconfigurations and vulnerabilities that automated scanners may miss.

Core checkpoints:

  • HTTPS is enforced on all pages — HTTP requests redirect to HTTPS with a 301
  • HSTS header is set: Strict-Transport-Security: max-age=31536000; includeSubDomains
  • Cookies are set with Secure, HttpOnly, and appropriate SameSite attributes
  • No sensitive data (API keys, tokens, passwords, internal URLs) is exposed in client-side source code, JavaScript bundles, or HTML comments
  • Content Security Policy (CSP) header is set and does not use unsafe-inline or unsafe-eval unnecessarily
  • X-Content-Type-Options header is set to nosniff
  • X-Frame-Options or CSP frame-ancestors directive prevents clickjacking
  • User input is not reflected in the page without sanitization (test for basic XSS by entering <script>alert(1)</script> in form fields and URL parameters)
  • File upload endpoints validate file type on the server side, not just the client side
  • Error pages and API error responses do not leak stack traces, database queries, or server paths
  • Rate limiting is in place on authentication endpoints and form submissions
  • CORS headers are not set to Access-Control-Allow-Origin: * on endpoints that return sensitive data
  • After logout, session tokens are invalidated — verify by saving a session cookie, logging out, and replaying the cookie
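Several of the header checks above reduce to a table lookup over a response's headers. A minimal sketch covering two of them; extending the table with CSP, frame protection, and cookie attributes is left to your suite:

```python
# Expected security headers: None means "any value counts as present".
REQUIRED = {
    "strict-transport-security": None,
    "x-content-type-options": "nosniff",
}

def missing_security_headers(headers: dict) -> list[str]:
    """Flag absent or mis-valued security headers (names matched case-insensitively)."""
    lowered = {k.lower(): v for k, v in headers.items()}
    problems = []
    for name, expected in REQUIRED.items():
        value = lowered.get(name)
        if value is None:
            problems.append(f"missing {name}")
        elif expected is not None and value.lower() != expected:
            problems.append(f"{name} should be '{expected}', got '{value}'")
    return problems

assert missing_security_headers({
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
}) == []
```

Feed it the headers from your HTTP client of choice and fail the release on a non-empty result.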

Quick security scan tools:

  • Mozilla Observatory: Scans HTTP headers and provides a security grade (observatory.mozilla.org)
  • Security Headers: Quick check of response headers (securityheaders.com)
  • OWASP ZAP: Free automated security scanner for more thorough testing

9. Content and Copy Review

Content bugs are real bugs. A typo in a headline, placeholder "Lorem ipsum" text left in production, or a broken link in a footer are all defects that erode user trust. Content review should be a formal step in the QA process, not something that happens informally over someone's shoulder.

Core checkpoints:

  • No placeholder text (Lorem ipsum, "TBD", "TODO", "[Insert here]") remains in production content
  • No placeholder images or broken image references
  • Spelling and grammar have been reviewed (use a tool like Grammarly, LanguageTool, or a manual review)
  • Legal content is present and correct: privacy policy, terms of service, cookie consent, copyright year
  • Contact information is accurate: email addresses, phone numbers, physical addresses
  • Pricing information is current and matches what the billing system will actually charge
  • Links in body content resolve correctly and do not point to staging, localhost, or dead URLs
  • Download links serve the correct file and the file is not corrupted
  • Date formats are consistent and appropriate for the target locale
  • Number formats (currency, percentages, units) are consistent and locale-appropriate
  • Notification and transactional email content is correct — actually trigger the emails and read them
  • Error messages are helpful and do not blame the user
  • Microcopy (button labels, tooltips, empty states) is consistent in tone and terminology
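The placeholder-text check at the top of this list is a good automation candidate. A sketch using the markers named above; the pattern table is deliberately small and should grow with the placeholders your team actually uses:

```python
import re

# Placeholder markers from the checklist above, keyed by a human-readable label.
PLACEHOLDER_PATTERNS = {
    "lorem ipsum": r"lorem\s+ipsum",
    "TBD": r"\bTBD\b",
    "TODO": r"\bTODO\b",
    "[insert ...]": r"\[insert[^\]]*\]",
}

def find_placeholders(text: str) -> list[str]:
    """Return labels of placeholder markers found in a block of production copy."""
    return [label for label, pattern in PLACEHOLDER_PATTERNS.items()
            if re.search(pattern, text, flags=re.IGNORECASE)]

assert find_placeholders("Lorem ipsum dolor sit amet") == ["lorem ipsum"]
assert find_placeholders("Pricing starts at $9/month.") == []
```

Run it over rendered page text (not source code, where TODO comments are legitimate) as part of a pre-release crawl.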

10. Running the Checklist: Process Tips

A checklist is only useful if it is actually used. Here are practical tips for integrating this checklist into your team's workflow.

Prioritize by risk: Not every item on this list needs to be checked for every release. A copy change on an About page does not need a full security audit. Categorize your releases by scope (full release, feature release, hotfix, content update) and define which checklist sections apply to each.

Assign ownership: If "everyone" is responsible for QA, nobody is. Assign specific checklist sections to specific people. A frontend developer might own visual/responsive checks while a QA specialist owns functional and accessibility testing.

Test in the right environment: QA against a staging environment that mirrors production as closely as possible — same server configuration, same CDN, same third-party integrations, same (anonymized) data volume. Testing against a local development server with 10 records in the database will not catch production issues.

Document your findings: Use a visual feedback tool like Marker.io or BugHerd to capture bugs with screenshots, browser metadata, and console logs automatically. This eliminates the back-and-forth of "what browser were you using?" and "can you send me a screenshot?"

Track your metrics: Over time, track which checklist categories produce the most bugs. If 40% of your defects are form-related, invest in better form testing automation or training. If accessibility issues keep recurring, the problem might be upstream in design or development.

Automate what you can: Many items on this checklist can be partially or fully automated:

  • Broken link checking: Screaming Frog, linkchecker, or custom scripts
  • Accessibility scanning: axe-core in CI/CD, Pa11y Dashboard
  • Visual regression: Percy, Chromatic, BackstopJS
  • Performance monitoring: Lighthouse CI, SpeedCurve
  • Security headers: Mozilla Observatory API, custom header checks in your test suite

Automation handles the repetitive baseline; human testers focus on the judgment calls — usability, flow logic, edge cases, and whether the thing actually makes sense to a real person.
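Broken-link checking, the first automation candidate above, can start from something as small as a stdlib link collector: feed it each page's HTML, then request the collected URLs with your HTTP client of choice and flag non-2xx responses. A sketch (the example.com URLs are illustrative):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collect href targets from <a> tags, resolved against a base URL."""
    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative hrefs against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))

collector = LinkCollector("https://example.com/guides/")
collector.feed('<a href="/pricing">Pricing</a> <a href="qa-checklist">QA</a>')
assert collector.links == ["https://example.com/pricing",
                           "https://example.com/guides/qa-checklist"]
```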

Frequently Asked Questions

How often should you run QA testing?

At minimum, run the full checklist before every major release. For teams practicing continuous deployment, integrate automated checks (accessibility scans, visual regression, performance budgets) into your CI/CD pipeline so they run on every pull request. Manual QA should happen on a regular sprint cadence — typically before each sprint's release — and a full-depth manual pass should happen before any major launch or redesign.

How long does a full QA cycle take?

For a medium-complexity website (20-50 pages, forms, authentication, integrations), a thorough manual QA pass typically takes 2-4 days for one tester. This varies significantly based on the scope of changes, the number of supported browsers and devices, and how much automation is in place. Automated checks can run in minutes as part of CI/CD, but manual testing — especially accessibility and usability review — requires dedicated time.

Should developers do their own QA?

Developers should test their own work before handing it off, but they should not be the only testers. Developers have blind spots — they know how the feature is supposed to work and unconsciously avoid the paths that break it. A dedicated QA tester or a peer who did not build the feature will approach it differently and catch issues the developer would never think to test.

What is the difference between QA testing and UAT?

QA testing is typically performed by the development or QA team to verify that the software works correctly according to technical specifications. User Acceptance Testing (UAT) is performed by the end client or business stakeholders to verify that the software meets their business requirements and is ready for production. QA asks "does it work right?" while UAT asks "did we build the right thing?"

What tools do QA teams need?

At minimum: a bug tracking system (Jira, Linear, GitHub Issues), a visual feedback tool for capturing annotated screenshots and browser metadata (Marker.io, BugHerd), a cross-browser testing service (BrowserStack, LambdaTest), and accessibility testing tools (axe DevTools, WAVE). For more mature teams, add visual regression testing (Percy, Chromatic), performance monitoring (Lighthouse CI, SpeedCurve), and test management software (TestRail, Zephyr).

Resources and Further Reading