
How to Write Good Bug Reports: A 2026 Guide for QA Professionals

Write bug reports that developers actually want to read, with clear reproduction steps and actionable detail

Last updated: 2026-05-15 05:02 UTC
Contents
  • Why Bug Report Quality Directly Affects Fix Speed
  • Anatomy of an Effective Bug Report
  • Writing Reproduction Steps That Actually Work
  • Capturing Effective Evidence
  • Severity Classification and Prioritization
  • Common Bug Report Mistakes and How to Fix Them
  • Frequently Asked Questions

Why Bug Report Quality Directly Affects Fix Speed

Writing a clear bug report is the single most impactful skill a QA professional can develop. The difference between a good bug report and a poor one is not cosmetic - it directly determines how quickly a defect gets fixed, or whether it gets fixed at all.

Poor bug reports create a costly cycle:

  • Developer reads the report, cannot reproduce the issue, assigns it back to QA for more information
  • QA adds details, developer tries again, asks follow-up questions
  • Multiple rounds of back-and-forth before any code is written

This cycle wastes hours per defect. Across a team filing 20-30 bugs per sprint, poor reports can waste entire days of developer time. A bug report that takes you 10 extra minutes to write properly can save a developer 2 hours of investigation.

The standard for a good bug report is simple: a developer who has never seen the issue should be able to reproduce it on the first attempt using only the information in your report. If they need to ask you a single follow-up question, the report can be improved.

This guide covers the structure, writing practices, and tools that produce consistently high-quality bug reports.

Anatomy of an Effective Bug Report

Every bug report needs these components. Treat this as your minimum viable bug report - missing any of these fields means the report is incomplete.

Title: A concise summary that identifies the specific problem. Bad: "Checkout broken." Good: "Promo code SAVE20 applies 20% discount twice when cart contains both physical and digital items."

Environment: Where you observed the bug. Include browser (with version), operating system, device (if mobile), and environment URL (staging, production, PR preview).

Steps to reproduce: Numbered, specific actions that reliably trigger the bug. Start from a known state (e.g., "Log in as test user qa-user-01"). Include exact input values, not descriptions of input values.

Expected result: What should happen according to the spec, design, or common sense.

Actual result: What actually happens. Be precise - "an error appears" is not sufficient. "A red banner displays the message 'Something went wrong. Please try again.' and the form data is cleared" is.

Severity/Priority: Your assessment of impact and urgency. Use your team's agreed-upon scale consistently.

Evidence: Screenshots, screen recordings, console logs, network request/response data. Visual evidence eliminates ambiguity. Always include it.

Additional context: Does it reproduce every time? Only on certain devices? Did it work previously? When did it start failing?
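Put together, these fields make a simple issue template. The sketch below uses the promo-code example from earlier; the environment details, build hash, and endpoint names are illustrative, so adapt the field names to your own tracker:

```text
Title: Promo code SAVE20 applies 20% discount twice when cart contains
       both physical and digital items

Environment:
  - Browser: Chrome 124 (version illustrative)
  - OS: macOS 14.4
  - URL: https://staging.example.com/checkout
  - Build: commit abc1234

Steps to reproduce:
  1. Log in as test user qa-user-01
  2. Add one physical and one digital item to the cart
  3. Enter promo code "SAVE20" and click Apply

Expected result: Order total is discounted by 20% once.

Actual result: Order total is discounted by 40%; the order summary
shows two "SAVE20" line items.

Severity: Major / S2 - totals are wrong, but removing the code is a workaround

Evidence: [annotated screenshot of order summary; Network tab response]

Additional context: Reproduces on every attempt. Does not occur when
the cart contains only physical items.
```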

Writing Reproduction Steps That Actually Work

Reproduction steps are the most critical part of a bug report, and the part most often written poorly. Follow these rules:

Rule 1: Start from zero. Do not assume any prior state. Begin with: "Navigate to [URL]" or "Log in as [specific user]." If the bug requires specific test data, specify exactly what data to use or how to create it.

Rule 2: One action per step. Each step should be a single user action. Not "Fill out the form and submit it" but:

  • Step 3: Enter "john@example.com" in the Email field
  • Step 4: Enter "Test123!" in the Password field
  • Step 5: Click the "Sign In" button

Rule 3: Use exact values. Do not write "Enter a valid email." Write "Enter test-user@example.com." Exact values eliminate variables. If the bug only occurs with specific data, this is how you communicate that.

Rule 4: Include wait conditions. If timing matters, say so. "Wait for the page to fully load (spinner disappears)" or "Wait 30 seconds for the session to time out."

Rule 5: Verify before filing. Follow your own steps from scratch in a clean browser session (incognito/private mode). If you cannot reproduce the bug following your own steps, they are incomplete. Revise until you can reproduce it reliably.

For intermittent bugs: Note the reproduction rate ("reproduces approximately 3 out of 10 attempts") and any patterns you have observed ("more frequent on slow connections" or "only seen during peak traffic hours").
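For intermittent bugs it is worth quantifying the rate rather than guessing. A minimal sketch of the bookkeeping, assuming you record each manual or automated attempt as a pass/fail:

```python
def reproduction_rate(outcomes):
    """Summarize a series of reproduction attempts for a bug report.

    outcomes: list of booleans, True when the bug reproduced.
    Returns a string suitable for pasting into the report.
    """
    if not outcomes:
        return "no attempts recorded"
    hits = sum(outcomes)
    total = len(outcomes)
    return f"reproduces {hits} out of {total} attempts (~{100 * hits // total}%)"

# Example: 3 reproductions in 10 tries, as in the guideline above
print(reproduction_rate([True, False, False, True, False,
                         False, True, False, False, False]))
# -> reproduces 3 out of 10 attempts (~30%)
```

Ten attempts is usually enough to distinguish "rare" from "roughly half the time", which is the distinction a developer chasing a race condition cares about.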

Capturing Effective Evidence

A screenshot is worth a thousand words in a bug report, but only if it is the right screenshot. Here is how to capture evidence that accelerates debugging:

Screenshots:

  • Annotate screenshots to highlight the specific issue. Draw an arrow or circle around the problem area. A full-page screenshot with no annotation forces the developer to play "find the bug."
  • Capture the full browser window including the URL bar when possible - this confirms the environment and page.
  • For layout issues, include a screenshot of the expected state alongside the broken state if available (from the design or a working environment).

Screen recordings:

  • Use screen recordings for interaction bugs, animation issues, or multi-step flows. A 15-second video can replace a paragraph of description.
  • Tools: built-in screen recording (macOS: Cmd+Shift+5, Windows: Win+G), Loom, or browser extensions like Screencastify.
  • Keep recordings short and focused. Start recording just before the relevant action, not from the beginning of a 5-minute test flow.

Console and network data:

  • Open DevTools Console panel and check for JavaScript errors related to the bug. Include console output in your report.
  • For data issues, check the Network panel. Copy the relevant API request and response (right-click > Copy > Copy as cURL). This is often the single most useful piece of evidence for backend bugs.
  • For visual bugs, include the computed CSS values for the affected element from the Elements panel.

Use a feedback tool: Tools like Marker.io or BugHerd automatically capture URL, browser info, console logs, and screenshots together, reducing the manual effort of evidence collection.
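For backend bugs, pasting the raw request/response pair into the report beats describing it in prose. A small helper sketch for formatting that evidence consistently (the endpoint and field names here are illustrative, not from any particular tracker):

```python
import json

def format_network_evidence(method, url, status, response_body):
    """Render an API request/response pair as text for a bug report."""
    pretty = json.dumps(response_body, indent=2)
    return (
        f"Request:  {method} {url}\n"
        f"Status:   {status}\n"
        f"Response:\n{pretty}"
    )

print(format_network_evidence(
    "POST", "/api/promo/apply", 500,
    {"error": "internal_error", "requestId": "r-123"},
))
```

Whether you paste a cURL command or a formatted block like this, the point is the same: the developer sees the exact exchange, not a paraphrase of it.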

Severity Classification and Prioritization

Consistent severity classification ensures the right bugs get fixed first. Agree on definitions with your team and apply them uniformly.

Common severity levels:

  • Critical / S1: Application is unusable, data loss occurs, security vulnerability exposed, payment processing broken. No workaround exists. Example: checkout button throws a 500 error for all users.
  • Major / S2: Core functionality is broken but a workaround exists, or a significant feature is unusable. Example: search returns no results for queries containing special characters, but users can browse categories instead.
  • Minor / S3: Non-core functionality is broken, cosmetic issues that affect usability, or edge cases in important features. Example: date picker does not work on Safari but the date can be typed manually.
  • Trivial / S4: Cosmetic issues with no functional impact. Example: inconsistent font size in the footer, minor alignment issue on one page.

Severity vs. Priority: Severity measures technical impact. Priority measures business urgency. A trivial typo on the homepage might have low severity but high priority because it is visible to every visitor. A critical bug in an admin feature used by two people might have high severity but lower priority. Discuss both dimensions - severity is QA's call; priority is often a product decision.

Avoid severity inflation. If every bug is marked Critical, nothing is. Track your severity distribution: a healthy ratio is roughly 5% Critical, 20% Major, 50% Minor, 25% Trivial. If your distribution skews heavily toward Critical, your calibration needs adjustment.
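The distribution check is easy to automate against an export from your tracker. A sketch, assuming you can already count bugs per severity level; the 5/20/50/25 targets are the rough ratios from this section, not a universal standard:

```python
# Rough healthy distribution from the guideline above (percent of all bugs)
HEALTHY = {"Critical": 5, "Major": 20, "Minor": 50, "Trivial": 25}

def severity_skew(counts, tolerance=15):
    """Flag severity levels whose share deviates from the rough target.

    counts: mapping of severity name -> number of filed bugs.
    tolerance: allowed deviation in percentage points.
    Returns a list of warning strings (empty means calibration looks fine).
    """
    total = sum(counts.values())
    warnings = []
    for level, target in HEALTHY.items():
        share = 100 * counts.get(level, 0) / total if total else 0
        if abs(share - target) > tolerance:
            warnings.append(
                f"{level}: {share:.0f}% of bugs vs ~{target}% expected"
            )
    return warnings

# Example: a tracker where half of all filed bugs are marked Critical
print(severity_skew({"Critical": 50, "Major": 20, "Minor": 25, "Trivial": 5}))
```

Running this once per quarter is a cheap way to spot severity inflation before it erodes trust in the Critical label.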

Common Bug Report Mistakes and How to Fix Them

After reviewing thousands of bug reports across multiple teams, these are the patterns that consistently slow down resolution:

1. Vague titles. "Login does not work" could mean a hundred different things. Be specific about the failure mode: "Login fails with 'Invalid credentials' error when password contains a backslash character."

2. Missing negative space. Report what you tested that did work. "Reproduces on Chrome 124 and Firefox 126. Does NOT reproduce on Safari 18." This narrows the investigation scope immediately.

3. Mixing multiple bugs in one report. One report per bug, always. If you found three issues during the same testing session, file three reports. Combined reports get partially fixed and then sit in limbo.

4. Assuming the cause. Report what you observed, not what you think the code is doing wrong. "The API returns a 500 error" is an observation. "The database query is probably timing out" is speculation. Developers will investigate the cause - your job is to describe the symptom precisely.

5. No reproduction rate. "Sometimes the modal does not close" is not actionable. "The modal fails to close approximately 1 in 5 times when clicking the X button rapidly" gives the developer enough information to investigate a timing/race condition.

6. Stale environment information. Always include the exact URL and build version or commit hash where you observed the bug. "Staging" is not specific enough if staging is redeployed daily.

Frequently Asked Questions

What information should every bug report include at minimum?

At minimum: a specific title, environment details (browser, OS, URL), numbered reproduction steps, expected result, actual result, and at least one piece of evidence (screenshot, screen recording, or console log). If any of these are missing, the report is incomplete and will likely require follow-up questions.

How do we report bugs that we cannot reproduce consistently?

File the report with your best reproduction steps, note the reproduction rate (e.g., '2 out of 10 attempts'), document any patterns you've noticed, include all available evidence from when it did occur, and mark it as intermittent. Intermittent bugs are harder to fix but still valuable to track - they often indicate race conditions or environment-specific issues.

Should QA suggest fixes in bug reports?

Include observations that might help debugging (such as a specific error in the console or an API returning unexpected data), but avoid prescribing code fixes unless you are certain of the cause. Wrong fix suggestions can send developers down the wrong path. Focus on describing the problem precisely and providing evidence.

How detailed should reproduction steps be for obvious bugs?

Equally detailed. What seems obvious to you may not be obvious to the developer assigned to fix it, especially if they are unfamiliar with that feature area. Detailed steps also serve as documentation - when the bug is retested after a fix, the tester needs clear steps regardless of whether they filed the original report.

What tools help capture better bug report evidence?

Marker.io and BugHerd automatically capture browser info, console logs, and screenshots. Loom or native screen recording captures interaction bugs. Chrome DevTools Network panel provides API request/response data. For visual bugs, browser DevTools Elements panel shows computed CSS values. Using these consistently improves report quality with minimal extra effort.

Resources and Further Reading

  • Marker.io - Website Bug Reporting Tool Visual feedback tool that captures annotated screenshots, browser metadata, and console logs directly from the website.
  • Loom Screen Recording Quick screen recording tool ideal for capturing bug reproduction videos to attach to defect reports.
  • Jira Bug Report Best Practices Atlassian's guide to configuring and writing effective bug reports in Jira, the most widely used issue tracker.
  • Chrome DevTools Documentation Official documentation for Chrome DevTools, essential for capturing console errors, network data, and performance information for bug reports.