
Interaction to Next Paint (INP)

Interaction to Next Paint (INP) is a Core Web Vitals metric that measures the responsiveness of a web page by tracking the latency between user interactions and the browser's next visual update. It evaluates all user interactions throughout a page's lifetime and reports a value close to the worst latency observed (approximately the 98th percentile), so it reflects the sluggish interactions users actually notice rather than an average. INP captures the complete interaction lifecycle, from input detection through JavaScript processing to visual rendering.

Interaction to Next Paint measures the full processing pipeline for user interactions: it starts when a user clicks, taps, or presses a key, continues through JavaScript event handler execution, and ends when the browser paints the next frame showing the visual response. Unlike its predecessor First Input Delay, which measured only the input delay of the first interaction, INP evaluates every click, tap, and keypress during a page session. The metric reports one of the slowest interactions observed, discarding one outlier for every 50 interactions, which approximates the 98th percentile and ensures it captures the sluggish interactions that frustrate users rather than just average performance. Google considers INP values of 200 milliseconds or less good, values between 200 and 500 milliseconds as needing improvement, and values over 500 milliseconds as poor.
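The percentile rule can be illustrated with a small sketch. The helper below is hypothetical (`estimateInp` is not a browser API), but it mirrors the documented behavior of discarding the single highest-latency interaction for every 50 interactions observed, which approximates a 98th-percentile value on interaction-heavy pages:

```javascript
// Hypothetical sketch: estimate a page's INP value from a list of
// interaction latencies (milliseconds), mimicking the documented rule
// of ignoring the single worst interaction for every 50 observed.
function estimateInp(latencies) {
  if (latencies.length === 0) return null; // no interactions, no INP
  const sorted = [...latencies].sort((a, b) => b - a); // worst first
  const outliersToSkip = Math.floor(latencies.length / 50);
  return sorted[Math.min(outliersToSkip, sorted.length - 1)];
}
```

For a page with fewer than 50 interactions this simply returns the worst latency; with, say, 120 interactions it skips the two worst as outliers.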

For QA teams, INP directly impacts user satisfaction and business outcomes, particularly in e-commerce and form-heavy applications where interaction responsiveness determines conversion rates. Poor INP scores can signal underlying performance issues that affect critical user journeys like checkout processes, form submissions, or interactive product configurators. In regulated industries, slow interaction responses can compound usability issues that impact compliance with accessibility standards. QA teams must test INP across different devices and network conditions since mobile devices with limited processing power often exhibit worse INP scores than desktop environments.

Common mistakes include conflating INP with page load speed metrics, testing only on high-end development machines, and ignoring the cumulative effect of multiple interactions during a session. Teams often overlook that INP measures visual updates, so interactions that trigger invisible background processes may show artificially good scores while still providing poor user experience. Another frequent error is testing INP in isolation rather than during realistic user workflows where JavaScript execution contexts and DOM states vary significantly. Many teams also fail to account for third-party scripts and analytics tools that can block the main thread during interaction processing.
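A common remediation for main-thread blocking during interaction processing is to break long-running work into chunks and yield control between them, so queued input events get handled promptly. This is a sketch under assumed names (`yieldToMain` and `processInChunks` are hypothetical helpers, not a library API):

```javascript
// Hypothetical sketch: yield to the event loop between chunks of work
// so the browser can handle pending user input, keeping interaction
// latency (and therefore INP) low.
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processInChunks(items, handleItem, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(handleItem(item));
    }
    // Give queued interactions a chance to run before the next chunk.
    if (i + chunkSize < items.length) await yieldToMain();
  }
  return results;
}
```

The same pattern applies to third-party or analytics work: deferring it out of the interaction's critical path keeps the paint that responds to the user from being delayed.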

INP connects directly to overall site quality and delivery workflows because it reflects real-world user experience under actual usage conditions. Unlike synthetic load time metrics, INP reveals how applications perform when users actively engage with content, making it essential for validating interactive features before release. Teams should integrate INP monitoring into their continuous integration pipelines and establish performance budgets that prevent regressions. Poor INP scores often indicate broader architectural issues like inefficient state management, excessive DOM complexity, or inadequate code splitting that impact multiple aspects of application performance and maintainability.
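A performance budget check can be as simple as a pipeline gate that compares a measured INP value (collected from field data or a lab run) against a threshold. The function and structure below are illustrative, not a real tool's API:

```javascript
// Hypothetical CI gate: fail the build when measured INP exceeds the
// budget. 200 ms matches Google's "good" threshold for INP.
const INP_BUDGET_MS = 200;

function checkInpBudget(measuredInpMs, budgetMs = INP_BUDGET_MS) {
  const pass = measuredInpMs <= budgetMs;
  return {
    pass,
    message: pass
      ? `INP ${measuredInpMs}ms is within the ${budgetMs}ms budget`
      : `INP ${measuredInpMs}ms exceeds the ${budgetMs}ms budget`,
  };
}
```

Running this on every build turns INP from a dashboard number into a regression gate, which is what keeps the architectural issues described above from accumulating unnoticed.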

Why It Matters for QA Teams

Users expect instant feedback when they click or type. A sluggish response to interactions makes a website feel broken even if the content loads quickly. INP measures exactly this perceived sluggishness.

Example

A QA team at a major retailer discovers their product comparison tool shows good Core Web Vitals scores in automated testing but receives customer complaints about sluggish behavior. During manual testing, they find that clicking to add a fourth product to the comparison triggers an 800ms delay before the interface updates. Investigation reveals that each comparison addition forces a complete re-render of all product cards, with JavaScript recalculating layouts and prices for every item. While the initial page load performs well and the first product addition feels responsive, the INP score deteriorates as users interact more deeply with the feature. The team realizes their synthetic monitoring only tested single interactions, missing the cumulative performance degradation that real users experience during their shopping journey.
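The fix in a case like this is usually to stop recomputing unchanged items. As a hedged sketch (`cardCache`, `computeCard`, and `getCards` are hypothetical names, not the retailer's actual code), memoizing each product's derived card data means adding a fourth product computes only the new card:

```javascript
// Hypothetical sketch: cache derived card data per product ID so adding
// one product to the comparison does not re-derive every existing card.
const cardCache = new Map();
let computeCount = 0; // instrumentation to show how much work is avoided

function computeCard(product) {
  computeCount += 1; // stands in for expensive layout/price calculation
  return { id: product.id, label: `${product.name}: $${product.price}` };
}

function getCards(products) {
  return products.map((p) => {
    if (!cardCache.has(p.id)) cardCache.set(p.id, computeCard(p));
    return cardCache.get(p.id);
  });
}
```

With this shape, each interaction's processing cost stays constant as the comparison grows, instead of scaling with the number of products already on screen.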