
Service desks running ServiceNow have adopted the language of AI-era performance: deflection rates, Now Assist accuracy, first contact resolution, knowledge utilisation. What most have not confronted is that six of the seven KPIs covered here are only measurable if every remote support session generates structured, session-level data, and most service desks cannot produce that data because their remote support tool was never built to generate it.
This is not a reporting problem, nor is it a dashboard problem. It is a data architecture problem, and it begins the moment a technician opens a TeamViewer or Bomgar session from outside the ServiceNow incident record. When that session closes, what actually writes back to the ticket? A timestamp, whatever the technician chose to type, and sometimes nothing beyond that. Six of your seven KPIs are being calculated from exactly that foundation.
The seven KPIs that service desks are now measured against, namely First Contact Resolution (FCR), Average Handle Time (AHT), Mean Time to Resolution (MTTR), Escalation Rate, KB Article Creation Rate, Virtual Agent Deflection Rate, and Now Assist Recommendation Accuracy, place very different demands on your infrastructure, and understanding those demands is where most service desks stop short.
Escalation Rate sits in a category of its own, because the headline number is partially trackable from ticket routing alone. When a ticket moves between assignment groups, you can see that an escalation occurred, which issue it involved, and which team it landed with. What ticket routing cannot tell you is why the escalation was necessary in the first place: which issue types consistently push beyond what Tier 1 can handle, which agents encounter the same ceiling across different incidents, and whether the escalation reflected genuine complexity or simply a gap in documented knowledge from a previous session that never made it into the record.
The remaining six share a dependency that most service desks have never had reason to name directly. Each one requires session-level structured data written back to the ServiceNow incident record, and the relationship between them is cumulative rather than parallel. Without sessions producing structured data, FCR, AHT, and MTTR remain timestamp gaps dressed up as performance metrics. Without those metrics grounded in real session output, KB Article Creation Rate and Virtual Agent Deflection Rate have no knowledge foundation from which to grow. And without both of those layers in place, Now Assist Recommendation Accuracy becomes a figure derived from whatever three-word resolution notes the agent managed to enter before moving to the next ticket.
Most service desks are reporting confidently from the top of that structure while running on infrastructure that was never built to support its base. The numbers look credible because they are calculated, and calculated figures have a way of being accepted as real ones.
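To make the dependency concrete, the structured session data those six KPIs rely on can be pictured as a minimal record. This is a sketch only: the field names below are illustrative assumptions, not an actual ScreenMeet or ServiceNow schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class SessionRecord:
    """Illustrative structured session payload.

    Field names are hypothetical, not an actual ScreenMeet or
    ServiceNow schema."""
    incident_id: str
    started_at: str            # ISO 8601 timestamps
    ended_at: str
    issue_type: str
    device_context: dict       # OS, client version, etc.
    actions: list              # diagnostic sequence, in order
    resolution_outcome: str    # "resolved" | "escalated" | "unresolved"
    resolution_verified: bool  # did the fix hold at session close?

record = SessionRecord(
    incident_id="INC0012345",
    started_at="2024-05-01T09:12:00Z",
    ended_at="2024-05-01T09:30:00Z",
    issue_type="vpn_connectivity",
    device_context={"os": "Windows 11", "client_version": "5.2.1"},
    actions=["reviewed event log", "reset VPN adapter", "verified reconnect"],
    resolution_outcome="resolved",
    resolution_verified=True,
)

# This is the payload that would write back to the incident record;
# without it, only the open and close timestamps survive the session.
print(asdict(record)["resolution_outcome"])  # resolved
```

Every metric discussed below is, in effect, a query against records of this shape; when the session produces no such record, the query runs against timestamps instead.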
1. First Contact Resolution (FCR)
What it measures: Whether an employee's issue was resolved the first time they contacted the service desk, without requiring a follow-up interaction or a repeat incident.
Why legacy tools fall short: A service desk running TeamViewer or Bomgar can show FCR trending upward across consecutive quarters while the underlying picture tells a different story through reopen rates, escalation volume, and the same employee logging the same incident a fortnight later. FCR calculated from a closed ticket count records how many tickets were closed by the first agent who handled them, and says nothing about whether the issue was genuinely resolved or whether the resolution held once the ticket was closed.
What good tracking requires: Accurate FCR requires resolution outcome data tied to the session: what the technician observed, what was attempted, and what the device state looked like when the fix held. When a session runs in a disconnected tool and the agent updates the ticket with a status change, the incident record receives a timestamp and nothing of diagnostic value. The FCR figure moves while the quality of resolution beneath it stays invisible.
How ScreenMeet changes this: When ScreenMeet sessions run natively inside ServiceNow, resolution outcome data, including device state at the point of fix and the full sequence of actions taken, writes back to the incident record automatically, so that FCR reflects whether the issue was genuinely resolved rather than simply whether the ticket was closed on first assignment.
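The gap between ticket-count FCR and outcome-based FCR can be illustrated with a small sketch. The data shape and the fourteen-day repeat window are assumptions for illustration, not ServiceNow fields.

```python
from datetime import datetime, timedelta

# Toy ticket data; fields are invented for illustration.
tickets = [
    {"id": "INC1", "employee": "a", "issue": "vpn", "closed_first": True,
     "reopened": False, "closed_at": datetime(2024, 5, 1)},
    {"id": "INC2", "employee": "a", "issue": "vpn", "closed_first": True,
     "reopened": False, "closed_at": datetime(2024, 5, 10)},  # same issue, 9 days on
    {"id": "INC3", "employee": "b", "issue": "printer", "closed_first": True,
     "reopened": True, "closed_at": datetime(2024, 5, 2)},
]

def naive_fcr(tickets):
    """FCR as a closed-ticket count: closed by the first agent who handled it."""
    return sum(t["closed_first"] for t in tickets) / len(tickets)

def outcome_fcr(tickets, window=timedelta(days=14)):
    """FCR excluding reopened tickets and repeat incidents from the same employee."""
    resolved_first = 0
    for t in tickets:
        if not t["closed_first"] or t["reopened"]:
            continue
        repeat = any(
            o is not t
            and o["employee"] == t["employee"]
            and o["issue"] == t["issue"]
            and timedelta(0) < o["closed_at"] - t["closed_at"] <= window
            for o in tickets
        )
        if not repeat:
            resolved_first += 1
    return resolved_first / len(tickets)

print(naive_fcr(tickets))              # 1.0 — looks perfect
print(round(outcome_fcr(tickets), 2))  # 0.33 — the reopen and the repeat surface
```

Both figures are computed from the same three tickets; the difference is entirely in whether the calculation can see resolution outcomes.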
2. Average Handle Time (AHT)
What it measures: How long agents spend resolving incidents, used to assess productivity, identify training needs, and benchmark performance across teams and issue categories.
Why legacy tools fall short: Most service desk managers know their AHT figure. Far fewer can explain where that time goes. An eighteen-minute average describes two entirely different realities equally well: a well-handled session on a genuinely complex issue, and a session where the technician spent twelve minutes searching for a resolution that already existed in the knowledge base.
What good tracking requires: Meaningful AHT is disaggregated by issue type, device context, resolution path, and agent, because that is the version that reveals whether Tier 1 agents are handling the work they are assigned to or absorbing time that should have been deflected before reaching them. Without structured session data writing back into ServiceNow, AHT remains a throughput figure with no explanatory power.
How ScreenMeet changes this: When ScreenMeet writes session duration against issue type, device context, and resolution path into the ServiceNow incident record, AHT becomes a metric that shows where agent time is going and which issue categories are driving it, rather than a raw average that managers can state but cannot act on.
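The difference between a raw AHT average and a disaggregated one can be sketched in a few lines; the issue categories and durations are invented for illustration.

```python
from collections import defaultdict

# Toy session durations in minutes; categories are invented for illustration.
sessions = [
    {"issue_type": "password_reset",   "minutes": 6},
    {"issue_type": "password_reset",   "minutes": 5},
    {"issue_type": "profile_rebuild",  "minutes": 42},
    {"issue_type": "vpn_connectivity", "minutes": 19},
]

# The figure managers can quote: one number, no explanatory power.
overall = sum(s["minutes"] for s in sessions) / len(sessions)

# The figure they can act on: the same data keyed by issue type.
by_issue = defaultdict(list)
for s in sessions:
    by_issue[s["issue_type"]].append(s["minutes"])
breakdown = {issue: sum(m) / len(m) for issue, m in by_issue.items()}

print(overall)    # 18.0 — the raw average
print(breakdown)  # reveals that profile rebuilds, not passwords, drive the number
```

The eighteen-minute average is real in both views; only the second view says where the time goes, and it exists only if issue type and duration are written against each session.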
3. Mean Time to Resolution (MTTR)
What it measures: The average time taken to fully resolve an incident from the point it is logged to the point the underlying issue is confirmed as fixed.
Why legacy tools fall short: MTTR commands more institutional trust than almost any other service desk metric, which makes it the most consequential one to miscalculate. An elapsed time between ticket creation and closure looks identical in a report whether the resolution was thorough and durable or whether the agent closed the ticket and the issue resurfaced within days.
What good tracking requires: Meaningful MTTR requires the session start time, the resolution sequence, and confirmation that the resolution held. When a session runs outside ServiceNow, the incident record captures the creation and closure timestamps, while the diagnostic work between them, the logs reviewed, the commands run, the configuration changes made, exists only in the technician's memory and nowhere else.
How ScreenMeet changes this: Because ScreenMeet captures the complete resolution sequence inside the ServiceNow incident record, MTTR reflects the actual time to a verified resolution rather than the gap between two administrative timestamps, and when the metric moves in the wrong direction, the data to understand why is already in the record.
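The two versions of MTTR described above can be shown side by side. The timestamps are hypothetical, and `resolution_verified_at` is an assumed field, available only when the session writes structured data back to the record.

```python
from datetime import datetime

# Hypothetical timestamps; "resolution_verified_at" is an assumed field.
incident = {
    "created_at": datetime(2024, 5, 1, 9, 0),
    "resolution_verified_at": datetime(2024, 5, 1, 11, 30),  # fix confirmed in-session
    "closed_at": datetime(2024, 5, 1, 17, 0),                # administrative closure
}

admin_mttr = incident["closed_at"] - incident["created_at"]
verified_mttr = incident["resolution_verified_at"] - incident["created_at"]

print(admin_mttr)     # 8:00:00 — the gap between two administrative timestamps
print(verified_mttr)  # 2:30:00 — time to a confirmed fix
```

A report built on the first number will happily average away the five and a half hours between fix and closure, which is exactly the miscalculation the metric's institutional trust makes consequential.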
4. Escalation Rate
What it measures: The proportion of incidents that cannot be resolved at the first tier and must be passed to more senior teams, used to assess Tier 1 capability, training gaps, and the effectiveness of the knowledge base in enabling first-tier resolution.
Why legacy tools fall short: Ticket routing produces a reliable escalation count, but no insight into the pattern behind it: which issue categories consistently exceed Tier 1 capability, which agents hit the same ceiling across similar incidents, and whether those escalations reflect genuine complexity or undocumented knowledge from a prior session that never made it into the record.
What good tracking requires: Escalation data is only diagnostic when the Tier 1 session context travels with the ticket: what was attempted, in what sequence, and what the device looked like at the point of escalation. Today, when an escalated ticket arrives at Tier 2, the receiving engineer works from the same thin record that triggered the routing decision, because none of that context travels with it. Now Assist reads the same record and generates recommendations without visibility into work already done, so the Tier 2 engineer restarts the diagnostic from the beginning.
How ScreenMeet changes this: When ScreenMeet sessions run inside ServiceNow, the complete Tier 1 session activity travels with the escalated ticket, so the Tier 2 engineer sees exactly what was attempted and at what point the escalation was triggered, and Now Assist recommendations account for the diagnostic work already completed rather than treating the incident as newly logged.
5. KB Article Creation Rate
What it measures: How actively the knowledge base is being built from resolved incidents and how frequently those articles are used by agents and the virtual agent to resolve issues without escalation.
Why legacy tools fall short: Low KB creation rates are almost always attributed to agent behaviour, but the actual cause is structural. The architecture places knowledge creation in direct competition with ticket throughput at the precise moment agents are least positioned to do both, and articles written from memory after a high-pressure session with more incidents queued are rarely written at all.
What good tracking requires: Sustainable KB creation requires articles generated from current session data rather than reconstructed from memory after the fact. A knowledge base not fed by current session data accumulates articles that match symptom keywords but reflect configurations and software versions from years ago; Now Assist surfaces those alongside current articles with equivalent confidence, agents follow guidance that fails, trust erodes, and contribution falls further in a cycle that does not break until the source of knowledge entering the system changes.
How ScreenMeet changes this: ScreenMeet AI Summary converts the structured output of every completed session into a knowledge base article candidate automatically and without any action from the agent, so that creation rate and utilisation improve as a consequence of support operations rather than of a documentation programme competing with throughput for agent attention.
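As a rough sketch of the idea only, and not the actual ScreenMeet AI Summary pipeline, a structured session record can be turned mechanically into a draft article, which is what removes the competition between documentation and throughput.

```python
def kb_candidate(session):
    """Turn a structured session record into a KB article draft.

    A rough sketch of the idea only; not the actual ScreenMeet
    AI Summary pipeline, and the fields are hypothetical."""
    steps = "\n".join(f"{i}. {a}" for i, a in enumerate(session["actions"], 1))
    return (
        f"Title: {session['issue_type'].replace('_', ' ').title()}\n"
        f"Environment: {session['device']}\n"
        f"Resolution steps:\n{steps}"
    )

draft = kb_candidate({
    "issue_type": "vpn_connectivity",
    "device": "Windows 11, VPN client 5.2.1",
    "actions": ["reviewed event log", "reset VPN adapter", "verified reconnect"],
})
print(draft)
```

The point of the sketch is that the agent contributes nothing beyond the session itself; the draft exists because the session produced structured output, not because anyone stopped to write.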
6. Virtual Agent Deflection Rate
What it measures: The proportion of incidents resolved without a live technician, through the virtual agent or self-service content, reflecting the quality and currency of the knowledge base that underlies both.
Why legacy tools fall short: Deflection rate reports on the past. The capability it measures reflects the quality of knowledge that entered the knowledge base months ago, not what is being resolved today. Improving it requires knowing which issues are reaching live agents and how they were resolved, and that understanding is only accessible through session-level data written back into ServiceNow, not from resolution codes, ticket categories, or the notes an agent enters in the ninety seconds before moving to the next incident.
What good tracking requires: The resolution path taken inside the session is the signal that determines whether the virtual agent can handle the next occurrence of the same issue autonomously. When sessions produce no structured output, that signal never reaches the system, and the deflection rate has no mechanism for improvement because the feedback loop that would drive it was never established.
How ScreenMeet changes this: When ScreenMeet sessions write structured resolution data back into ServiceNow, the virtual agent draws from a knowledge base that grows with every session, and deflection rate becomes a metric with an active feedback loop that improves as session knowledge accumulates rather than stagnating between KB refresh projects.
7. Now Assist Recommendation Accuracy
What it measures: How reliably Now Assist surfaces correct and relevant guidance to agents and employees, directly dependent on the quality and completeness of the data inside ServiceNow incident records from which it retrieves.
Why legacy tools fall short: Now Assist reads ServiceNow incident records and builds recommendations from what it finds there. When a technician closes a BeyondTrust or TeamViewer session and updates the ticket with "Issue resolved. Rebooted," Now Assist generates a recommendation grounded in a reboot, because that is the only resolution context the record contains. If the actual fix required identifying a corrupted Group Policy Object, applying a registry edit, rebuilding a user profile, and engaging the directory services team, Now Assist has no way to know, because the session produced nothing that reached the ticket.
What good tracking requires: Now Assist accuracy deteriorates not through any model failure but through the gradual accumulation of incomplete inputs it has no way of identifying as incomplete. Improving that ceiling requires incident records containing the full diagnostic sequence, which is only possible when the remote support session generates structured data as a default output of running natively inside ServiceNow.
How ScreenMeet changes this: ScreenMeet AI Summary writes the complete resolution sequence, device state, and diagnostic steps into the incident record as structured data, so that Now Assist recommendations reflect what genuinely occurred in the session rather than what the agent had time to enter before moving to the next incident.
A service desk that cannot produce structured session data is not underperforming against its KPIs in the conventional sense. It is measuring approximations and presenting them as performance. Six of these seven metrics depend on a data layer that most remote support tools were never built to generate, and no documentation policy, agent coaching initiative, or knowledge base governance programme addresses that dependency, because none of them touch the point at which the gap originates.
The session occurred. The resolution was reached. The knowledge that would have made the next interaction faster, the diagnostic sequence that would have informed Now Assist, the resolution path that would have strengthened the knowledge base, all of it existed in the session and none of it reached ServiceNow. It does not feed the metrics, it does not improve the virtual agent, and it does not make the next technician handling the same issue any better positioned than the one who handled it this time.
ScreenMeet is not a layer added on top of your existing reporting stack. It is the infrastructure that makes the underlying data exist in the first place, and without that data, the seven KPIs on your dashboard will continue to be calculated figures rather than honest ones.