The most striking finding in Deloitte Digital's 2025 Customer Service Excellence research is also the most uncomfortable one. AI adoption in contact centers grew 15% between 2023 and 2025. Over the same period, average customer-experience and employee-experience scores fell by 0.5 points (Deloitte Digital, Customer Service Excellence 2025).

Most B2B organizations bolted AI onto a CX function still measured against the same SLA dashboard from 2018 (response time, time to first reply, tickets closed per agent, CSAT scored in a five-second survey nobody reads) and got exactly the result that operating model produces. The tooling shifted. The playbook didn't.
This is the playbook.
The legacy B2B CX function was designed around interaction throughput. The unit of work was the ticket. The unit of success was a closed ticket inside a service-level commitment. Everything in the organization (staffing models, queue routing, tier structures, manager dashboards, performance reviews) optimized for that.
It was a defensible design when headcount was the only lever for capacity. If volume doubled, you doubled the team. If volume halved, you cut. CSAT was a guardrail to make sure the throughput model didn't visibly damage the customer relationship. The function reported into operations or G&A because, structurally, it was an operations function.
The old model isn't wrong about what support has to do. It's wrong about what support is for.
The new model treats the CX function as a revenue contributor, not an interaction processor. The unit of work is no longer the ticket. The unit of work is the customer outcome: an issue resolved, retention preserved, expansion identified, onboarding completed against a milestone.

Three categories of metric move to the center.
Outcome metrics measure whether problems actually got solved, not whether tickets closed. End-to-end resolution rate, 48-hour re-contact rate, and CSAT scored specifically on resolved interactions (not merely touched ones) replace ticket throughput as the floor.
Revenue metrics measure CX's contribution to retention and expansion. Gross revenue retention by account segment. Expansion signal-to-conversation conversion. Save rate on at-risk accounts that flagged through support before they flagged through usage data. These are the numbers a CX leader brings to the QBR and a CFO can reconcile against ARR.
Operating metrics (cost per outcome, AI outcome rate by category, agent productivity) stay, but they move from headline KPIs to operating-floor diagnostics. They tell you whether the engine is running. They don't tell you whether the engine is producing value.
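The outcome layer is computable directly from ticket data. As a minimal sketch (record fields and customer names are hypothetical, and "re-contact" here simply means the customer came back about a closed issue within the window):

```python
from datetime import datetime, timedelta

# Hypothetical ticket log: (ticket_id, customer, closed_at, came_back_at or None)
tickets = [
    ("T1", "acme",    datetime(2025, 3, 1, 9),  None),
    ("T2", "acme",    datetime(2025, 3, 1, 10), datetime(2025, 3, 2, 8)),   # back within 48h
    ("T3", "globex",  datetime(2025, 3, 2, 14), None),
    ("T4", "initech", datetime(2025, 3, 3, 11), datetime(2025, 3, 6, 9)),   # back after 48h window
]

closed = len(tickets)

# A ticket counts as resolved end-to-end only if the customer did NOT
# come back about the same issue within 48 hours of closure.
recontacted_48h = sum(
    1 for _, _, closed_at, back_at in tickets
    if back_at is not None and back_at - closed_at <= timedelta(hours=48)
)

resolution_rate = (closed - recontacted_48h) / closed
recontact_rate = recontacted_48h / closed

print(f"end-to-end resolution rate: {resolution_rate:.0%}")  # 75%
print(f"48-hour re-contact rate:    {recontact_rate:.0%}")   # 25%
```

The point of the sketch is the denominator: a throughput dashboard would report four closed tickets, while the outcome layer reports that only three of them stayed closed.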
The unit-economics shift is the part most leaders intuit but few have actually re-modeled. In the old model, support cost was a near-linear function of volume. The marginal ticket cost the same as the average ticket because the marginal resolution required marginal headcount.
In the new model, support cost decouples from volume on the routine layer. The marginal automated resolution costs a small fraction of a human resolution, and the cost curve flattens with volume rather than scaling with it. The freed human capacity isn't a savings line; it's an input to a different P&L.
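The decoupling is easy to see with a toy model. All numbers below are illustrative assumptions, not benchmarks; `HUMAN_COST` and `AI_COST` are stand-ins for an organization's own fully loaded figures:

```python
HUMAN_COST = 12.00  # assumed fully loaded cost per human resolution, $
AI_COST = 0.60      # assumed marginal cost per automated resolution, $

def old_model_cost(volume: int) -> float:
    """Legacy model: every resolution consumes human capacity, so cost is linear in volume."""
    return volume * HUMAN_COST

def new_model_cost(volume: int, automation_share: float) -> float:
    """New model: a share of routine volume resolves automatically at near-zero marginal cost."""
    automated = volume * automation_share
    human = volume - automated
    return automated * AI_COST + human * HUMAN_COST

VOLUME = 10_000
for share in (0.0, 0.3, 0.6, 0.8):
    total = new_model_cost(VOLUME, share)
    print(f"automation share {share:>4.0%}: total ${total:>9,.0f}, per outcome ${total / VOLUME:.2f}")
```

At a 0% automation share the two models coincide; as the routine layer automates, blended cost per outcome compresses toward the automated floor while the all-human baseline stays pinned at `HUMAN_COST`.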

IBM Institute for Business Value's 2025 research on mature AI adopters in customer service found that organizations operating at scale (not just deploying tools, but running them inside redesigned workflows) reported 17% higher customer satisfaction and 15% higher human-agent satisfaction than peers (IBM Institute for Business Value, AI-powered productivity: Customer service).
For a B2B finance team, the model becomes additive. Cost-to-serve compresses on the routine layer. Revenue retained and revenue expanded show up as new line items attributable to the CX function. The 8% of ARR that historically went to the support and success line stops being a pure cost and starts producing measurable revenue offset.
Three role categories show up in CX functions that have actually made this transition.
The first is a single owner for the AI agent itself: coverage by intent, training and feedback loops, escalation logic, the resolution-vs-deflection metric stack, and integration health with the systems of record the AI takes action against. This role didn't exist in the old model. In the new model it is structural.
The second goes by varying titles: Strategic Account Specialist, CX Engineer, Senior Customer Advocate. These are the senior reps who used to be stuck on tier-1. In the new model they own retention conversations on at-risk accounts, structured onboarding for new customers in the first 60–90 days, and the QBR cadence on key relationships. Their KPIs are gross retention, time-to-first-value, and expansion-conversation conversion.
The third owns the signal handoff between support and the revenue org. Usage drops, integration pulls, contract-clause questions, and pricing inquiries are leading indicators sales teams almost never see in real time. A defined liaison turns those into pipeline events on a measurable cycle.
The KPI redesign that goes with these roles is the part that actually requires leadership authority. Most CX functions can't unilaterally retire response time as a headline KPI: it's baked into customer contracts, vendor scorecards, and historical comp plans.
The CFO has to sign off. The CRO has to agree to share expansion attribution. The CEO has to be willing to read a CX scorecard with revenue numbers on it.
Bain's research on B2B customer relationships makes a finding most CX teams are afraid to confront directly: customers don't want easy experiences as much as they want relationships, and the easier-experience optimization most digital CX programs run is uncorrelated with the relationship-quality drivers that actually predict retention and expansion (Bain & Company, Customers Want Relationships, Not Just Easy Experiences).
The implication is direct. A CX function running an AI tool inside an SLA culture optimizes for ease without producing relationships. It produces the Deloitte outcome: adoption up, satisfaction down. The fix isn't a different tool. It's a different operating model, and operating-model changes don't happen at the manager level.
This is why the playbook is leadership work. As one Accenture executive framed the broader pattern in late 2025: "The data and AI strategy is not a separate strategy, it is the business strategy" (Fortune, December 2025). For B2B specifically, the equivalent is that the CX strategy is not a separate strategy from the revenue strategy. It is one of the inputs.
Old model: ticket queues, SLAs, headcount-as-capacity, CX reporting into operations, success measured by interactions handled.
New model: outcomes, resolution rate, AI as the routine-layer infrastructure, humans on the relationship-driving work, CX reporting alongside revenue, success measured by retention preserved and expansion enabled.
The technology to run the new model is widely available. The companies winning with it are the ones whose leadership rebuilt the playbook around it. The companies frustrated by their AI rollouts are the ones who didn't.
The new model, in your helpdesk
Helply was built around the new operating model: outcome metrics on the dashboard, AI as routine-layer infrastructure, signal handoffs to CS and sales, and revenue intelligence reported alongside cost-to-serve. See the full operating shift →
Sources