
Unified UX: Designing a Single Experience for an AI Support Suite

Unifying AI Escalation, Case Quality, Agent Assistance, and Knowledge into One Coherent System

Overview

As AI capabilities expanded across the customer support lifecycle, multiple intelligent agents were introduced to solve distinct problems—real-time agent assistance, escalation prediction, case quality auditing, and knowledge discovery.

While each capability delivered value independently, the overall experience began to fragment. Users were interacting with multiple AI tools that behaved differently, required separate learning, and introduced operational and security overhead.

This case study focuses on how I led the design of a unified UX instance, bringing together:

  • AI Agent Partner (Agent Helper)

  • AI Escalation Manager

  • AI Case Quality Auditor

  • AI Knowledge Agent

into one coherent, scalable experience.

The Core Problem

The problem was not visual inconsistency—it was systemic fragmentation.

As multiple AI capabilities coexisted, the platform suffered from:

  • Different mental models for similar AI actions

  • Inconsistent workflows for agents, admins, and supervisors

  • Repeated learning effort as new AI capabilities were introduced

  • Increased cognitive load during live case handling

  • Inconsistent transparency into AI decisions, affecting trust

  • Longer onboarding and demo cycles

Beyond UX, this fragmentation also created business and operational risks:

  • Each AI capability required a separate Salesforce package upload

  • Salesforce security reviews were costly and time-consuming

  • Customers were hesitant to install multiple packages due to security concerns

  • Release velocity slowed due to repeated compliance checks

At scale, the platform risked becoming powerful but expensive, complex, and hard to trust.

My Role

I led both platform-level experience strategy and feature-level execution for the unified AI instance, operating as a Product Manager and UX Lead across multiple AI agents.

At any given time, I was responsible for 5 AI agents spanning 3 core product areas, which required balancing long-term system coherence with immediate feature delivery.

My responsibilities included:

  • Defining a shared experience vision across AI Agent Partner, Escalation Manager, Case Quality Auditor, and Knowledge Agent

  • Designing and executing a unified interaction model for AI behavior across live cases, post-resolution workflows, and admin configuration

  • Owning feature-level UX and product decisions, from problem framing to final delivery

  • Establishing experience principles to maintain consistency while shipping at speed

  • Collaborating deeply with product, engineering, security, QA, and GTM teams

  • Governing craft quality while actively designing, reviewing, and iterating on features

This work required me to constantly switch between:

  • Platform thinking (coherence, scalability, cost, security)

  • Feature execution (shipping real workflows, edge cases, and states)

In short, I wasn’t defining the system from a distance; I was inside it, building and scaling it at the same time.

Reframing the Problem

I reframed the challenge from:

“Each AI capability needs a better UX”

to:

“The AI suite must behave like one intelligent system, not four separate tools.”

This shift unlocked alignment across teams and allowed us to think in terms of shared behavior, not isolated features.
