VizCare

A shared operational platform that aligns data, decisions, and workflows across teams

One system to bring order to health insurance ops

Introduction

One reliable system for health insurance operations

Health insurance operations teams were making high-impact operational decisions across enrollment, commissions, and support workflows using the same data, but through disconnected systems.


When records fell out of sync, teams reached different conclusions, slowing down operations and increasing downstream risk.




As the lead designer, I designed a centralized operational dashboard that surfaced real-time inconsistencies and established a shared foundation for consistent decision-making across teams.

Timeline

2023 June - 2023 Dec

Team

Me (Lead Product Designer)

Sanaa Ayesha (Junior Product Designer)

Nishant Shrivastava (Product Manager)

Aswhin Kumaar (Business Analyst)

Sana Ru (Lead Frontend Developer)

Shardha Sapra (Senior Frontend Developer)

Deliverables

Workflow Mapping | Research

System Integration Design

QA | Validation

Design System

End-to-End Product Delivery

Background

US health insurance still relies on a patchwork of outdated tools

Insurance operations were built across disconnected systems for benefits, commissions, enrollment, and claims, most of which were never designed to work together.

Because these tools were already deeply embedded in day-to-day operations, teams couldn't replace them. They had to keep operations running while reconciling mismatched data and working around system gaps.


Vizcare was created as a shared operational foundation, bringing visibility and consistency across existing systems without disrupting them.

Challenges

Disconnection, not task complexity, drove operational risk

The real challenge wasn't slow execution. It was the risk of bad decisions caused by disconnected systems. Teams had to cross-check information across tools that didn't always agree.


Small inconsistencies often went unnoticed until they caused downstream problems.

Split Systems

Benefits, commissions, and claims lived in separate tools, leading teams to interpret the same data differently

Manual Processes

Agents manually verified information between systems, increasing rework and the risk of error

No Unified View

There was no single place to identify what was out of sync, delaying detection and response

Goals

Creating a shared operational baseline that teams could trust

The internal tools didn't fail because they were missing features. They failed because teams had no consistent way to understand what changed, what was reliable, and what needed action.


I defined three system-level goals to reduce operational risk and ensure consistent decisions across teams.

01_ Show What Changed

Make updates and inconsistencies visible as they happen, so issues don't stay hidden

02_ Stay In Flow

Provide the correct information in context without forcing agents to pause and cross-check systems

03_ Work from One Record

Ensure everyone relies on the same source of truth to avoid conflicting decisions

System Snapshot

Understanding failures involved tracking how problems spread across systems

Failures rarely originated in a single spot. They spread because systems were updated at different times and followed different rules. I mapped the operational chain from start to finish, not to show off complexity, but to identify where teams lost visibility and why small mismatches became significant risks.

Operational System Workflows


User Research

Where teams read the same data differently

I spoke with 10 internal operators across membership operations, claim processing, and customer support teams, each with different decision-making responsibilities for the same record (2-3 per role).

By reviewing real support calls and auditing sync logs with product and engineering, we traced how each role relied on different signals to decide whether a case was "moving forward."

User Type 1: Membership Ops

Confirm enrollment status

Membership ops relied on surface status to confirm enrollment, with no signal for what was still pending downstream.

"It shows active on my screen. Why wouldn't it go through?"

User Type 2: Claims Analysts

Track and resolve batch errors

Claims analysts required complete data to proceed, but had no visibility into whether missing fields were still syncing or permanently blocked.

"I never know who to ask when info is missing or changing."

User Type 3: Call Center Agents

Handle calls/chats from clinics and patients

Agents were left to choose which system to trust, often escalating not because rules were unclear, but because ownership was.

"I just tell them I'll escalate. There's nothing else I can do."

So, instead of fixing screens in isolation, I designed features around how decisions actually broke down.

Feature Prioritization

From repeated workarounds to intentional design choices

I grouped recurring decision breakdowns observed across teams and traced each one to a targeted design response.

Insights-to-Features Mapping


Information Architecture

Aligning how teams interpret the same system

To address these breakdowns, I needed more than individual feature fixes. Teams were operating on the same systems, but with different assumptions about ownership, scope, and flow.


Before moving into detailed design, I worked closely with business analysts, engineers, and product managers to map how each platform actually supported decisions across teams.


This revealed overlaps, clarified boundaries, and established a shared structure the entire product could be built on.

A Shared Map System


With a shared system structure in place, I could finally create interfaces that promoted consistent decisions.


Solutions

Translating research insights into three focused themes

I structured the final solutions into three themes.

Each theme turns research insights into concrete workflows that reduce guesswork across enrollment, sync, and support operations.

01_ Step-Based Enrollment Flow

Enrollment issues did stem from complex rules in some cases. But more often, breakdowns occurred because decisions were made before the data was fully synced.


I redesigned the flow to control when decisions could happen, separating life events, coverage, and dependents into clear stages so users weren't forced to guess or backtrack when the system lagged behind.

Enrollment Portal


Life events, coverage changes, and dependents are handled in separate stages to prevent incomplete or premature updates.
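The staged gating described above can be reduced to a simple rule: a stage becomes editable only when every earlier stage is both complete and fully synced. A minimal sketch of that idea; the stage names and the `editable_stages` helper are illustrative assumptions, not the production logic:

```python
from enum import Enum, auto

class Stage(Enum):
    LIFE_EVENT = auto()
    COVERAGE = auto()
    DEPENDENTS = auto()
    REVIEW = auto()

# Fixed order in which the enrollment stages must be completed.
STAGE_ORDER = [Stage.LIFE_EVENT, Stage.COVERAGE, Stage.DEPENDENTS, Stage.REVIEW]

def editable_stages(completed: set, synced: set) -> list:
    """Return the stages a user may act on right now.

    Each stage unlocks the next one only once it is both completed
    and its data has finished syncing, so decisions cannot run ahead
    of the system's actual state.
    """
    unlocked = []
    for stage in STAGE_ORDER:
        unlocked.append(stage)
        # Stop unlocking as soon as a stage is not both complete and synced.
        if stage not in completed or stage not in synced:
            break
    return unlocked

# With the life-event stage completed but still syncing, only that
# stage stays editable, which is the "wait for sync" behavior above.
```

This gating is what prevents the premature updates the flow was redesigned to avoid: a lagging sync simply keeps later stages locked instead of letting users guess.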

Outcome: Staged enrollment reduced re-entry, incomplete submissions, and downstream errors


Inline tooltips show required documents only when they're relevant, reducing back-and-forth and preventing unnecessary rework.

02_ Centralized Sync Dashboard

Failures appeared in various parts of the tools, but there was no single way to trace what happened. To address this, I created a centralized sync dashboard that made the entire transaction process visible from start to finish, allowing teams to track where data moved, where it got stuck, and why.

Transaction Dashboard


tracing failures to their origin


All sync alerts are combined into one view, with the initial failure clearly highlighted.

The team can begin from the point of breakdown and trace how the issue spread across systems, rather than cross-checking logs tool by tool.
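One way to picture the dashboard's "start from the point of breakdown" behavior: collect alerts from every system, group them by record, and order each group chronologically so the earliest failure, the likely origin, appears first. A minimal sketch with illustrative field names (not the actual schema):

```python
from dataclasses import dataclass

@dataclass
class SyncAlert:
    record_id: str   # the member/transaction record affected
    system: str      # which tool raised the alert
    timestamp: int   # when the failure was observed
    message: str     # short human-readable description

def root_cause_first(alerts):
    """Group alerts by record, ordered chronologically.

    Sorting before grouping means each record's list starts with its
    earliest failure, so a team can begin at the point of breakdown
    and follow how the issue spread across systems.
    """
    grouped = {}
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        grouped.setdefault(alert.record_id, []).append(alert)
    return grouped
```

The design choice mirrored in this sketch is that origin is inferred from ordering, not from any one tool's logs, which is what replaced the old tool-by-tool cross-checking.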

*Detailed Views | Transaction Dashboard


Error Log Table


verifying what failed


I designed the error log to make failures explainable, not just visible. By tying each error to a specific record and handoff, agents could grasp why something failed and determine the next steps without defaulting to escalation.



Load from Exchange


spotting mismatches early


This view was designed to help teams judge whether data was moving or actually blocked. Showing exchange-level updates in context reduced guesswork and prevented premature fixes based on incomplete signals.



Flow & Routing Map


following issues across systems


I designed this map to understand where progress signals diverged across the system.

By tracking a single transaction through each handoff, I could identify where progress advanced, where it truly stalled, and how small mismatches escalated into real risks.

Outcome: Error tracing dropped from 30-40 minutes to under 16, but more importantly, agents stopped escalating issues they could confidently resolve


03_ Agent-Ready Membership & Commission Data

Membership and commission data technically existed across systems. But agents struggled to make confident decisions during live support because there was no clearly owned, authoritative view they could rely on in the moment.


I designed a consolidated data model that brought membership status, coverage, and commission records into a single, trusted source.

This allowed agents to verify information quickly, resolve disputes without second-guessing, and stay focused on supporting customers.
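A rough sketch of what such a consolidated record might look like: per-system rows joined on a shared member key into one view an agent can trust. The `MemberView` shape and its field names are hypothetical illustrations, not the actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class MemberView:
    member_id: str
    enrollment_status: str                      # e.g. "active", "pending-sync"
    coverage: list = field(default_factory=list)     # coverage rows for this member
    commissions: list = field(default_factory=list)  # commission rows for this member

def consolidate(membership, coverage_rows, commission_rows):
    """Join per-system rows into one record keyed by member_id.

    Instead of agents cross-checking three tools, the join happens
    once and every team reads the same resulting view.
    """
    view = MemberView(member_id=membership["member_id"],
                      enrollment_status=membership["status"])
    view.coverage = [c for c in coverage_rows if c["member_id"] == view.member_id]
    view.commissions = [c for c in commission_rows if c["member_id"] == view.member_id]
    return view
```

The point of the sketch is the ownership model: one join, one record, so disputes are resolved against a single source rather than reconciled ad hoc per call.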

Part A:

Membership


a unified, trusted view of member status


I organized membership data into scannable sections so agents could confirm coverage and enrollment status without cross-checking other tools. This removed the need to chase updates across systems and gave agents confidence in what they saw.

Commission


transparent payouts with clear ownership


Commission data was structured by agent, date, and status to identify and resolve discrepancies without manual back-and-forth, reducing disputes and shortening resolution time.

Part B:

Contact Center


integrated view of agent tools


During live calls, agents frequently wasted time looking for answers while trying to maintain the conversation.

I integrated AI directly into the contact center to provide relevant context and previous answers in real time. The focus was on enhancing the efficiency and accuracy of the live call workflow.

Outcome: Agents resolved issues with fewer escalations, and average handling time and disputes both dropped.


+Detailed Views

AI Contextual Suggestions

I designed contextual suggestions to appear when agents needed them, so common questions could be answered without disrupting conversation flow. This helped agents stay focused on the caller.

Duplicate Question Detection

When repeat questions came up, agents were shown prior answers so they could confidently answer without retracing past steps, reducing backtracking and improving consistency during calls.
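Duplicate-question detection of this kind can be approximated with simple token overlap between the incoming question and past ones; the real system may well use something more sophisticated. A minimal sketch, where the threshold value and helper names are assumptions:

```python
import re

def _tokens(text: str) -> set:
    """Lowercase and split a question into alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def find_prior_answer(question: str, history: list, threshold: float = 0.6):
    """Return the stored answer for the most similar past question.

    history is a list of (question, answer) pairs. Similarity is the
    Jaccard overlap of token sets; anything below the threshold is
    treated as a new question and returns None.
    """
    q = _tokens(question)
    best, best_score = None, 0.0
    for past_q, answer in history:
        p = _tokens(past_q)
        if not q or not p:
            continue
        score = len(q & p) / len(q | p)   # Jaccard similarity
        if score >= threshold and score > best_score:
            best, best_score = answer, score
    return best
```

Even this crude matcher captures the interaction goal: surface the prior answer automatically so the agent never has to retrace earlier steps mid-call.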

Outcome

Faster decisions, fewer escalations with clear trust signals

With clear ownership, traceable sync states, and AI-assisted context, agents spent less time verifying data and more time making confident decisions and supporting customers.

Most workflows required little guidance, even when handling real, anonymized production cases.

62%

Faster error resolution

48%

Reduction in repeated data entry

1.3X

AI-assisted feature adoption

4.7

Average clarity rating

What still needs work

Two gaps became clear during testing:

  • some users hesitated at the very first steps, unsure where to begin

  • AI handled common flows well, but felt generic in rare edge cases

Next Steps

If I were to take this further, I'd focus on three things for better reassurance:

Make starting points more explicit for first-time agents


Early tests showed that while experienced users moved quickly, new agents hesitated at the entry point. The next step would be adding lightweight guided starts based on role and task, rather than introducing a full onboarding flow.


Train AI suggestions on real escalation cases, not ideal flows


The AI handled common scenarios well but struggled with edge cases, leading to manual escalation. The next iteration would focus on feeding real escalation transcripts back into the system, so suggestions reflect how issues actually unfold under pressure.


Pressure-test trust under time constraints


Most validation happened in controlled environments. A next step would be testing the system during live support windows, where agents must decide quickly which data to trust, to see where confidence still breaks down.
