Building the Future of Value-Based Care with Predictive AI: A Conversation with Pallav Sharda, Chief Product Officer of Carrum Health

January 22, 2026 by Aneri Patel


Pallav Sharda is the Chief Product Officer at Carrum Health, a value-based Centers of Excellence network transforming how specialty care is delivered and paid for in the United States. Carrum partners directly with top-quality providers and negotiates bundled, upfront rates to give self-insured employers and their members nationwide access to high-value surgical and cancer care. Through its employee benefit program, Carrum coordinates the entire patient journey and charges patients no out-of-pocket costs, eliminating financial barriers to accessing specialty care.

Sharda brings a rare blend of clinical training, technology expertise, and product leadership to Carrum. Previously a healthcare leader at Google Cloud and Google X (Alphabet’s Moonshot Factory), Sharda has also held product and analytics roles at GE Healthcare, Kaiser Permanente, and UnitedHealth Group.

Pallav Sharda, Chief Product Officer of Carrum Health

The Pulse: Before we dive into how AI is shaping Carrum, could you start by highlighting your main areas of responsibility as Chief Product Officer and describe how Carrum’s platform serves patients, providers, and employers?

Pallav Sharda: The Carrum platform team is a mix of a technology team and a human team, which is unique for a digital health startup, because usually these are two distinct teams. On the technology side, we have engineers, data experts, and product managers. On the care side, we have care navigators and specialists who support patients through their journeys. Carrum operates a two-sided marketplace, where we create the supply of high-quality providers on one side and then match it with the demand of employees who work at our clients. My role is to translate the underlying philosophy of value-based care, which is a theoretical construct, into a practical, scalable platform that bridges the technical and human perspectives.

The Pulse: How does the Carrum platform embed AI today?

PS: Why don’t we start by defining AI? It’s on everyone’s mind today, but AI is a 70-year-old concept. Today, we talk about two major categories: predictive AI, which is traditional machine learning, and generative AI, which has brought the word AI to the forefront in the last three years.

On the predictive side, AI is already deeply embedded in our care and product experience. We use machine learning to power what we call automated precision engagement. Carrum has the ability to figure out who we can help with 95% accuracy. We know months in advance which employee might be trending towards a healthcare journey that Carrum may be able to assist with, allowing us to support them proactively. This is the demand-side prediction. We also use AI on the supply-side through our provider quality algorithms, which evaluate outcomes and appropriateness of care to identify the top 10% of providers nationwide for our members.

With generative AI, we take a human-first approach. These models aren’t yet ready for direct use in patient care, but we are using generative AI for internal efficiency. Our technology team, for example, uses tools like GitHub Copilot to accelerate coding and development.

The Pulse: Can you describe how your AI models have evolved or improved over time as they’ve learned from actual patient outcomes?

PS: If I look back to when we first started our automated precision marketing journey and compare it to where we are now, the improvement is orders of magnitude. As more members complete a full care journey with us, we gain richer contextual information. For example, if we identify a member who could benefit from our knee replacement bundle and they ultimately receive that procedure through Carrum, we now have the full end-to-end data from their experience. That gives us the “ground truth” confirming whether our predictions were correct.

These healthcare journeys unfold over weeks or months, so as each rolling set of predictions completes the Carrum experience and we see the final outcomes, the models continuously improve. It truly behaves like compounding interest. The more knowledge the model accumulates, the stronger it gets. At this point, as I mentioned, we can predict with about 95% accuracy where someone’s healthcare trajectory is headed for the bundles Carrum supports.

The Pulse: How do you approach creating meaningful guardrails for AI usage within the organization?

PS: AI, whether predictive or generative, never makes any benefit or coverage determinations. We always maintain a privacy-first mandate: we never send patient information to public models, and no PHI is ever shared. All AI use occurs within HIPAA-compliant enterprise instances. This reflects our view of AI as an augmentation tool, not a replacement for human judgment or human connection. And while we stay aggressive on the innovation curve, we remain very conservative about risk and will absolutely never gamble with patient trust.

The Pulse: Have you seen your customers quick to adopt AI or is there hesitancy, even with the guardrails in place?

PS: Let me narrow your question to generative AI, because most of the trust concerns have emerged only since GenAI entered the scene. Every organization is on its own adoption journey, so I don’t want to speculate about where all of our customers sit on that curve. But one universal truth in healthcare is that all stakeholders must be able to trust one another. No one hands off patient care unless they’re confident the partner can deliver at least the same quality. The BAAs we sign are one example of that. To maintain trust with our customers, we take a very deliberate approach with generative AI and keep it entirely internal-facing.

For predictive AI or traditional machine learning, our employers actually appreciate that we use it. It functions like a highly precise search engine that helps solve the engagement problem. If employees constantly received generic emails about benefits, they’d tune them out. But when a helpful message arrives exactly when someone is approaching a relevant healthcare decision, it benefits both the employer and the employee. That’s possible because machine learning can identify patterns and timing well.

Instead of relying solely on open enrollment, employers encourage us to reach out to employees only when it’s contextually relevant. For example, if someone is experiencing knee pain and their PCP is about to recommend surgery, that’s the perfect moment to let them know their employer offers a no-cost benefit for that procedure. This is the value of truly knowing your customer: you can help them at the exact moment they need it.

The Pulse: As an industry thought leader and someone who was previously in the medical field, how do you think through which parts of the care journey need a human touch and which can benefit from generative AI?

PS: Coming from a clinical background, I always start with the belief that the delivery of care is deeply human. We’re never replacing that. But AI, at least with how we know it today, is fully capable of handling “bottom of the license” work. That’s true across industries, but especially in healthcare: scheduling, data entry, paperwork, anything that pulls attention away from patients.

I’ve spent much of my career building tools for clinicians, and in any workflow involving a PDF, a fax, or a phone call, probably 50% of the effort could be lifted by an out-of-the-box generative AI solution. The idea is to let humans do the human work and augment everything else.

So for example, when our care specialists are on the phone with a patient, they’re no longer typing notes or transcribing the call afterward. AI now handles all of that: recording the conversation, generating transcripts, and creating tailored summaries for the specialist, their manager, or the team lead. That frees our staff to focus entirely on the person on the line.

The benefits compound over time. When a specialist speaks with the same patient again, AI can surface a quick two-line summary, and you can actually talk to your own notes. You can understand exactly what has occurred with a patient in the past. It actually helps humans be superhuman.

Empathy is definitely very human to deliver and receive. But if you’re a care navigator doing this 25 times in a day, switching context to connect with yet another person, it’s incredibly hard. Maintaining and recalling that context is where AI can serve as a kind of copilot for bottom-of-the-license work, almost like an intern sitting next to you, always taking notes and reminding you of what just happened.

The Pulse: How do you see AI being used in the value-based care space in the next 3-5 years?

PS: What’s exciting is that, as a value-based care company, we can eventually create an almost infinite menu of value-based bundles. Today, we focus on large categories like orthopedics, joint replacement, dialysis, or cancer care. These are big umbrellas. But if we use AI to parse data and make precise predictions, we can go much deeper.

It becomes like Instagram shopping: endless verticals. You could imagine a future where we design a dialysis bundle specifically for employees under 35 who speak Spanish. That’s a tightly defined cohort with its own pricing, demand signals, and curated supply—say, Spanish-speaking nephrologists in specific regions. Instead of a telescope, we’d have a microscope to create highly targeted bundles.

The opportunity is two-dimensional because AI also improves the unit economics of each bundle. That same Spanish-speaking dialysis patient might interact with a Spanish-speaking AI agent for routine scheduling, reducing the need for manual coordination, especially when they’re traveling or switching locations.

The compounding operational leverage we’re already seeing in finance, legal, and other parts of the company could apply to every one of these micro-bundles. Marketing, outreach, and coordination could all become highly personalized and efficient, allowing us to serve millions of people without blinking an eye.

The Pulse: What do you think is the biggest gap to getting to this future? What needs to change? Is the amount of data that needs to be collected? Is it reframing an entire organization around a new way of managing patient interactions?

PS: Quite honestly, we’re in a world with too many solutions chasing too few real problems. There are countless ways we could change things, but change management is the real hurdle. I feel it myself. I still catch myself manually searching Slack or rereading documents instead of asking an AI tool to do it. If I struggle with that, imagine someone whose job is to pick up the phone, write notes, manage referrals, or check labs. How do they start working differently? Every employee, whether in digital health, health systems, or health plans, now has to rethink how they work, and not everyone will adapt quickly. That will be the biggest barrier.

It’s similar to the cloud wave. I was at Google during and after its peak adoption phase. Cloud was a generalized technology, but organizations still took a decade to embrace it. AI will follow a similar path: the technology is ready, but people and organizations will be slow to change.

The Pulse: Now, for some rapid fire questions to end the conversation! What’s one book that you recommend that’s influenced the way that you work, or you think?

PS: The Alignment Problem, by Brian Christian. He wrote it in 2020, before the GenAI wave had crested, but it blew my mind how big an issue alignment is in the underlying LLMs. That’s why we’re very conservative.

The Pulse: Do you have a favorite quote or motto that you like to live by?

PS: Be less wrong. I heard it somewhere, and I thought it was amazing.

The Pulse: Last question, if you could have dinner with any innovator in tech or healthcare, who would it be? And why?

PS: Yeah, I can think of one person in tech and one in healthcare. On the healthcare side, I’d choose Regina Herzlinger. She was the first tenured female professor at Harvard Business School and really pioneered consumer-driven healthcare, ideas that were early precursors to value-based care. On the tech side, I’m really interested in the emerging field of LLM interpretability, figuring out why models think what they think. There’s a researcher at Anthropic, Chris Olah, whose work treats models almost like organisms in neuroscience. I’d love to talk to him about that.
