How I led my team in evolving our $1.3B platform: augmenting our out-of-the-box AI capabilities by providing the trusted foundation customers need to customize and extend the platform with their own agentic solutions.
"In times that feel like a relentless tempest, my job as a leader hasn't been to find calmer seas but to help my team build better ships."
2023: Having built a $1.3B-ARR platform used by 90% of the Fortune 500, we faced a strategic fork in the road with GenAI.
Collaboration: Learned that customers want AI as teammates, not replacements, and that they demand determinism for serious use.
Transparency: Established absolute transparency—the platform must explain every auto-generated snippet.
Trust: Realized that "Trust is a feature." Built deep audit logs into our SRE agent. This foundational principle underpins our future platform.
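The audit-log principle can be sketched as a thin wrapper that records every agent action before and after it executes. This is a minimal illustration with hypothetical names, not the actual SRE agent's implementation:

```python
import json
import time

# Hypothetical sketch of "trust as a feature": every tool the agent invokes
# is recorded in an append-only audit trail, both the call and its result.
class AuditedAgent:
    def __init__(self, tools):
        self.tools = tools
        self.audit_log: list[str] = []  # in production: durable, tamper-evident storage

    def invoke(self, tool: str, arg: str) -> str:
        self._record({"event": "call", "tool": tool, "arg": arg})
        result = self.tools[tool](arg)
        self._record({"event": "result", "tool": tool, "result": result})
        return result

    def _record(self, entry: dict) -> None:
        entry["ts"] = time.time()       # timestamp every entry
        self.audit_log.append(json.dumps(entry))
```

The point of the design is that auditing is not optional middleware: the agent cannot act without leaving a record.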
Transitioning from a "know-it-all" to a "learn-it-all" culture required leadership trade-offs and new operational cadences.
Shifted from semesterly to quarterly to weekly planning, driven by impact-effort matrices. Freed PMs from daily revenue obsession to focus on learning, retention, and ROI.
Mandated AI-summarized friction logs and started every offsite and all-hands with a customer narrative.
Deprecated commodity legacy services to free up engineering capacity, shifting "productive capital" away from maintaining the past to building frontier AI capabilities.
Broke down silos by empowering small, cross-functional squads. Like the agents they built, they had a clear mission and guardrails, but full agency on how to execute.
Customers realized the ROI of our out-of-the-box agents and wanted to extend them. We evolved our architecture to allow deep customization, becoming the definitive "app server for AI."
Agentic process automation produces immediate ROI. Built the Agent Loop into Logic Apps, enabling 120K+ enterprises to run agents at production scale as part of their processes.
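The plan-act-observe pattern behind such agents can be sketched in a few lines. The names here are hypothetical, and the real Agent Loop in Logic Apps is a managed workflow capability, not this code:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional, Tuple

@dataclass
class AgentLoop:
    tools: dict                                   # tool name -> callable(arg) -> str
    policy: Callable[[list], Optional[Tuple[str, str]]]  # trace -> (tool, arg) or None to stop
    max_steps: int = 5                            # hard step cap as a guardrail
    trace: list = field(default_factory=list)

    def run(self, goal: str) -> list:
        self.trace.append(f"goal: {goal}")
        for _ in range(self.max_steps):
            decision = self.policy(self.trace)    # "plan": a model would decide here
            if decision is None:
                break                             # agent judges the goal met
            tool, arg = decision
            result = self.tools[tool](arg)        # "act": invoke a connector/tool
            self.trace.append(f"{tool}({arg}) -> {result}")  # "observe": feed result back
        return self.trace
```

The step cap and the explicit trace are the two properties enterprises care about: bounded execution and a reviewable record of every action.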
AI governance is the unlock for agentic adoption at scale. First to market with the AI Gateway and Foundry Control Plane, enabling Agent Identity, Visibility, and Control.
We risked our 6-year Gartner lead and our $600M API Management business, pivoting its 80K-customer infrastructure to ship the AI Gateway and Foundry Control Plane first.
Agents require durable memory, semantic caching, and fast vector retrieval to function effectively in real-world scenarios.
Drawing from capabilities like Knowledge IQ in Microsoft Foundry, we focused on intelligently indexing and measuring retrieval quality so agents remain securely grounded in enterprise context.
In deprecating commodity services, we boldly redirected our $400M-ARR Azure Managed Redis business to focus entirely on agentic caching and vector needs, partnering with Redis Inc. to move faster.
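Semantic caching, the idea underpinning this block, can be illustrated with a toy sketch: reuse a prior answer when a new query's embedding lands close enough to a cached one. The `embed` function below is a deliberately crude stand-in for a real embedding model:

```python
import math

def embed(text: str) -> list:
    # Toy bag-of-characters embedding, purely for illustration.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold
        self.entries: list = []          # (embedding, answer) pairs

    def get(self, query: str):
        qv = embed(query)
        best = max(self.entries, key=lambda e: cosine(qv, e[0]), default=None)
        if best and cosine(qv, best[0]) >= self.threshold:
            return best[1]               # cache hit: skip the expensive model call
        return None

    def put(self, query: str, answer: str) -> None:
        self.entries.append((embed(query), answer))
```

Unlike an exact-match cache, this returns a hit for paraphrases of a cached query, which is what makes it useful in front of an LLM.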
To tame tool sprawl and reduce hallucinations, we transformed 1400+ connectors into a private catalog of MCP servers with dynamic composition and secure access controls.
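The private-catalog idea can be sketched as per-caller tool composition: access controls decide which tools an agent sees, shrinking the list the model must reason over. Names and shapes here are illustrative, not the MCP wire format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tool:
    name: str
    description: str
    required_scope: str     # the scope a caller must hold to see this tool

class PrivateCatalog:
    def __init__(self):
        self._tools: list = []

    def register(self, tool: Tool) -> None:
        self._tools.append(tool)

    def list_tools(self, scopes: set) -> list:
        # Dynamic composition: each caller gets only the tools its scopes
        # allow, so the model never reasons over (or hallucinates against)
        # tools it cannot legitimately invoke.
        return [
            {"name": t.name, "description": t.description}
            for t in self._tools
            if t.required_scope in scopes
        ]
```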
Ecosystem Contribution: We didn't just use MCP; we engaged the community and drove the addition of Auth protocols to the standard.
Evolving the differentiated ecosystem program I created for Azure, we launched the Foundry-native partner services program.
We deeply integrated E2E vertical partners (Kore.ai), MLOps leaders (Weights & Biases, Arize, Mistral) and core infrastructure (Vast Data) directly into the platform fabric.
Compute sandboxes for agent execution created a counter-intuitively massive flywheel, driving aggregate platform usage via self-optimizing RALPH loops.
Resolved the tension between probabilistic model choice and deterministic policy by embedding tools within Gateway constraints for enterprise safety.
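The pattern can be sketched as a deterministic policy gate in front of probabilistic tool selection: the model may propose any tool, but nothing executes until the gate approves. This is a hypothetical `Gateway` class, not the product's API:

```python
class GatewayPolicyError(Exception):
    """Raised when a proposed tool call violates deterministic policy."""

class Gateway:
    def __init__(self, allowed_tools: set, max_arg_len: int = 256):
        self.allowed_tools = allowed_tools   # explicit allow list
        self.max_arg_len = max_arg_len       # example argument constraint

    def enforce(self, tool: str, arg: str) -> None:
        # Deterministic checks run regardless of how confidently the
        # model proposed the call.
        if tool not in self.allowed_tools:
            raise GatewayPolicyError(f"tool {tool!r} not permitted")
        if len(arg) > self.max_arg_len:
            raise GatewayPolicyError("argument exceeds policy limit")
```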
Beyond chat: agents return render instructions via MCP to inject UI directly into conversation flows, adopting the OpenAI/MCP App standards.
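One plausible shape for such a tool result carries a render hint alongside plain text, so the host can fall back to text when it cannot draw the component. Field names here are illustrative, not the exact OpenAI/MCP wire format:

```python
# Hypothetical tool result: a text block for plain-chat hosts plus a
# render instruction for hosts that can inject UI into the conversation.
def order_status_result(order_id: str, status: str) -> dict:
    return {
        "content": [
            {"type": "text", "text": f"Order {order_id} is {status}."},
            {
                "type": "render",             # host-interpreted UI hint
                "component": "status-card",   # hypothetical component id
                "props": {"orderId": order_id, "status": status},
            },
        ]
    }
```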
Devs no longer start from scratch; they extend out-of-the-box agents with custom intent. We ship knowledge and skills that customers can seamlessly adapt to their workflows.
Transitioning evals from an internal mandate to a first-class customer product: by authoring evals, customers explicitly define what they want their extended agents to do and establish their own guardrails.
We leverage LLMs to critique LLMs; entire companies are built on making evals easy, and we bake this directly into DevOps and production.
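An eval harness of this kind can be sketched in a few lines: customer-authored cases pair a prompt with an intent, a judge scores each answer, and the suite gates on a pass rate the same way unit tests gate a build. The `judge` parameter here is a stub standing in for an LLM-as-judge call:

```python
from typing import Callable

def run_evals(
    cases: list,                               # (prompt, intent) pairs authored by the customer
    agent: Callable[[str], str],               # the extended agent under test
    judge: Callable[[str, str, str], float],   # (prompt, intent, answer) -> score in [0, 1]
    pass_threshold: float = 0.7,
) -> dict:
    # Score every case, then report the fraction that clears the bar.
    scores = [judge(prompt, intent, agent(prompt)) for prompt, intent in cases]
    passed = sum(s >= pass_threshold for s in scores)
    return {"pass_rate": passed / len(cases), "scores": scores}
```

Wiring this into CI means an extended agent cannot ship, or stay deployed, while its authored evals regress.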