
AI Innovators: Tanvi on AI Strategy, MCP, and the Future of Autonomous Agents

In our AI Innovators series, we interview leaders shaping the future of artificial intelligence to hear their perspectives on innovation, adoption, and impact. 

This week, we spoke with Tanvi Motwani, a seasoned AI leader with over 13 years of experience applying AI at scale across companies like Amazon, LinkedIn, Robinhood, Snap, and Block. From designing search and recommendation systems to launching conversational agents, Tanvi has consistently focused on how AI can deliver measurable value to customers.

Q&A with Tanvi Motwani

To start, tell us a little about your background & how you got started in AI.

Tanvi: I’ve been applying AI for about 13 years now. My focus has always been on delivering impact to customers at scale. I started at Amazon, where I worked on query understanding and ranking for Amazon Search. Later at LinkedIn, I focused on recruiter search and recommendations, then at Robinhood on stock ticker search and financial news feeds, and at Snap where we worked on both search and the Gen AI chatbot. 

Most recently, I’ve been at Block, applying AI across products like Square, Cash App, Tidal, and Afterpay.


“AI has always been about real-world impact for me — not just the technology, but how it can change experiences for millions of people.”

— Tanvi Motwani, Director of AI

What do you like to do outside of work?

Tanvi: I have a toddler, and most of my time is spent with her — playing, teaching, and taking her to classes. Outside of that, I love reading investment reports, especially around companies adopting AI. I also enjoy swimming, solving Rubik’s Cubes, and playing chess.

What first sparked your interest in AI?

Tanvi: Back in undergrad, I came across the book Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig. Concepts like the A* algorithm and NLP tree parsing fascinated me. Around the same time, Google was emerging as a search engine, and I loved comparing it to Ask Jeeves and Yahoo to see how each handled semantics and ranking. I was hooked.

MCP (Model Context Protocol) is becoming a new standard. What’s your perspective?

Tanvi: I see MCP as a major step forward in standardizing how LLMs interact with tools. At Snap, we had to build custom integrations to connect LLMs with Snap Places, Lenses, and video features. MCP essentially makes that work reusable across contexts; it creates a consistent way to handle permissions, manage prompts, and orchestrate workflows.

What excites me is that MCP doesn’t just simplify engineering; it makes LLM-powered systems more contextual and personalized. Once prompts, tools, and workflows are bundled in a structured way, everything “just works.” That shift from ad-hoc connectors to a standardized protocol is a turning point for the ecosystem.
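The standardization Tanvi describes is easy to see at the wire level: MCP messages are JSON-RPC 2.0, with methods such as `tools/list` and `tools/call` defined by the spec. The sketch below is a toy dispatcher, not a real MCP server, and the `lookup_place` tool is a hypothetical stand-in for something like a Snap Places integration:

```python
import json

# Toy illustration of MCP-style tool invocation. MCP uses JSON-RPC 2.0;
# "tools/list" and "tools/call" are method names from the spec.
# The "lookup_place" tool itself is a hypothetical example.

TOOLS = {
    "lookup_place": lambda args: {"name": args["query"], "rating": 4.5},
}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC request string to a registered tool."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif req["method"] == "tools/call":
        params = req["params"]
        result = {"content": TOOLS[params["name"]](params["arguments"])}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "unknown method"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "lookup_place", "arguments": {"query": "coffee"}},
})
print(handle_request(request))
```

A real server would also handle capability negotiation, prompts, and resources; official SDKs (e.g. the Python `mcp` package) provide that plumbing, which is exactly the reusability Tanvi points to.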


“MCP makes LLMs more magical. Once the right prompts, tools, and workflows are bundled together, it just works.”

— Tanvi Motwani, Director of AI

Do you see MCP leveling the playing field between startups and large enterprises?

Tanvi: Yes. AI overall has leveled the playing field. Before, the big differentiator was computing power. If you had thousands of GPUs, you had the advantage. Today, with LLMs accessible via APIs and fine-tuning at lower cost, startups can compete with fewer resources. MCP accelerates this shift by giving everyone a standardized, open way to connect tools and workflows.

How do you define AI strategy at the companies you’ve worked at?

Tanvi: I always start with the customer problem. Too many companies begin with the technology — “we have an LLM, how can we use it?” Instead, you need to ask:

  • What’s the metric we want to move? (Cost reduction, revenue, growth, customer satisfaction)
  • What’s the right AI tool for that metric? (LLM, regression, boosted trees, etc.)
  • How do we scale the solution across customers?

That’s my three-step philosophy: start with the problem, then choose the technology, then execute and scale it.

What about the data architecture to support AI at scale?

Tanvi: For application companies, it’s about building ML platform teams that handle feature stores, training, inference, and MLOps. Most large orgs now also build an LLM platform layer so that individual teams don’t call OpenAI or Anthropic directly; everything goes through a governance layer. For foundational model companies, the challenge is compute. You need infrastructure that can handle training and inference at global scale, often in partnership with hyperscalers and GPU cloud providers like CoreWeave.
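As a toy illustration of that governance layer, every model call can be funneled through one internal gateway that enforces an approved-model list and logs usage. All the names here (the gateway, the model list, the stubbed provider) are hypothetical, not any company's actual API:

```python
import logging
from typing import Callable

# Hypothetical sketch of an internal LLM gateway: teams call this instead
# of a provider API directly, so approval policy and auditing live in one place.

logging.basicConfig(level=logging.INFO)

ALLOWED_MODELS = {"approved-small", "approved-large"}

def make_gateway(provider_call: Callable[[str, str], str]):
    def complete(team: str, model: str, prompt: str) -> str:
        # Enforce the allow-list before any request leaves the company.
        if model not in ALLOWED_MODELS:
            raise PermissionError(f"{model} is not an approved model")
        # Audit log: who called which model, and how large the prompt was.
        logging.info("team=%s model=%s prompt_chars=%d", team, model, len(prompt))
        return provider_call(model, prompt)
    return complete

# Stubbed provider for the example; in production this would wrap a real API client.
complete = make_gateway(lambda model, prompt: f"[{model}] answer to: {prompt}")
print(complete("search-team", "approved-small", "Summarize ticket 123"))
```

The same choke point is a natural place to add rate limits, cost attribution, and PII redaction later, without touching every calling team.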

There’s been talk about GPU supply and data center capacity not keeping up with demand. Do you see that as a critical issue?

Tanvi: Yes, especially as models get larger and more reasoning-intensive. Foundational LLMs with trillions of parameters require massive GPU capacity. Smaller specialized LMs (SLMs) can get by with fewer GPUs, but for advanced reasoning, compute is still the bottleneck.

What AI advancements excite you most in the next few years?

Tanvi: Three stand out:

  • Specialized Language Models (SLMs): Smaller, fine-tuned models for domain-specific needs.
  • Agentic AI: Autonomous agents that not only reason but also complete tasks with or without human-in-the-loop.
  • Personal AI Assistants: Truly context-aware assistants that know your documents, email, and work style, and can act on your behalf.

If I had to make one bet, it’s on autonomous agents. They’ll move beyond copilots into fully owned workflows — like a marketing agent that runs campaigns for you.


“My big bet is on autonomous agents. They’ll become real digital teammates that handle entire workflows.”

— Tanvi Motwani, Director of AI

What role do you see for synthetic data in enterprise AI?

Tanvi: Synthetic data is going to remain critical. Real-world data is golden, but it’s hard to use at scale without privacy risks. We’re seeing more approaches like using large open-source LLMs (Llama, Qwen) to generate training data for smaller models. Distillation techniques, as highlighted in the DeepSeek announcement, are also key.
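A toy example of the distillation idea Tanvi mentions: the student model is trained to match the teacher's softened output distribution rather than hard labels. The logits below are made up, and a real pipeline would use a training framework, but the loss is the core of the technique:

```python
import math

# Toy illustration of knowledge distillation: the student minimizes
# cross-entropy against the teacher's temperature-softened distribution.

def softmax(logits, temperature=1.0):
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's soft targets."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

teacher = [4.0, 1.0, 0.5]        # large model's logits for one example (made up)
close_student = [3.8, 1.1, 0.4]  # student that mimics the teacher
far_student = [0.2, 3.0, 2.5]    # student that disagrees with the teacher

# A student that mimics the teacher gets a lower loss.
print(distillation_loss(teacher, close_student) <
      distillation_loss(teacher, far_student))
```

In the synthetic-data setting, the large open model plays the teacher role: its generated outputs become the soft or hard targets the smaller model trains on.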

How are large enterprises approaching AI adoption?

Tanvi: It’s a mix. On one side, you see developers experimenting bottom-up, calling OpenAI directly and proving out use cases. Some of those experiments make it into production. On the other side, you see caution — employees worry AI might replace jobs. Successful adoption requires cultural work:

  • Creating psychological safety so employees see AI as an augmentation, not a replacement
  • Clear governance around which tools are licensed and approved
  • Education to demystify AI and show its real benefits

What are your thoughts on workplace productivity tools powered by AI?

Tanvi: Tools like GoSearch and GoLinks are redefining productivity. Anything that makes it easy to find information, augments workflows, and makes you more productive is a huge asset to companies large and small. In development, I use Copilot-style tools such as Cursor and Claude Code, which are game-changing.

But the key is context: AI must understand what you’re working on in real time. I always remind younger engineers to review AI-generated code — it’s an augmentation, not a replacement.

Let’s do a few rapid-fire questions. What’s the biggest myth you hear about AI?

Tanvi: That AI has feelings or emotions. It doesn’t. It’s very good at memorization and increasingly at reasoning, but emotions and consciousness are far away.

If you weren’t working in AI, what would you be doing?

Tanvi: Investing. I love analyzing earnings reports and studying how companies transform themselves with AI.

Where do you find your AI inspiration?

Tanvi: Mostly from research papers, blogs, and AI influencers. I follow people like Andrew Ng, Andrej Karpathy, and Hugging Face co-founder Thomas Wolf. They’re great at making cutting-edge work accessible.

Closing Thoughts: GoSearch and the Future of Enterprise AI

Tanvi’s perspective reinforces a critical point: AI’s value comes from solving real problems and driving measurable outcomes. At GoSearch, we’ve built our enterprise AI search platform around those same principles. By unifying knowledge across tools and delivering fast, secure, and context-aware answers, GoSearch helps organizations unlock productivity and empower teams — not replace them.

Just as Tanvi highlighted, the most impactful AI is tailored to customer needs, seamlessly integrated into daily work, and focused on measurable results. That’s the future we’re building toward with GoSearch.
