Interview with Vin Mitty, PhD, Sr. Director of Data Science and AI, LegalShield
This interview is with Vin Mitty, PhD, Sr. Director of Data Science and AI, LegalShield.
To start, how do you introduce yourself and the problems you like to solve at the intersection of data, AI, and product?
I started out as an entrepreneur, building websites and ERP systems for small businesses and non-profits in Bangalore, India, in 2006. It was during the middle of a tech hype cycle, when small businesses in India knew they wanted to be part of the web but didn’t know how. I believe we’re seeing a similar pattern with AI today: every company and small business wants an AI strategy or product, even before defining a clear outcome.
So, the problems I solve are the unsexy ones. In my research and in my role as a Data/AI leader, I focus on AI adoption, clarity, and building trust.
I keep returning to two core questions:
- What problem are we solving, and what technology is right for it?
- And then, is it working?
If we can answer these two questions well, we’re on the right track.
What single inflection point most shaped your path from data/analytics into AI leadership?
I think within companies, the transition from data/analytics to AI is a natural progression rather than a single inflection point. Without a solid foundation in data, many AI efforts might fall short.
I've seen the progression from analytics to AI/ML happen in some of my previous roles, and it's been the same here at LegalShield. When I joined, we were running reports off Excel sheets. So the first step was getting analysts out of report-running mode, saving time, and getting them to think about patterns in the data and their business impact.
Then we built the plumbing: a real data engineering setup pulling in customer touchpoints across internal systems and about two dozen external platforms like GA4, Google Ads, email tools, customer care, surveys, etc.
Once we could trust the data and understand the historical story, predictive modeling (AI/ML) was the next logical step: robust forecasting, churn, lifetime value, and so on.
Then ChatGPT hit, and things shifted culturally. Suddenly, teams were coming to us with ideas instead of us trying to convince everyone that models could help.
The biggest inflection point or epiphany I’ve carried through my career came when I ran my startup back in India. I realized that the most important thing you can do is create value—whether that’s a lot of value for a small group of people or a small amount of value for a large group (do both and you’re a billionaire!). That’s still my filter for everything—whether I’m running a business, consulting, or being a corporate leader.
Grounded in your human-first AI philosophy, what principle do you rely on when deciding where AI should augment people versus step aside?
I try to balance what each side is genuinely good at. AI is great at speed and at making sense of massive amounts of information. Humans bring context, judgment, empathy, and trust.
So, let AI handle tasks like triage, search, summarization, first drafts, and pulling context together. People should remain responsible for judgment calls, trade-offs, or anything that affects a customer’s outcome.
In my mind, AI should elevate everyone's work, not replace people.
Building on that, what is one project where LLMs turned unstructured customer signals into a concrete change that moved a metric?
We used LLMs to turn customer feedback into something the business could actually act on. We ran customer comments and support signals through an LLM, and a clear theme emerged: people weren’t leaving because the product wasn’t valuable; they were leaving because they didn’t understand the full breadth of what they were paying for. It was an educational gap.
So, we expanded and personalized our onboarding emails to teach members what they could do with the product, earlier and more clearly. That change improved retention by about 4%.
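As a rough sketch of this kind of feedback-theming step (the model name, prompt, and theme labels below are illustrative assumptions, not LegalShield's actual pipeline), tagging comments with an LLM might look like this:

```python
# Illustrative sketch only: model, prompt, and theme labels are assumptions,
# not LegalShield's actual pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

THEMES = ["didn't understand the product", "price", "service quality", "other"]
SYSTEM_PROMPT = (
    "Classify the customer comment into exactly one of these themes: "
    + ", ".join(THEMES)
    + ". Reply with the theme name only."
)

def tag_feedback(comment: str) -> str:
    """Map one customer comment to a single theme label."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": comment},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Aggregate theme counts over a batch of comments to surface the dominant issue.
comments = [
    "I didn't realize my plan already covered document reviews.",
    "Cancelling because I only used it once and forgot what it included.",
]
print([tag_feedback(c) for c in comments])
```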
Staying with foundations, what is one data engineering decision that made your AI system trustworthy in production?
Data engineering is basically making sure the data is available in the right place at the right time, and ensuring it is accurate and in a usable format.
We learned pretty quickly that trust in AI systems isn’t about how fancy the model is; it’s about how often it’s right. We built internal chatbots that answer data questions, and at first, the experience was hit-or-miss because the LLMs had no context on our data and were guessing. The key data engineering decision was to structure our data the way the model needs it, not the way we were used to storing it.
That pushed accuracy from about 70% to 95%, and that’s when people actually started trusting it.
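To make "structuring data the way the model needs it" concrete, here is a minimal sketch of a curated semantic layer a data-question chatbot could be grounded on; the table names, grains, and column descriptions are invented for illustration:

```python
# Illustrative sketch only: table names, grains, and column descriptions are
# made up to show the idea of a curated semantic layer for a data chatbot.
SEMANTIC_LAYER = {
    "member_daily_activity": {
        "grain": "one row per member per day",
        "columns": {
            "member_id": "unique member identifier",
            "logins": "count of app/web logins that day",
            "service_requests": "count of legal service requests opened that day",
        },
    },
    "member_retention": {
        "grain": "one row per member per month",
        "columns": {
            "member_id": "unique member identifier",
            "is_retained": "1 if the member kept their plan that month, else 0",
        },
    },
}

def build_context() -> str:
    """Flatten the semantic layer into plain text the model can ground answers on."""
    lines = []
    for table, spec in SEMANTIC_LAYER.items():
        lines.append(f"Table {table} ({spec['grain']}):")
        for col, desc in spec["columns"].items():
            lines.append(f"  - {col}: {desc}")
    return "\n".join(lines)

# Prepend this context to the user's data question before calling the LLM,
# so the model answers from defined tables instead of guessing.
print(build_context())
```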
On measurement, what early indicator do you watch that regularly beats lagging metrics for guiding growth or finance calls?
For the growth and finance calls, I focus on the customer's activities in the first 2-3 weeks.
It comes down to one question: did the new member do something that shows they understood the product? We usually see this in logins and service requests per user. When these numbers go up, retention and revenue usually improve too.
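As a hedged illustration of that early indicator (the column names, 21-day window, and toy data below are assumptions, not LegalShield's schema), an activation rate could be computed from signup and activity tables like this:

```python
# Hedged sketch: toy data and column names are assumptions, not LegalShield's schema.
import pandas as pd

members = pd.DataFrame({
    "member_id": [1, 2, 3],
    "signup_date": pd.to_datetime(["2024-01-01", "2024-01-05", "2024-01-10"]),
})
activity = pd.DataFrame({
    "member_id": [1, 1, 2, 3],
    "event_type": ["login", "service_request", "login", "login"],
    "event_date": pd.to_datetime(["2024-01-03", "2024-01-15", "2024-02-20", "2024-01-12"]),
})

# Keep only events that fall within the first 21 days after signup.
joined = activity.merge(members, on="member_id")
early = joined[joined["event_date"] <= joined["signup_date"] + pd.Timedelta(days=21)]

# A member counts as "activated" if they both logged in and opened a service request early.
flags = (
    early.pivot_table(index="member_id", columns="event_type",
                      values="event_date", aggfunc="count")
    .reindex(members["member_id"])
    .fillna(0)
)
activated = (flags["login"] > 0) & (flags["service_request"] > 0)
print(f"Early activation rate: {activated.mean():.0%}")  # -> 33% on the toy data
```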
Shifting to learning, what is one 30-day pilot you recommend to educators or L&D teams to improve outcomes without outsourcing thinking to AI?
30-day education pilot: Show the Receipts
- Choose one unit to focus on.
- Have students write their first draft without using AI.
- After the first draft, students can use AI for help, but they need to explain how it helped. All feedback should describe what was changed and why.
- Each week, spend 10 minutes reviewing a few student examples with the class. Remind students that saying "The AI said so" is not an acceptable explanation.
- Focus on whether students are applying what they learn, not just finishing the work. Check for fewer repeated mistakes and clearer explanations in their next assignment.
In LegalTech or regulated settings, what is your playbook move that gets legal, security, and product to say yes to an AI deployment?
Legal teams often push back on AI projects, and for good reason. AI can make mistakes, which naturally makes lawyers cautious.
We build trust by making AI less mysterious. Instead of treating it like magic, we share our sources and explain our reasoning. We involve people at every step and clearly state when the tool should not be used. When legal teams see that we plan for mistakes instead of ignoring them, the discussion shifts.
Looking ahead, what hiring or team practice helps you separate real AI practitioners from tool tourists while keeping the process fair?
AI has changed hiring in a way that is both helpful and somewhat risky: it makes the front end faster, but it can also filter out strong candidates for the wrong reasons.
Thanks for sharing your knowledge and expertise. Is there anything else you'd like to add?
Every decade seems to offer a new promise of certainty. We saw it with the web, big data, the cloud, and now with AI.
The pattern keeps repeating. This time, the potential benefits are real, but they come with conditions. Recent MIT-affiliated research on GenAI pilots suggests that most projects do not achieve measurable results. The main challenges are integration, trust, and discipline rather than the model itself.
