In an industry consumed by the race for artificial intelligence, companies are scrambling to avoid being left behind. The fear of missing out, however, leads many to chase flashy trends while ignoring the fundamentals, a practice one industry veteran calls “insane.”
Stanislav Petrov, a senior data scientist at Capital.com with over a decade of experience, argues that the key to success isn’t adopting the newest, most hyped model, but fostering a culture of “data sanity.” For businesses, this means prioritizing clear goals and quality data over the allure of the AI bandwagon.
The challenge is significant. According to a 2024 report from Alation, a staggering 87 percent of employees cite data quality issues as a primary reason their organizations fail to meet data and analytics objectives.
“The central paradox of the current AI boom is our obsession with results while neglecting the source,” Petrov told me. “Despite all the excitement, the most critical factor for successful AI or data science projects remains the quality and relevance of input data — but since that’s the unsexy and hardest part, it often gets ignored.”
To counter this, Petrov champions a simple but rigorous framework before any project begins: What is the business goal? What is the potential business impact? And do we have the data to make it happen?
“If those aren’t clearly answered, we don’t move forward,” he said.
A Blueprint for Impact
Petrov points to developing a Customer Lifetime Value, or CLV, model as a prime example of this philosophy in action. The goal isn’t just to build a predictive algorithm, but to solve a core business need: optimizing marketing budgets.
“The goal is to understand the projected performance of the campaign or ad creative in the early stages and adjust spending accordingly,” Petrov said. He noted that such models are crucial for modern automated strategies, like value-based bidding on platforms like Google Ads, which rely on predicted user value.
While the current AI era is dominated by complex neural networks, Petrov said that for structured data problems like CLV, established methods often work best.
“A gradient boosting approach works well for structured data,” Petrov said, adding the crucial caveat: “Of course, you need to know what you’re doing and understand how to tune hyperparameters and choose a proper loss function based on your target distribution.”
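For readers who want a concrete picture of what that looks like, the sketch below trains a gradient boosting model on synthetic data with LightGBM. The feature names, the Tweedie objective (a common choice for zero-inflated, long-tailed spend targets) and every hyperparameter are illustrative assumptions, not Petrov’s production configuration.

```python
# A minimal CLV-style gradient boosting sketch on synthetic data.
# Feature names, loss choice and hyperparameters are hypothetical.
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 10_000

# Synthetic "early-signal" features available shortly after acquisition.
X = np.column_stack([
    rng.integers(0, 30, n),      # days_since_signup
    rng.poisson(3, n),           # sessions_first_week
    rng.exponential(20, n),      # first_deposit_amount
])
# Zero-inflated, right-skewed lifetime-value target.
y = np.where(rng.random(n) < 0.7, 0.0, rng.gamma(2.0, 50.0, n))

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = lgb.LGBMRegressor(
    objective="tweedie",             # handles the mass at zero plus a long tail
    tweedie_variance_power=1.3,      # tuned between 1 (Poisson) and 2 (Gamma)
    n_estimators=500,
    learning_rate=0.05,
    num_leaves=31,
)
model.fit(X_train, y_train, eval_set=[(X_val, y_val)], eval_metric="tweedie")

predicted_ltv = model.predict(X_val)
print(f"Mean predicted LTV: {predicted_ltv.mean():.2f}")
```

The point of the caveat is visible in the two tuned arguments: the objective and its variance power encode an assumption about how the target is distributed, and getting that wrong matters more than swapping in a fancier model.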
The final, and perhaps most important, step is ensuring the model is actually used. He emphasized that impact measurement often requires sophisticated techniques like causal impact analysis to prove a model’s value when simple A/B tests are not feasible.
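The idea behind a causal impact estimate is simple even if production versions are not: build a counterfactual for what the metric would have done without the model, then compare. The toy sketch below does this with an ordinary regression on an unaffected control series and entirely made-up numbers; real analyses typically lean on richer tools such as Bayesian structural time series.

```python
# Toy causal-impact estimate: fit a counterfactual on pre-launch data
# using an unaffected control series, then compare post-launch actuals
# with the predicted "no change" baseline. All numbers are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
days, launch_day = 120, 90

control = 100 + np.cumsum(rng.normal(0, 2, days))     # metric the model cannot affect
treated = 0.8 * control + rng.normal(0, 3, days)      # metric the model influences
treated[launch_day:] += 15                            # true post-launch lift

pre, post = slice(0, launch_day), slice(launch_day, days)

counterfactual = LinearRegression().fit(control[pre].reshape(-1, 1), treated[pre])
expected = counterfactual.predict(control[post].reshape(-1, 1))

lift = treated[post] - expected
print(f"Estimated average daily lift: {lift.mean():.1f}")
print(f"Estimated cumulative impact:  {lift.sum():.1f}")
```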
“The value of such a model lies not only in its technical merit but in its ability to influence real decisions, scale across systems, and adapt to business needs,” Petrov said.
Navigating an Evolving Landscape
This pragmatic approach is essential as data scientists face mounting headwinds, particularly from new privacy regulations and the long-heralded death of the third-party cookie.
Google is proceeding with its plan to phase out third-party cookies for all Chrome users, a move that a study from Lotame found 62 percent of marketers believe will negatively impact their advertising.
Petrov sees this not as a single event, but as a long, incremental process that requires adaptation.
“One major shift is leaning into incrementality-based frameworks and media mix modeling to understand the true contribution of channels, especially when direct path attribution breaks down,” Petrov said.
This shift forces tighter integration between data and marketing teams and a heavier reliance on techniques like geo-testing and probabilistic modeling.
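A geo-test readout can be surprisingly plain arithmetic. The snippet below shows a difference-in-differences calculation of the kind such frameworks rest on, comparing matched test and control regions before and after a spend change; the figures and the weekly budget are invented for illustration, not drawn from Capital.com data.

```python
# Toy geo-test incrementality readout via difference-in-differences.
# Conversion counts and spend figures are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

# Weekly conversions for matched geo groups, before and after launch.
test_pre,  test_post = rng.poisson(1000, 4), rng.poisson(1150, 4)
ctrl_pre,  ctrl_post = rng.poisson(1000, 4), rng.poisson(1020, 4)

# Change in test geos minus change in control geos.
incremental_per_week = (test_post.mean() - test_pre.mean()) - (ctrl_post.mean() - ctrl_pre.mean())

extra_spend_per_week = 5_000  # hypothetical incremental budget in the test geos
print(f"Incremental conversions per week: {incremental_per_week:.0f}")
print(f"Cost per incremental conversion:  {extra_spend_per_week / incremental_per_week:.2f}")
```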
The Mindset That Defines a Leader
When it comes to building teams capable of navigating these challenges, Petrov believes the differentiator between a junior and senior data scientist isn’t a single skill, but a mindset centered on ownership.
“While juniors may excel at executing well-scoped tasks, seniors are those who proactively define problems, engage stakeholders, and see solutions through to delivery and iteration,” he said.
He described a crucial realization for any professional looking to grow: “A key mindset shift is realizing that ‘nobody will come’ to tell you what to do or what is right. You must take initiative, make your own decisions, and take full responsibility for the outcomes.”
For Petrov, this lesson came with experience.
“Writing code isn’t the hardest part of the job, even if it’s not always easy,” he said. “The real challenge is integrating that work into a product, aligning it with business needs, and convincing stakeholders of its value.”
This philosophy of pragmatic ownership extends to building the infrastructure that supports models after they launch, a field known as MLOps. Rather than building overly complex, “full-featured platforms,” Petrov advocates for right-sized solutions.
“A robust system doesn’t always mean the most complex one,” he said. “In many cases, simple, well-scoped logging and alerting tied to key model outputs and drift indicators can cover 80 percent of what’s needed.”
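What “simple, well-scoped logging and alerting” can mean in practice is shown in the sketch below: a Population Stability Index comparison between training-time scores and recent production scores, with an alert above a conventional threshold. The data, the 0.2 cutoff and the alerting style are illustrative assumptions rather than a description of Capital.com’s stack.

```python
# Minimal drift check: Population Stability Index (PSI) between the
# training-time score distribution and recent production scores.
# Data and the alert threshold are illustrative assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score distributions."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
training_scores = rng.beta(2, 5, 50_000)      # score distribution at validation time
production_scores = rng.beta(2.5, 5, 5_000)   # last week's production scores

drift = psi(training_scores, production_scores)
if drift > 0.2:  # common rule of thumb for "significant" drift
    print(f"ALERT: score drift detected (PSI={drift:.3f}) -- investigate before trusting outputs")
else:
    print(f"PSI={drift:.3f}: within tolerance")
```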
By focusing on real problems, quality data, and tangible impact, Petrov’s message is clear: in the age of AI, a dose of sanity might be the most valuable algorithm of all.