“I think of analysts as data wizards who help their product teams solve problems”

In the Author Spotlight series, TDS Editors chat with members of our community about their career path in data science and AI, their writing, and their sources of inspiration. Today, we’re thrilled to share our conversation with Mariya Mansurova.

Mariya’s story is one of perpetual learning. Starting with a strong foundation in software engineering, mathematics, and physics, she’s spent over 12 years building expertise in product analytics across industries, from search engines and analytics platforms to fintech. Her unique path, including hands-on experience as a product manager, has given her a 360-degree view of how analytical teams can help businesses make the right decisions.

Now serving as a Product Analytics Manager, she draws energy from discovering fresh insights and innovative approaches. Each of her articles on Towards Data Science reflects her latest “aha!” moment: a testament to her belief that curiosity drives real progress.


You’ve written extensively about agentic AI and frameworks like smolagents and LangGraph. What excites you most about this emerging space?

I first started exploring generative AI largely out of curiosity and, admittedly, a bit of FOMO. Everyone around me seemed to be using LLMs or at least talking about them. So I carved out time to get hands-on, starting with the very basics like prompting techniques and LLM APIs. And the deeper I went, the more excited I became.

What fascinates me the most is how agentic systems are shaping the way we live and work. I believe that this influence will only continue to grow over time. That’s why I take every chance to use agentic tools like Copilot or Claude Desktop, or to build my own agents with technologies like smolagents, LangGraph or CrewAI.

The most impactful use case of Agentic AI for me has been coding. It’s genuinely impressive how tools like GitHub Copilot can improve the speed and the quality of your work. While recent research from METR has questioned whether the efficiency gains are truly that substantial, I definitely notice a difference in my day-to-day work. It’s especially helpful with repetitive tasks (like pivoting tables in SQL) or when working with unfamiliar technologies (like building a web app in TypeScript). Overall, I’d estimate about a 20% increase in speed. But this boost isn’t just about productivity; it’s a paradigm shift that also expands what feels possible. I believe that as agentic tools continue to evolve, we will see a growing efficiency gap between individuals and companies that have learned how to leverage these technologies and those that haven’t.

When it comes to analytics, I’m especially excited about automatic reporting agents. Imagine an AI that can pull the right data, create visualisations, perform root cause analysis where needed, note open questions and even create the first draft of the presentation. That would be just magical. I’ve built a prototype that generates such KPI narratives. And even though there’s a significant gap between the prototype and a production solution that works reliably, I believe we will get there. 

You’ve written three articles under the “Practical Computer Simulations for Product Analysts” series. What inspired that series, and how do you think simulation can reshape product analytics?

Simulation is a vastly underutilised tool in product analytics. I wrote this series to show people how powerful and accessible simulations can be. In my day-to-day work, I keep encountering what-if questions like “How many operational agents will we need if we add this KYC control?” or “What’s the likely impact of launching this feature in a new market?”. Since you can simulate any system, no matter how complex, simulations gave me a way to answer those questions quantitatively and fairly accurately, even when hard data wasn’t yet available. I’m hoping more analysts will start using this approach.

Simulations also shine when working with uncertainty and distributions. Personally, I prefer bootstrap methods to memorising a long list of statistical formulas and significance criteria. Simulating the process often feels more intuitive, and it’s less error-prone in practice.
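
As an illustration of the bootstrap approach described above, here is a minimal sketch (not taken from the articles; the data and numbers are synthetic) of how one might compare two groups without reaching for a textbook formula:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data for illustration: binary conversions for a control and a test group.
control = rng.binomial(1, 0.11, size=2_000)
test = rng.binomial(1, 0.12, size=2_000)

def bootstrap_diff(a, b, n_iter=10_000):
    """Resample both groups with replacement and collect the difference in means."""
    diffs = np.empty(n_iter)
    for i in range(n_iter):
        diffs[i] = (rng.choice(b, size=b.size, replace=True).mean()
                    - rng.choice(a, size=a.size, replace=True).mean())
    return diffs

diffs = bootstrap_diff(control, test)
ci_low, ci_high = np.percentile(diffs, [2.5, 97.5])
print(f"Estimated uplift: {diffs.mean():.4f}, 95% CI: [{ci_low:.4f}, {ci_high:.4f}]")
```

The same pattern generalises to almost any metric: replace the mean with whatever statistic you care about and let the resampling carry the uncertainty.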

Finally, I find it fascinating how technology has changed the way we do things. With today’s computing power, where any laptop can run thousands of simulations in minutes or even seconds, we can easily solve problems that would have been challenging just thirty years ago. That’s a game-changer for analysts.

Several of your posts focus on transitioning LLM applications from prototype to production. What common pitfalls do you see teams make during that phase?

Through practice, I’ve discovered there’s a significant gap between LLM prototypes and production solutions that many teams underestimate. The most common pitfall is treating prototypes as if they’re already production-ready.

The prototype phase can be deceptively smooth. You can build something functional in an hour or two, test it on a handful of examples, and feel like you’ve cracked the problem. Prototypes are great tools to prove feasibility and get your team excited about the opportunities. But here’s where teams often stumble: these early versions provide no guarantees around consistency, quality, or safety when facing diverse, real-world scenarios.

What I’ve learned is that successful production deployment starts with rigorous evaluation. Before scaling anything, you need clear definitions of what “good performance” looks like in terms of accuracy, tone of voice, speed and any other criteria specific to your use case. Then you need to track these metrics continuously as you iterate, ensuring you’re actually improving rather than just changing things.
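
For illustration only, a lightweight evaluation harness along these lines might look like the following sketch; the metric, test cases and `generate` stub are hypothetical stand-ins for a real LLM setup:

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    expected_keywords: list[str]  # hypothetical criterion: a good answer mentions these

# A tiny, hand-curated evaluation set (illustrative only).
EVAL_SET = [
    EvalCase("Summarise last week's KPI movements", ["revenue", "conversion"]),
    EvalCase("Explain the drop in signups", ["signups", "root cause"]),
]

def keyword_accuracy(generate, cases):
    """Run the model on every case and return the share of answers that mention
    all expected keywords -- just one of many possible quality metrics."""
    hits = 0
    for case in cases:
        answer = generate(case.prompt).lower()
        if all(kw in answer for kw in case.expected_keywords):
            hits += 1
    return hits / len(cases)

# `generate` would wrap the actual LLM call; here it is a stub for demonstration.
stub = lambda prompt: ("Revenue and conversion both dipped; "
                       "signups fell due to a root cause in onboarding.")
print(f"Keyword accuracy: {keyword_accuracy(stub, EVAL_SET):.0%}")
```

Tracked across iterations, even a simple score like this makes it visible whether a prompt or model change actually improved things or merely changed them.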

Think of it like software testing: you wouldn’t ship code without proper testing, and LLM applications require the same systematic approach. This becomes especially crucial in regulated environments like fintech or healthcare, where you need to demonstrate reliability not just to your internal team but to compliance stakeholders as well.

In these regulated spaces, you’ll need comprehensive monitoring, human-in-the-loop review processes, and audit trails that can withstand scrutiny. The infrastructure required to support all of this often takes far more development time than building the original MVP. That’s something that consistently surprises teams who focus primarily on the core functionality.

Your articles sometimes blend engineering principles with data science/analytics best practices, such as your “Top 10 engineering lessons every data analyst should know.” Do you think the line between data and engineering is blurring?

The role of a data analyst or a data scientist today often requires a mix of skills from multiple disciplines. 

  • We write code, so we share common ground with software engineers.
  • We help product teams think through strategy and make decisions, so product management skills are useful. 
  • We draw on statistics and data science to build rigorous and comprehensive analyses.
  • And to make our narratives compelling and actually influence decisions, we need to master the art of communication and visualisation.

Personally, I was lucky to gain diverse programming skills early on, back at school and university. This background helped me tremendously in analytics: it increased my efficiency, helped me collaborate better with engineers and taught me how to build scalable and reliable solutions. 

I strongly encourage analysts to adopt software engineering best practices. Things like version control systems, testing and code review help analytical teams to develop more reliable processes and deliver higher-quality results. I don’t think the line between data and engineering is disappearing entirely, but I do believe that analysts who embrace an engineering mindset will be far more effective in modern data teams.
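
As an illustration of that engineering mindset, a small unit test for an analytical helper might look like this sketch (the function, data and column names are hypothetical, in the usual pandas/pytest style):

```python
import pandas as pd

def weekly_active_users(events: pd.DataFrame) -> pd.Series:
    """Count distinct users per ISO week -- a typical analytical helper worth testing."""
    events = events.copy()
    events["week"] = events["ts"].dt.isocalendar().week
    return events.groupby("week")["user_id"].nunique()

def test_weekly_active_users_deduplicates_users():
    events = pd.DataFrame({
        "user_id": [1, 1, 2],
        "ts": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03"]),
    })
    result = weekly_active_users(events)
    assert result.loc[1] == 2  # two distinct users in ISO week 1, despite the repeat event
```

Kept under version control and run on every change, tests like this catch silent breakages in metrics long before a stakeholder notices them in a dashboard.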

You’ve explored both causal inference and cutting-edge LLM tuning techniques. Do you see these as part of a shared toolkit or separate mindsets?

That’s actually a great question. I am a strong believer that all these tools (from statistical methods to modern ML techniques) belong in a single toolkit.  As Robert Heinlein famously said, “Specialisation is for insects.” 

I think of analysts as data wizards who help their product teams solve problems using whatever tools fit best: whether it’s building an LLM-powered classifier for NPS comments, using causal inference to make strategic decisions, or building a web app to automate workflows.

Rather than specialising in specific skills, I prefer to focus on the problem we’re solving and keep the toolset as broad as possible. This mindset not only leads to better outcomes but also fosters a continuous learning culture, which is essential in today’s fast-moving data industry.

You’ve covered a broad range of topics, from text embeddings and visualisations to simulation and multi-agent AI systems. What writing habit or guiding principle helps you keep your work so cohesive and approachable?

I usually write about topics that excite me at the moment, either because I’ve just learned something new or had an interesting discussion with colleagues. My inspiration often comes from online courses, books or my day-to-day tasks.

When I write, I always think about my audience and how this piece can be genuinely helpful both for others and for my future self. I try to explain all the concepts clearly and leave breadcrumbs for anyone who wants to dig deeper. Over time, my blog has become a personal knowledge base. I often return to old posts: sometimes just to copy a code snippet, sometimes to share a resource with a colleague who’s working on something similar.

As we all know, everything in data is interconnected. Solving a real-world problem often requires a mix of tools and approaches. For example, if you’re estimating the impact of launching in a new market, you might use simulation for scenario analysis, LLMs to explore customer expectations, and visualisation to present the final recommendation.

I try to reflect these connections in my writing. Technologies evolve by building on earlier breakthroughs, and understanding the foundations helps you go deeper. That’s why many of my posts reference each other, letting readers follow their curiosity and uncover how different pieces fit together.

Your articles are impressively structured, often walking readers from foundational concepts to advanced implementations. What’s your process for outlining a complex piece before you start writing?

I believe I developed this way of presenting information back in school; these habits have deep roots. As the book The Culture Map explains, different cultures vary in how they structure communication. Some are concept-first (starting from basics and iteratively moving to conclusions), while others are application-first (starting with results and diving deeper as needed). I’ve definitely internalised the concept-first approach.

In practice, many of my articles are inspired by online courses. While watching a course, I outline the rough structure in parallel so I don’t forget any important nuances. I also note down anything that is unclear and mark it for future reading or experimentation.

After the course, I start thinking about how to apply this knowledge to a practical example. I firmly believe you don’t truly understand something until you try it yourself. Even though most courses include practical examples, they are often too polished. Only when you apply the same ideas to your own use case do you run into edge cases and friction points. For example, the course might use OpenAI models while I want to try a local one, or the framework’s default system prompt might not work for my particular case and need tweaking.

Once I have a working example, I move to writing. I prefer to separate drafting from editing. First, I focus on getting all my ideas and code down without worrying about grammar or tone. Then I shift into editing mode: refining the structure, choosing the right visuals, putting together the introduction, and highlighting the key takeaways.

Finally, I read the whole thing end to end to catch anything I’ve missed. Then I ask my partner to review it. They often bring a fresh perspective and point out things I didn’t consider, which helps make the article more comprehensive and accessible.


To learn more about Mariya’s work and stay up-to-date with her latest articles, follow her here on TDS and on LinkedIn.
