This article is an excerpt from my upcoming book about how data scientists can not just survive the AI wave, but use it to level up their careers. If you’d like to hear when it’s ready, please join the waitlist here!
The work that junior and even mid-level data scientists take pride in—the stuff that makes them feel smart, technical, and irreplaceable—is now baseline automation. The uncomfortable question isn’t whether AI can do these things as well as a human; it’s whether a human still adds value by doing them by hand. AI is already good enough for most of the routine work that fills a typical data scientist’s day, and in business, good enough at 10% of the cost and 1% of the time usually wins.
Most Data Scientists are Still Optimizing for Accuracy
Clinging to your familiar workflows isn’t just inefficient; it’s fast becoming a career risk. While you’re perfecting your pandas syntax, your peers are learning to ask better questions, influence real business decisions, and optimize for impact. They’re not trying to outrun the robots; they’re doing the work robots can’t do.
Most data scientists are trained for academic rigor. In school and in Kaggle competitions, we got neat and tidy datasets and set out to build ML models with incrementally better accuracy. We were trained to chase clean answers, statistical significance, and low error rates. Then we got our first job and discovered that clean data rarely exists, and that the first 90% of every project would be spent just getting the data into shape.
Gen AI is forcing data scientists through another, similar shift in mindset. Much of the work that has been our bread and butter can now be done by AI; maybe not as well as we could do it, but certainly well enough, and much faster. This isn’t a threat, though; it’s an opportunity. What your manager is thinking, or maybe your manager’s manager, is that business impact > technical precision. Delivering a perfect visualization or insight isn’t enough anymore. If you’re not driving decisions, you are replaceable.
The Shift from Outputs to Outcomes
To thrive in this new AI era, data scientists must become more strategic. They must start to think like a product manager. This is what I mean by “optimize for impact.” Start with the decision, not the deliverable, and work backward. Focus on actionability, even at the expense of exhaustiveness. Communicate tradeoffs, interpretations, and recommendations with each piece of work.
PMs start from the business goal and work toward a decision. As a data scientist, I would often start from the data and ask what questions I could answer with it—this isn’t the way forward anymore. A PM is a ruthless prioritizer: What actually moves the needle? What won’t get done if we spend time on this? A PM thinks in terms of tradeoffs, constraints, and leverage; they care more about impact than elegance. Data is a tool, not the destination.
I’ve identified five concrete mindset shifts that every data scientist can learn from PMs. A data scientist who becomes proficient in these skills becomes defensible against encroachment by AI. This isn’t necessarily a playbook for promotion to Staff+ levels (although it can be—there is a lot of overlap), but a framework for capitalizing on AI’s weaknesses.
Five Tactical Actions to Start Optimizing for Impact
1. Start with the Decision, Not the Data
Most data scientists open a new project by pulling a dataset. It’s comforting to poke around the numbers, explore the shape of the data, and see what interesting trends emerge; it gives the satisfying feeling (or more accurately, the illusion) of progress at the start of an ambiguous project. But this is how you end up with dashboards no one looks at and models that never get used. If you want your work to matter, you need to start with three questions:
- What decision will this inform?
- What action might it change?
- What happens if we do nothing?
Imagine you’re asked to analyze a user drop-off funnel. A junior data scientist might build a beautiful funnel chart, break it down by platform and region, and maybe even segment it by monthly cohorts. But then what? What decision does that analysis support? What action can the team take? The better question might have been: “What would we do differently if we learned that Android users drop off more?” Maybe the right decision is whether to invest engineering time in fixing the Android onboarding flow. That’s the business context you need before writing a single line of SQL.
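Once the decision is agreed on, the analysis itself is usually small. Here is a minimal pandas sketch of the decision-oriented version, assuming a hypothetical export with one row per user, a platform column, and the furthest onboarding step each user reached (file name and column names are illustrative):

```python
import pandas as pd

# Hypothetical export: one row per user, the platform they use, and the
# furthest onboarding step they reached. Names are illustrative.
funnel = pd.read_csv("onboarding_funnel.csv")  # user_id, platform, last_step

step_order = {"signup": 0, "profile": 1, "first_action": 2, "activated": 3}
funnel["step_idx"] = funnel["last_step"].map(step_order)

# The number the decision hinges on: share of signed-up users who activate,
# split by platform.
activation_rate = (
    funnel.groupby("platform")["step_idx"]
          .apply(lambda s: (s >= step_order["activated"]).mean())
)
print(activation_rate.round(3))

# Is the Android shortfall big enough to justify reworking the Android
# onboarding flow? That question, not the chart, is the deliverable.
gap = activation_rate.get("ios", float("nan")) - activation_rate.get("android", float("nan"))
print(f"iOS minus Android activation rate: {gap:.1%}")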
Before you run your first query, align with your stakeholders on the decision they’re trying to make. If your analysis doesn’t lead to a decision, it’s wasted time. Start at the end and work backward. The key is actionability.
Why it differentiates you from AI: LLMs can analyze data, but they can’t understand org dynamics, anticipate political resistance, or choose which battles are worth fighting. That’s judgment—human territory.
2. Prioritize Projects That Move the Roadmap
Data scientists often fall into the trap of chasing interesting questions. But interesting is not the same as important. I once spent two weeks writing a 50-page analysis documenting an emerging pattern of user behavior; everyone in the org read it, but no one did anything. I was proud of the result, but it wasn’t actionable. That was a wake-up call.
If you want to be indispensable, get close to the roadmap. What are the 2–3 bets the company is making this quarter? What’s the PM losing sleep over? What open question is blocking the next big initiative? Anchor your projects to these. If your work ties directly to a team’s goals, it’s far more likely to drive action—and far more likely to be seen by leadership.
A strategic project has five key characteristics:
First, it influences a key product or business decision. This isn’t just about providing data to inform a decision, it’s about providing data that actually changes what gets decided. Strategic projects surface insights that make stakeholders think, “We need to reconsider our approach here.”
Second, it’s tied to roadmap planning or resourcing. Strategic work feeds into quarterly planning cycles, annual budgeting processes, or major product launches. It’s the analysis that gets referenced in leadership meetings when teams are deciding what to build next.
Third, it surfaces tradeoffs or uncertainty in product direction. Strategic projects don’t just confirm what everyone already believes. They reveal hidden assumptions, quantify difficult tradeoffs, or expose blind spots in the team’s thinking. They make the invisible visible.
Fourth, it generates reusable artifacts. Strategic projects create metrics, models, frameworks, or insights that get leveraged by other teams or in future decisions. They’re not one-time analyses that disappear into the ether.
Fifth, it raises the ceiling of decision-making for others. Strategic work elevates conversations from tactical (“Should we change the button color?”) to strategic (“What does user engagement actually mean for our business?”).
Smart data scientists don’t just answer interesting questions. They answer valuable ones. The difference between a junior analyst and a strategic data scientist is the ability to identify work that actually matters: work that influences decisions, changes minds, and moves the business forward.
Why it differentiates you from AI: AI can surface insights, but only you can see the product landscape, navigate tradeoffs, and strategically insert yourself where real leverage lives.
3. Define Metrics that Reflect the Business and Incentivize the Right Behaviors
Every metric encodes assumptions, priorities, and tradeoffs. The question isn’t “what can we measure?” but “what should we optimize for?” This distinction separates strategic data scientists from tactical ones. Tactical data scientists take metrics as given. “Marketing wants to improve conversion rates? Great, let’s measure conversion rates.” Strategic data scientists ask deeper questions: “What does the business actually care about? What behaviors do we want to encourage? What could backfire if we optimize for this?” Many shady subscription services (in)famously work to drive down cancellations… by making it harder to cancel. That’s not insight, it’s misaligned incentives.
It’s your job to define metrics that guide good decisions. That means starting from the business objective and working backward. What does success actually look like? What behavior do we want to encourage? What leading indicator can we use to detect problems early? And what’s the dark side of optimizing for this metric?
Sort your metrics into four layers:
- North Star metrics define long-term success and align the entire company—like YouTube tracking Weekly Active Creators to center their mission around content production.
- Supporting metrics break down and drive movement in the North Star, surfacing where strategic action is most needed—like creator retention or uploads per creator.
- Guardrail metrics prevent unintended harm while optimizing, ensuring quality and trust stay intact even under aggressive growth.
- Operational metrics keep the system running day-to-day—essential for execution, but not where strategy lives.
Most data scientists will be devising the supporting metrics—these are the ones that provide the signal in an experiment. A great supporting metric does three things: reflects reality, influences behavior, and is sensitive to change. Getting this right means working closely with PMs, engineers, and ops to understand the full system.
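One quick way to pressure-test the last property, sensitivity, is to compare how noisy two candidate supporting metrics are relative to their means; the noisier one needs more traffic or more time to detect the same relative change. A minimal sketch with simulated data (the creator-uploads numbers below are made up for illustration; in practice you would pull a recent historical window):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated weekly uploads per creator (illustrative stand-in for real data).
uploads = rng.poisson(lam=2.0, size=50_000).astype(float)

# Candidate A: uploads per creator. Candidate B: did the creator upload at all.
uploaded_at_all = (uploads > 0).astype(float)

def relative_noise(values):
    """Coefficient of variation: a rough proxy for how much sample you need
    to detect a given relative change in the metric."""
    return values.std() / values.mean()

print("uploads per creator:", round(relative_noise(uploads), 2))
print("weekly uploader flag:", round(relative_noise(uploaded_at_all), 2))
```

The less noisy candidate isn’t automatically the better metric; it still has to reflect reality and reward the behavior you actually want to encourage.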
Why it differentiates you from AI: AI can optimize a number, but only you can question whether it’s the right number. Defining success is a political, strategic, and human act influenced by the nuance of your specific business.
4. Match the Analytical Approach to the Decision Risk and Value
Before you write a single line of code or draft an experimental design, step back and ask four strategic questions: How fast do we need an answer? How wrong can we afford to be? What constraints do we face? And what decision hangs on this analysis? These questions—timeline, precision, feasibility, and impact—form a mental model for choosing the right analytical approach for the situation at hand.
Timeline is often the dominant constraint. If leadership needs a decision by Friday, you’re not running a gold-standard experiment. You’re using historical data, descriptive metrics, or synthetic comparisons to make an informed call fast.
Precision is about risk tolerance: bet-the-company decisions require rigorous testing and large samples; a button color change might only need a directional signal. Don’t over-engineer when stakes are low—and don’t under-engineer when stakes are high.
Feasibility reminds us that real-world analytics happens under constraints—traffic, tooling, org politics, data access. But those constraints aren’t blockers; they’re design parameters. Your workaround could become your competitive advantage.
And finally, Impact is about ruthless prioritization. Your most sophisticated methods should support your most strategic decisions. If an analysis won’t affect what gets built, funded, or killed, you’re either wasting time or avoiding a harder conversation.
There is no “best” method in the abstract. The best method is the one that fits your constraints and drives the decision forward. To match method to moment, ask yourself: What’s the cost of a false positive? A false negative? What decision will this analysis inform, and how reversible is that decision? A one-way door needs rigor. A two-way door needs speed. If it’s a million-dollar bet, get tight estimates. If it’s a UX tweak, ship it and monitor over the next week or two.
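To make the precision-versus-timeline tradeoff concrete, a back-of-the-envelope sample-size calculation is often all you need before committing to an experiment. A minimal sketch, assuming a two-sided two-proportion z-test and hypothetical conversion numbers:

```python
from scipy.stats import norm

def per_arm_sample_size(p_base, mde_abs, alpha=0.05, power=0.80):
    """Approximate users needed per arm to detect an absolute lift of
    mde_abs over a baseline rate p_base with a two-sided z-test."""
    p1, p2 = p_base, p_base + mde_abs
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    num = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(round(num / mde_abs ** 2))

# Million-dollar bet: you need to detect a 0.5-point move on a 10% baseline.
print(per_arm_sample_size(0.10, 0.005))  # ~58,000 users per arm

# UX tweak: only a 3-point move would change what you do next.
print(per_arm_sample_size(0.10, 0.03))   # ~1,800 users per arm
```

The exact numbers matter less than the order of magnitude: the traffic you need, and therefore the timeline, can differ by 30x depending on how precise the answer has to be.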
Good analysis isn’t just accurate—it’s appropriately scoped for the decision it supports.
Why it differentiates you from AI: AI can calculate statistical significance, but it can’t weigh business risk or adjust for what’s at stake. You’re not just running tests; you’re managing consequences, and that takes the business context only your experience provides.
5. Turn Insights Into Action, Not Just Understanding
The biggest sin in data science isn’t being wrong—it’s being irrelevant. A clever model or rich insight means nothing if it doesn’t change what the business does next. I’ve been guilty of this: presenting a polished analysis, getting polite nods, and walking away thinking I’d done my job. But the real test is whether your work moves someone to act.
This means your work needs to come with a point of view. If you only show what happened, you’ve done half the job. You also need to recommend what to do next. Be clear about tradeoffs, risks, and confidence levels. Show how this insight ties to a decision the team is facing right now. Even better, co-create the action plan with your stakeholder before you ever open a slide deck.
One trick: write the recommendation slide before you start the analysis. If you can’t imagine what action would result from your work, don’t do it. Data science is only as valuable as the decisions it enables.
Why it differentiates you from AI: AI can find patterns. You connect them to strategy, urgency, and ownership—then get people to actually act. That’s what drives impact.
Strategy is Your New Job
You have a choice to make. You can continue doing the same type of work you’ve always done, hoping that someone else will recognize your value and give you opportunities for strategic impact. Or you can proactively develop strategic capabilities, position yourself for high-impact work, and help define what the future of data science looks like.
The first path is comfortable but risky. The second path is challenging but rewarding.
The five mindset shifts we’ve discussed aren’t just survival tactics. They’re career accelerators. Data scientists who master these skills don’t just become AI-proof; they become indispensable strategic partners who drive real business outcomes (and prime promotion candidates!).
Start small, but start now. Pick one project this quarter and apply the decision-first framework. Choose one metric you’re currently tracking and ask whether it’s driving the right behaviors. Take one analysis you’ve completed and ask yourself: “What action should we take based on this?”
Building strategic capabilities takes time. You won’t become a strategic data scientist overnight, and you shouldn’t expect to. But every month you spend developing business acumen, every quarter you spend building cross-functional relationships, every year you spend taking on more strategic work will compound over and over.
The AI revolution is already here. The question isn’t whether your role will change, but whether you’ll lead that change or let it happen to you. Your technical foundation is solid. Now it’s time to build strategic thinking on top of it. The future belongs to data scientists who can do both—and the future starts with your next project.
Did this post ignite your curiosity about becoming a more strategic data scientist? Join the waitlist for The Strategic Data Scientist: How to Level Up and Thrive in the Age of AI. Learn the frameworks, mindsets, and tactics Strategic Data Scientists use to drive impact without managing people; and discover how to work with AI as a strategic co-pilot, not a replacement.