I have been encountering some interesting news about how the AI industry is progressing. It feels like a slowdown in this space is on the horizon, if it hasn’t already started. (Not being an economist, I won’t say bubble, but there are lots of opinions out there.) GPT-5 came out last month and disappointed everyone, apparently even OpenAI executives. Meta made a very sudden pivot and is reorganizing its entire AI function and ceasing all hiring, immediately after pouring apparently unlimited funds into recruiting and wooing talent in the space. Microsoft appears to be slowing its investment in AI hardware (paywall).
This isn’t to say that any of the major players are going to stop investing in AI, of course. The technology isn’t delivering spectacular results or approaching anything even remotely like AGI (an outcome many analysts and writers, including me, predicted), but usage among businesses and individuals is persisting, so there’s still some incentive to keep pushing forward.
The 5% Success Rate
In this vein, I read the new report from MIT about AI in business with great interest this week. I recommend it to anyone looking for real information about how AI adoption is going, from regular workers as well as the C-suite. The report has some headline takeaways, including an assertion that only 5% of AI initiatives in the business setting generate meaningful value, which I can certainly believe. (Also, AI is not actually taking people’s jobs in most industries, and in several industries AI isn’t having much of an impact at all.) A lot of businesses, it seems, have dived into adopting AI without a strategic plan for what it’s supposed to do, or for how that adoption will actually help them achieve their objectives.
I see this a lot, actually: executives who are significantly removed from the day-to-day work of their organization, gripped by FOMO about AI, decide that AI must become part of their business, without stepping back to consider how it fits in with the business they already have and the work they already do.
Screwdriver or Magic Wand?
Regular readers will know I’m not arguing AI can’t or shouldn’t be used when it can serve a purpose, of course. Far from it! I build AI-based solutions to business problems at my own organization every day. However, I firmly believe AI is a tool, not magic. It gives us ways to do tasks that are infeasible for human workers and can speed up tasks we would otherwise have to do manually. It can make information clearer and help us better understand lengthy documents and texts.
What it doesn’t do, however, is create business success by itself. To be part of the 5% and not the 95%, any application of AI needs to be founded on strategic thinking and planning, and, most importantly, clear-eyed expectations about what AI is and isn’t capable of. Small projects that improve particular processes can have huge returns without betting on a massive upheaval or “revolutionizing” of the business, even though they aren’t as glamorous or headline-producing as the hype. The MIT report discusses how vast numbers of projects start as pilots or experiments but never make it to production, and I would argue that a lot of this is because either the planning or the clear-eyed expectations were missing.
The authors spend a significant amount of time noting that many AI tools are regarded as inflexible and/or incompatible with existing processes, resulting in failure to adopt among the rank and file. If you build or buy an AI solution that can’t work with your business as it exists today, you’re throwing away your money. Either the solution should have been designed with your business in mind and it wasn’t, meaning a failure of strategic planning, or it can’t be flexible or compatible in the way you need, and AI simply wasn’t the right solution in the first place.
Trading Security for Versatility
On the subject of flexibility, I had an additional thought as I was reading. The MIT authors emphasize that the internal tools companies offer their teams often “don’t work” in one way or another, but in reality a lot of the rigidity and limits placed on in-house LLM tools exist because of safety and risk prevention. Developers don’t build non-functional tools on purpose; they have limitations and requirements to comply with. In short, there’s a tradeoff here we can’t avoid: when your LLM is extremely open and has few or no guardrails, it’s going to feel like it lets the user do more, or will answer more questions, because it does just that. But that comes at a potentially significant cost: legal liability, false or inappropriate information, or worse.
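To make that tradeoff concrete, here is a minimal sketch of the kind of policy check an internal LLM tool might run before a prompt ever reaches the model. Everything here is illustrative rather than drawn from the MIT report: the patterns, the refusal message, and the call_model placeholder are all hypothetical stand-ins for whatever a real deployment would use (which would typically involve far more sophisticated DLP and classification checks).

```python
import re

# Illustrative guardrail patterns an internal tool might block on.
# These are toy examples, not a real data-loss-prevention policy.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # looks like a US SSN
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),   # pasted credentials
    re.compile(r"(?i)\bconfidential\b.*\bdo not distribute\b"),
]

REFUSAL = (
    "This request was blocked by company policy. "
    "Please remove sensitive data and try again."
)

def guarded_completion(prompt: str, call_model) -> str:
    """Run a prompt through simple policy checks before calling the model.

    `call_model` is a placeholder for whatever LLM client the company
    actually uses; it is assumed to take a string and return a string.
    """
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            # The user experiences this as the tool "not working,"
            # even though the refusal is deliberate risk prevention.
            return REFUSAL
    return call_model(prompt)

if __name__ == "__main__":
    # Dummy backend standing in for a real LLM client.
    echo_model = lambda p: f"(model response to: {p[:40]}...)"
    print(guarded_completion("Summarize this meeting transcript for me", echo_model))
    print(guarded_completion("My SSN is 123-45-6789, fix my tax form", echo_model))
```

Every check like this narrows what the tool will do, which is exactly why a locked-down internal tool can feel less capable than a personal ChatGPT account, and exactly why the lockdown exists.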
Of course, regular users are likely not thinking about this angle when they pull up the ChatGPT app on their phone with their personal account during the work day; they’re just trying to get their jobs done. InfoSec communities are rightly alarmed by this kind of thing, which some circles are calling “shadow AI” rather than shadow IT. The risks from this behavior can be catastrophic: proprietary company data handed over to an AI service freely, without oversight, to say nothing of how the output may be used inside the company. This problem is really, really hard to solve. Employee education, at all levels of the organization, is an obvious step, but some degree of shadow AI is likely to persist, and security teams are struggling with it as we speak.
Conclusion
I think this leaves us in an interesting moment. I believe the winners in the AI rat race are going to be those who were thoughtful and careful, applying AI solutions conservatively rather than upending the model of success that has worked for them so far to chase a shiny new thing. A slow and steady approach can help hedge against many risks, including customer backlash against AI.
Before I close, I just want to remind everyone that these attempts to build the equivalent of a palace when a condo would do fine have tangible consequences. We know that Elon Musk is polluting the Memphis suburbs with impunity by running data centers on illegal gas generators. Data centers are consuming double-digit percentages of all power generated in some US states. Water supplies are being exhausted or polluted by these same data centers that serve AI applications to users. Let’s remember that the choices we make are not abstract, and be conscientious about when we use AI and why. The 95% of failed AI projects weren’t just expensive in terms of the time and money businesses spent on them; they cost us all something.
Read more of my work at www.stephaniekirmer.com.
Further Reading
https://garymarcus.substack.com/p/gpt-5-overdue-overhyped-and-underwhelming
https://fortune.com/2025/08/18/sam-altman-openai-chatgpt5-launch-data-centers-investments
https://www.theinformation.com/articles/microsoft-scales-back-ambitions-ai-chips-overcome-delays
https://builtin.com/artificial-intelligence/meta-superintelligence-reorg
https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
https://www.ibm.com/think/topics/shadow-ai
https://futurism.com/elon-musk-memphis-illegal-generators
https://www.visualcapitalist.com/mapped-data-center-electricity-consumption-by-state
https://www.eesi.org/articles/view/data-centers-and-water-consumption