Half a year is already behind us, although it seemed longer, given all the (AI) novelties.
Expectations for AI developments were high for 2025, and these are a few top predictions I followed up on at the end of 2024 👇🏼:
- (1) “The future is agentic”
- (2) “Security will get tighter and tougher through — of course — AI”
- (3) “It will be the year of AI profit”
Reflecting on last year’s AI prophecies and the new insights I’ve gained, it’s time for a brief retrospective on AI progress so far.
The oracles were (somewhat) right.
While it’s fair to admit that AI agents still have (big) drawbacks in areas like security and reliability, it’s equally fair to say that this “little” digital entity, which relies, among other things, on “people’s spirits” (LLMs), is, simply put, powerful.
Powerful because it’s proactive and can act as human glue, independently producing outputs that previously required purely human engagement. On top of this, it can be invisible and deliver a job “quietly in the backend.”
I won’t go into the technicalities of how these quiet, invisible traits are achieved, but it’s worth mentioning how agents, by accessing tools (via MCP) and other agents (via the A2A protocol), and by leveraging workflows and human “commands” as triggers, will change the current standardised task-delivery processes in every white-collar job.
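To make the “accessing tools (via MCP)” part a bit more tangible, here is a minimal sketch of the agent side using the official `mcp` Python SDK. Treat it as an illustration under assumptions: the server script name (`my_tool_server.py`), the tool name (`search_docs`), and its arguments are hypothetical placeholders, not part of any real server.

```python
# Minimal sketch: an agent-side MCP client that discovers a server's tools
# and calls one of them. Requires the official SDK: pip install mcp
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch a local MCP server as a subprocess and talk to it over stdio.
    # "my_tool_server.py" is a placeholder for whatever server you run.
    server = StdioServerParameters(command="python", args=["my_tool_server.py"])

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server offers...
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

            # ...then invoke a tool by name ("search_docs" is hypothetical).
            result = await session.call_tool(
                "search_docs", arguments={"query": "quarterly report"}
            )
            print(result.content)

asyncio.run(main())
```

The point of the protocol is exactly this decoupling: the agent only needs to know how to list and call tools, not how each one is implemented, which is what lets it do its job “quietly in the backend.”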
This means we will experience the rise of a virtual workforce and “self-driving” business processes. How soon this will arrive is another topic, and one on which there’s no clear consensus yet.
However, the future has already started to look agentic, with security tightening through AI. New agentic projects such as OpenAI’s Operator, GitHub’s Copilot Coding Agent, Google’s Co-scientist and Project Astra, and Microsoft’s Entra Agent ID and Security Copilot Agents are only a few examples of this.
It’s relevant to note that agentic AI is not only a focus of Big Tech. From firsthand experience, I can say it’s also on the horizon for companies that have started positioning AI in their business strategies. Based on the specific market knowledge I have, I believe Deloitte’s prophecy won’t be far off at the end of this year:
“25% of enterprises using GenAI are expected to deploy AI agents in 2025, growing to 50% by 2027.”
Of course, the deployment of agents doesn’t necessarily imply their productionisation; it also covers pilot projects and proofs of concept. Thus, it’s no wonder Gartner predicts that over 40% of agentic AI projects will be cancelled by that same horizon, i.e., 2027. If we accept that every development in Generative AI can be labelled a research project, this failure rate is explainable.
Adding to this, and considering the novelty of AI use cases, profitability itself is hard to assess for now. It’s too early to rely on reports of how much profit margins can increase when the sample is not big enough and the end numbers are not fully transparent.
On the other hand, what is already being reported is the impact of AI on entry-level jobs. So, the real question that comes up quite often is…
Will the Jobocalypse happen?
Ever since I read the AI-2027 scenario, a seed of fear about the future of the workplace has been growing a bit faster in me.
For those who missed it, it’s a forecast (and only that, so take it cum grano salis) projecting a rapid progression from present-day AI to world-altering superintelligence by 2027.
The story is driven by a high-stakes technological race between the US and China and explores the societal and geopolitical consequences by predicting the next AI developments:
- In mid-2025, the world meets Agent-0, the first generation of AI assistants that are interesting but flawed, requiring constant human oversight.
- Less than a year later, in early 2026, Agent-1 arrives as a commercially successful model that excels at coding but needs humans to manage its workflow and handle any task requiring long-term planning.
- The real acceleration begins in January 2027, when the internal model Agent-2 becomes so powerful at automating AI research that its creators keep it under wraps. At this point, the primary human advantage has boiled down to “research taste,” or the intuition for which direction research should take.
- Just two months later, in March 2027, that advantage shrinks further with Agent-3, a system that achieves superhuman coding ability and can automate massive engineering tasks, leaving humans to act as high-level managers.
- The journey reaches its conclusion in October 2027 with Agent-4, a superhuman AI researcher so advanced that human contributions become a bottleneck; it works so fast that a week for the AI is a year of scientific progress, leaving its human creators struggling to even comprehend the discoveries being made.
There’s more to the story, so I would recommend you read it.
Looking at where we are today, halfway through 2025, Agent-0 and traces of Agent-1 capabilities are already present, but their implementation still depends heavily on the type of task and problem context.
OK, we can laugh at Anthropic’s Claudius agent for having an identity crisis about whether it’s a human wearing a blue blazer and a red tie, but that and similar anecdotes don’t change the fact that today’s AI capabilities are already partially automating tasks once handled by entry-level employees and, in turn, dampening demand for those roles.
This concern was highlighted by an NYT article reporting that unemployment among recent U.S. college graduates has jumped to an unusually high 5.8% in recent months.
Alongside this negative trend, another harmful change that can happen in business is a lack of investment in mentorship and coaching programs, a move that would benefit those who like to hoard knowledge that is not yet standardised and easily automated by LLMs.
What’s scary in this scenario are the consequences young people could face if business loses its mentorship culture. If no one shows them understanding or gives them room to fail on low-risk tasks (since AI agents will handle those), will they have to learn decision-making by solving high-stakes tasks without proper monetary compensation?
That said, I feel the stress that you, my Gen-Z friends, are going through right now. Because we all know that if AI “comes for you” in the next one to five years, it won’t stop there.
We will all be impacted, and the one thing that could ‘save us’ is governmental regulation and legislation, even tighter than the EU AI Act.
Regardless of the talk about how AI will generate new jobs (ideally not the “sin eater” one 👇🏼) or how we will have universal basic income, I hope a jobocalyptic scenario won’t happen, provided we get protection and play our part, which means acquiring the skills and education necessary for the hybrid (AI + human) market.
With this in mind, I will end today’s post…
Thank You for Reading!
If you found this post valuable, feel free to share it with your network. 👏
Stay connected for more stories on Medium ✍️ and LinkedIn 🖇️.
This post was originally published on Medium in the AI Advances publication.
Credit where credit is due:
The “jobocalyptic” musings in this post were inspired by the following sources: