
Image by Editor | ChatGPT
# Introduction
Agentic AI is undoubtedly one of the most buzzworthy terms of the year. While not inherently a new paradigm within the umbrella of artificial intelligence, the term has gained renewed popularity largely due to its symbiotic relationship with large language models (LLMs) and other generative AI systems, which remove many of the practical limitations that both standalone LLMs and earlier autonomous agents faced.
This article explores 10 agentic AI terms and concepts that are key to understanding the latest AI paradigm everyone wants to talk about — but not everyone clearly understands.
# 1. Agentic AI
Definition: Agentic AI can be defined as a branch of AI that studies and develops AI entities (agents) capable of making decisions, planning actions, and executing tasks largely by themselves, with minimal human intervention required.
Why it’s key: Unlike other kinds of AI systems, agentic AI systems are designed to operate without the need for continuous human oversight, interaction, or adjustment, facilitating high-level automation of complex, multi-step workflows. This is particularly advantageous in sectors like marketing, logistics, and traffic control, among many others.
# 2. Agent
Definition: An AI agent, or agent for short, is a software entity that can continuously perceive information from its environment (physical or digital), reason about it, and autonomously take actions aimed at achieving specific goals. This often entails interacting with data sources or other systems and tools.
Why it’s key: Agents are the building blocks of agentic AI. They drive autonomy by combining the perception of data inputs or signals, reasoning, decision-making, and action. They learn to break down complex tasks to handle them more efficiently, eliminating the need for constant human guidance. This is normally done by applying three key stages that we will cover in the next three definitions: perception, reasoning, and action.
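To make that loop concrete, here is a minimal, framework-agnostic sketch in Python. Everything in it (the environment object, its methods, and the decision rules) is a hypothetical placeholder rather than the API of any real agentic AI framework:

```python
# Minimal sketch of an agent's perceive-reason-act loop.
# The environment, its methods, and the decision rules are hypothetical placeholders.

class SimpleAgent:
    def __init__(self, goal):
        self.goal = goal
        self.memory = []  # prior observations and decisions

    def perceive(self, environment):
        """Collect raw signals and store them as an internal observation."""
        observation = environment.read_sensors()  # hypothetical data source
        self.memory.append(observation)
        return observation

    def reason(self, observation):
        """Decide what to do next based on the observation and the agent's goal."""
        if observation.get("anomaly"):
            return {"type": "alert", "details": observation}
        return {"type": "no_op"}

    def act(self, decision, environment):
        """Apply the decision back to the environment."""
        if decision["type"] == "alert":
            environment.notify_operator(decision["details"])  # hypothetical side effect

    def step(self, environment):
        """One full perceive -> reason -> act cycle."""
        observation = self.perceive(environment)
        decision = self.reason(observation)
        self.act(decision, environment)
```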
# 3. Perception
Definition: In the context of agentic AI, perception is the process of collecting and interpreting information from the environment. For instance, in a multimodal LLM setting, this involves processing inputs like images, audio, or structured data and mapping them into an internal representation of the current context or state of the environment.
Why it’s key: Agentic AI systems are endowed with advanced perception skills based on real-time data analysis to comprehend their environment’s status at any given time.
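As a toy illustration (with a made-up traffic-feed schema), a perception step might normalize raw signals into a structured internal state like this:

```python
# Hypothetical perception step: map raw, heterogeneous input into a
# single structured state the agent can reason over.

def perceive(raw_feed: dict) -> dict:
    """Normalize a raw traffic feed (made-up schema) into an internal state."""
    avg_speed = float(raw_feed.get("avg_speed", 0.0))
    return {
        "timestamp": raw_feed.get("ts"),
        "avg_speed_kmh": avg_speed,
        "congestion": avg_speed < 20.0,  # simple derived signal
    }

state = perceive({"ts": "2024-06-01T08:00:00Z", "avg_speed": 12.5})
# state -> {'timestamp': '2024-06-01T08:00:00Z', 'avg_speed_kmh': 12.5, 'congestion': True}
```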
# 4. Reasoning
Definition: Once input information has been perceived, an AI agent proceeds to the reasoning stage, involving cognitive processes by which the agent draws conclusions, makes decisions, or addresses problems based on the perceived information, as well as prior knowledge it may already have. For example, using a multimodal LLM, an AI agent’s reasoning would entail interpreting a satellite image that shows traffic congestion in a city, cross-referencing it with historical traffic data and live feeds, and determining optimal diversion strategies for rerouting vehicles.
Why it’s key: Thanks to the reasoning stage, the agent can make plans, infer, and select actions that are more likely to achieve desired goals. This is often done by allowing the agent to invoke a machine learning model for specific tasks like classification and prediction.
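Continuing the toy traffic example above, a reasoning step could combine the perceived state with a prediction model to pick a course of action. The feature layout, thresholds, and strategies below are purely illustrative, and `demand_model` stands for any scikit-learn-style predictor:

```python
# Hypothetical reasoning step: combine the perceived state with a
# prediction model to choose a rerouting strategy.

def reason(state: dict, demand_model) -> dict:
    """Return a decision; `demand_model` is any predictor exposing .predict()."""
    if not state["congestion"]:
        return {"action": "none"}
    expected_load = demand_model.predict([[state["avg_speed_kmh"]]])[0]  # illustrative features
    if expected_load > 0.8:
        return {"action": "reroute", "strategy": "divert_to_ring_road"}
    return {"action": "adjust_signals"}
```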
# 5. Action
Definition: More often than not, decision-making as a result of reasoning is not the end of the AI agent’s problem-solving workflow. Instead, the decision made is a “call to action”, which may involve interacting with end users through natural language responses, modifying data accessible by the agent such as updating a store inventory database in real time upon sales, or automatically triggering processes such as adjusting energy output in a smart grid as a result of demand predictions or unexpected fluctuations.
Why it’s key: Actions are where the real value of AI agents is ultimately realized, and action mechanisms or protocols are how agents produce tangible results and apply changes with real impact on their environment.
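As a small sketch of the inventory scenario mentioned above, an action step could write the agent’s decision back to a database. This assumes a SQLite file with an `inventory` table already exists; the schema and item ID are made up:

```python
import sqlite3

# Hypothetical action step: after the agent registers a sale, it updates
# the store's inventory table (table schema and item IDs are made up).

def act_update_inventory(db_path: str, item_id: str, quantity_sold: int) -> None:
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            "UPDATE inventory SET stock = stock - ? WHERE item_id = ?",
            (quantity_sold, item_id),
        )
        conn.commit()
    finally:
        conn.close()

# Example usage (assumes store.db and its inventory table exist):
# act_update_inventory("store.db", "SKU-123", 2)
```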
# 6. Tool Use
Definition: Another commonly used term in the realm of agentic AI is tool use, which refers to agents’ ability to call external services by themselves. Most modern agentic AI systems utilize and communicate with tools such as APIs, databases, search engines, code execution environments, or other software systems to amplify their range of functionalities far beyond built-in capabilities.
Why it’s key: Thanks to tool use, AI agents can leverage ever-evolving, specialized systems and resources, making them far more versatile and effective and widening the scope of tasks they can handle.
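A minimal, framework-agnostic way to picture tool use is a registry of callable tools that the agent’s reasoning step can select from. The tools below are trivial stand-ins for real APIs or databases:

```python
import datetime

# Framework-agnostic sketch of tool use: the agent has a registry of callable
# tools and dispatches whichever one its reasoning step selects.

def get_current_time() -> str:
    """A trivial built-in 'tool'."""
    return datetime.datetime.now().isoformat()

def search_inventory(query: str) -> list[str]:
    """Hypothetical stand-in for a database or API lookup."""
    catalog = ["red shirt", "blue shirt", "green hat"]
    return [item for item in catalog if query in item]

TOOLS = {
    "get_current_time": get_current_time,
    "search_inventory": search_inventory,
}

def call_tool(name: str, **kwargs):
    """Dispatch a tool call chosen by the agent's reasoning step."""
    return TOOLS[name](**kwargs)

print(call_tool("search_inventory", query="shirt"))  # ['red shirt', 'blue shirt']
```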
# 7. Context Engineering
Definition: Context engineering is the design and management process of carefully curating the information an agent perceives, in order to optimize its performance on the intended tasks and maximize the relevance and reliability of the results it produces. In the context of LLM-based agents, this means going far beyond human-driven prompt engineering and providing the right context, tools, and prior knowledge at the right moment.
Why it’s key: Carefully engineered context helps agents acquire the most useful and relevant data for effective and accurate decision-making and action.
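A rough sketch of what this can look like in practice: rather than passing a single user prompt, the system assembles task instructions, retrieved knowledge, and tool descriptions into the context the agent actually sees. The section layout below is just one illustrative choice:

```python
# Illustrative context assembly: compose task instructions, retrieved
# knowledge, and tool descriptions into the context window the agent sees.

def build_context(task: str, retrieved_docs: list[str], tool_descriptions: list[str]) -> str:
    sections = [
        "## Task\n" + task,
        "## Relevant knowledge\n" + "\n".join(f"- {d}" for d in retrieved_docs),
        "## Available tools\n" + "\n".join(f"- {t}" for t in tool_descriptions),
    ]
    return "\n\n".join(sections)

context = build_context(
    task="Reroute traffic around the congested area.",
    retrieved_docs=["Historical congestion peaks at 08:00-09:30 on Main St."],
    tool_descriptions=["update_signals(plan): applies a new traffic-light plan"],
)
print(context)
```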
# 8. Model Context Protocol (MCP)
Definition: Model Context Protocol (MCP) is an open communication protocol, introduced by Anthropic, that is widely used in agentic AI systems. It standardizes how agents and other LLM-powered components connect to external tools, data sources, and services.
Why it’s key: MCP is to a great extent responsible for the recent agentic AI revolution, by providing structure and standardized approaches to facilitate transparent communication among different systems, applications, and interfaces, without depending on a specific model. It is also robust against constant changes to components in the system.
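For a feel of what MCP traffic looks like, the snippet below shows the rough shape of a tool-call request. MCP builds on JSON-RPC 2.0, and its specification defines methods such as `tools/list` and `tools/call`; the exact payload here is a simplified illustration with a hypothetical tool name, not a faithful implementation of the spec:

```python
import json

# Simplified shape of an MCP tool-call request (JSON-RPC 2.0 style).
# Field details are illustrative; consult the MCP spec for the full schema.

tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_inventory",            # tool exposed by an MCP server (hypothetical)
        "arguments": {"query": "blue shirt"},  # arguments for that tool
    },
}

print(json.dumps(tool_call_request, indent=2))
```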
# 9. LangChain
Definition: Although not exclusively agentic AI-related, the popular open-source framework LangChain for LLM-powered application development has embraced agentic AI to the point of becoming one of today’s most utilized agentic AI frameworks. LangChain provides support for chaining prompts, external tool use, memory management, and, of course, building AI agents that leverage automation to support the execution of the aforementioned tasks in LLM applications.
Why it’s key: LangChain provides a dedicated infrastructure to build complex, efficient, multi-step LLM workflows integrated with agentic AI.
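As a minimal taste of the framework, the sketch below chains a prompt template with a chat model using LangChain’s expression language. Package layout and class names vary across LangChain versions; this assumes the `langchain-core` and `langchain-openai` packages, an OpenAI API key in the environment, and an example model name:

```python
# Minimal LangChain sketch: compose a prompt template and a chat model into a chain.
# Assumes langchain-core and langchain-openai are installed and OPENAI_API_KEY is set.

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Summarize the current traffic situation given this data: {data}"
)
llm = ChatOpenAI(model="gpt-4o-mini")  # model name is just an example

chain = prompt | llm  # LangChain expression language composition

result = chain.invoke({"data": "avg speed 12 km/h on Main St, accident reported"})
print(result.content)
```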
# 10. AgentFlow
Definition: Another framework gaining popularity recently is AgentFlow, which emphasizes code-free, modular agent building. Using a visual interface, it is possible to create and configure workflows (or simply flows, hence the framework’s name) that AI agents can then use to perform complex tasks autonomously.
Why it’s key: Customization is a key factor in AgentFlow, helping businesses in several sectors create, monitor, and orchestrate advanced AI agents with personalized capabilities and settings.
Note: At the time of writing, AgentFlow is a very recently emerging term that is being used by several companies to name agentic AI frameworks whose characteristics align with those we just described, although this may quickly evolve.
# Wrapping Up
This article examined the significance of ten key terms surrounding one of today’s most rapidly emerging fields within AI: agentic AI. Based on the concept of agents capable of performing a wide range of tasks by themselves, we described and demystified several terms related to the process, methods, protocols, and common frameworks surrounding agentic AI systems.
Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning & LLMs. He trains and guides others in harnessing AI in the real world.