The hype around LLMs is now evolving into the hype of Agentic AI. While I hope this article doesn't fall into the "over-hyped" category, I believe this topic is important to learn. Coming from a data and analytics background, I find that getting familiar with it is very helpful in day-to-day work and in preparing for how it may reshape current processes.
My own journey with Agentic AI is still quite new (after all, it’s a relatively new topic), and I’m still learning along the way. In this series of articles, I’d like to share a beginner‑friendly, step‑by‑step guide to developing Agentic AI based on my personal experience—focusing on the OpenAI Agents SDK framework. Some topics I plan to cover in this series include: tool‑use agents, multi‑agent collaboration, structured output, generating data visualizations, chat features, and more. So stay tuned!
In this article, we’ll start by building a basic agent and then enhance it into a tool‑using agent capable of retrieving data from an API. Finally, we’ll wrap everything in a simple Streamlit UI so users can interact with the agent we build.
Throughout this guide, we’ll stick to a single use case: creating a weather assistant app. I chose this example because it’s relatable for everyone and covers most of the topics I plan to share. Since the use case is simple and generic, you can easily adapt this guide for your own projects.
The link to the GitHub repository and the deployed Streamlit app is provided at the end of this article.
A Brief Intro to OpenAI Agents SDK
The OpenAI Agents SDK is a Python-based framework that allows us to create an agentic AI system in a simple and easy-to-use way [1]. As a beginner myself, I found this statement to be quite true, which makes the learning journey feel less intimidating.
At the core of this framework are “Agents”—Large Language Models (LLMs) that we can configure with specific instructions and tools they can use.
As we already know, an LLM is trained on a vast amount of data, giving it strong capabilities in understanding human language and generating text or images. When combined with clear instructions and the ability to interact with tools, it becomes more than just a generator—it can act and becomes an agent [2].
One practical use of tools is enabling an agent to retrieve factual data from external sources. This means the LLM no longer relies solely on its (often outdated) training data, allowing it to produce more accurate and up‑to‑date results.
In this article, we will focus on this advantage by building an agent that can retrieve “real‑time” data from an API. Let’s get started!
Set Up the Environment
Create a requirements.txt file containing the packages below. I prefer using requirements.txt for two reasons: reusability and preparing the project for Streamlit deployment. The two key packages are openai-agents and streamlit; I also list python-dotenv and requests explicitly, since the scripts in this article import them directly.

openai-agents
streamlit
python-dotenv
requests
Next, set up a virtual environment named venv and install the packages listed above. Run the following commands in your terminal:

python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
Lastly, since we will use the OpenAI API to call the LLM, you need an API key (get your API key here). Store this key in a .env file as follows. Important: make sure you add .env to your .gitignore file if you are using Git for this project.

OPENAI_API_KEY=your_openai_key_here
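If you want to verify that the key is picked up correctly before wiring up any agent, a quick optional sanity check looks like this (just a temporary snippet, not part of the app):

from dotenv import load_dotenv
import os

load_dotenv()  # Reads the .env file into environment variables
# Prints True if OPENAI_API_KEY is available, without revealing the key itself
print("OPENAI_API_KEY loaded:", os.getenv("OPENAI_API_KEY") is not None)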
Once everything is set up, you’re good to go!
A Simple Agent
Let's begin with a simple agent by creating a Python file called 01-single-agent.py.
Import Libraries
The first thing we need to do in the script is import the necessary libraries:
from agents import Agent, Runner
import asyncio
from dotenv import load_dotenv
load_dotenv()
From the Agents SDK package, we use Agent to define the agent and Runner to run it. We also import asyncio to enable our program to perform multiple tasks without waiting for one to finish before starting another.

Lastly, load_dotenv from the dotenv package loads the environment variables we defined earlier in the .env file. In our case, this includes OPENAI_API_KEY, which is used by default when we define and call an agent.
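As a side note, if you prefer not to rely on the environment variable being picked up implicitly, the SDK also exposes set_default_openai_key to set the key explicitly. A minimal sketch, assuming you have already loaded the key into OPENAI_API_KEY yourself:

from agents import set_default_openai_key
from dotenv import load_dotenv
import os

load_dotenv()
# Explicitly register the key with the SDK instead of relying on the default lookup
set_default_openai_key(os.environ["OPENAI_API_KEY"])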
Define a Simple Agent
Diagram of the simple Weather Assistant agent (generated using GraphViz).
Next, we will define a simple agent called Weather Assistant.
agent = Agent(
    name="Weather Assistant",
    instructions="You provide accurate and concise weather updates based on user queries in plain language."
)
An agent can be defined with several properties. In this simple example, we only configure the name and the instructions for the agent. If needed, we can also specify which LLM model to use. For instance, if we want to use a smaller model such as gpt-4o-mini (currently, the default model is gpt-4o), we can add the configuration as shown below.
agent = Agent(
    name="Weather Assistant",
    instructions="You provide accurate and concise weather updates based on user queries in plain language.",
    model="gpt-4o-mini"
)
There are several other parameters that we will cover later in this article and in the next one. For now, we will keep the model configuration simple as shown above.
After defining the agent, the next step is to create an asynchronous function that will run the agent.
async def run_agent():
    result = await Runner.run(agent, "What's the weather like today in Jakarta?")
    print(result.final_output)
The Runner.run(agent, ...) method calls the agent with the query "What's the weather like today in Jakarta?". The await keyword pauses the function until the task is complete, allowing other asynchronous tasks (if any) to run in the meantime. The result of this task is stored in the result variable. To view the output, we print result.final_output to the terminal.

The last part we need to add is the program's entry point to execute the function when the script runs. We use asyncio.run to execute the run_agent function.
if __name__ == "__main__":
    asyncio.run(run_agent())
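If you'd rather not deal with asyncio in a small script like this, the SDK also provides Runner.run_sync, a blocking helper that manages the event loop for you. A minimal alternative sketch of the same agent:

from agents import Agent, Runner
from dotenv import load_dotenv

load_dotenv()

agent = Agent(
    name="Weather Assistant",
    instructions="You provide accurate and concise weather updates based on user queries in plain language."
)

# run_sync blocks until the agent finishes, so no async function or asyncio.run is needed
result = Runner.run_sync(agent, "What's the weather like today in Jakarta?")
print(result.final_output)

I stick with the asynchronous version in this article, since it carries over more naturally once we add a UI and more tools.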
Run the Simple Agent
Now, let’s run the script in the terminal by executing:
python 01-single-agent.py
The result will most likely be that the agent says it cannot provide the information. This is expected because the LLM was trained on past data and does not have access to real-time weather conditions.
I can’t provide real-time information, but you can check a reliable weather website or app for the latest updates on Jakarta’s weather today.
In the worst case, the agent might hallucinate, returning a made-up temperature and giving suggestions based on that value. To handle this, we will later have the agent call an API to retrieve the actual weather conditions.
Using Trace
One of the useful features of the Agents SDK is Trace, which allows you to visualize, debug, and monitor the workflow of the agent you’ve built and executed. You can access the tracing dashboard here: https://platform.openai.com/traces.
For our simple agent, the trace will look like this:

In this dashboard, you can find useful information about how the workflow is executed, including the input and output of each step. Since this is a simple agent, we only have one agent run. However, as the workflow becomes more complex, this trace feature will be extremely helpful for tracking and troubleshooting the process.
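A small related tip: you can group one or more runs under a custom workflow name in the dashboard by wrapping them with the SDK's trace context manager. A sketch (the workflow name is just an example):

from agents import Agent, Runner, trace
import asyncio
from dotenv import load_dotenv

load_dotenv()

agent = Agent(
    name="Weather Assistant",
    instructions="You provide accurate and concise weather updates based on user queries in plain language."
)

async def run_agent():
    # Both runs below show up under a single "Weather Assistant workflow" trace
    with trace("Weather Assistant workflow"):
        jakarta = await Runner.run(agent, "What's the weather like today in Jakarta?")
        bandung = await Runner.run(agent, "What's the weather like today in Bandung?")
        print(jakarta.final_output)
        print(bandung.final_output)

if __name__ == "__main__":
    asyncio.run(run_agent())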
User Interface with Streamlit
Previously, we built a simple script to define and call an agent. Now, let’s make it more interactive by adding a user interface with Streamlit [3].
Let's create a script named 02-single-agent-app.py as shown below:
from agents import Agent, Runner
import asyncio
import streamlit as st
from dotenv import load_dotenv
load_dotenv()
agent = Agent(
    name="Weather Assistant",
    instructions="You provide accurate and concise weather updates based on user queries in plain language."
)

async def run_agent(user_input: str):
    result = await Runner.run(agent, user_input)
    return result.final_output

def main():
    st.title("Weather Assistant")
    user_input = st.text_input("Ask about the weather:")

    if st.button("Get Weather Update"):
        with st.spinner("Thinking..."):
            if user_input:
                agent_response = asyncio.run(run_agent(user_input))
                st.write(agent_response)
            else:
                st.write("Please enter a question about the weather.")

if __name__ == "__main__":
    main()
Compared to the previous script, we now import the Streamlit library to build an interactive app. The agent definition remains the same, but we modify the run_agent function to accept user input and pass it to the Runner.run function. Instead of printing the result directly to the console, the function now returns it.

In the main function, we use Streamlit components to build the interface: setting the title, adding a text box for user input, and creating a button that triggers the run_agent function. The agent's response is stored in agent_response and displayed using the st.write component. To run this Streamlit app in your browser, use the following command:
streamlit run 02-single-agent-app.py

To stop the app, press Ctrl + C in your terminal.
To keep the article focused on the Agents SDK framework, I kept the Streamlit app as simple as possible. However, that doesn’t mean you need to stop here. Streamlit offers a wide variety of components that allow you to get creative and make your app more intuitive and engaging. For a complete list of components, check the Streamlit documentation in the reference section.
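As one small illustration of where you could take it (purely a sketch of mine, not part of the final app), the variant below swaps the free-text box for a selectbox of example questions:

import asyncio
import streamlit as st

async def run_agent(user_input: str) -> str:
    # Placeholder so the sketch runs on its own; in the real app this is the
    # run_agent function that calls Runner.run(agent, user_input)
    return f"(agent response for: {user_input})"

def main():
    st.title("Weather Assistant")
    example_questions = [
        "What's the weather like today in Jakarta?",
        "Do I need an umbrella in Bandung right now?",
        "Is it a good evening for a run in Surabaya?",
    ]
    user_input = st.selectbox("Pick a question:", example_questions)

    if st.button("Get Weather Update"):
        with st.spinner("Thinking..."):
            st.write(asyncio.run(run_agent(user_input)))

if __name__ == "__main__":
    main()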
From this point onward, we will continue using this basic Streamlit structure.
A Tool-Use Agent
As we observed in the previous section, the agent struggles when asked about the current weather condition. It may return no information or, worse, produce a hallucinated answer. To ensure our agent uses real data, we can allow it to call an external API so it can retrieve actual information.
This process is a practical example of using Tools in the Agents SDK. In general, tools enable an agent to take actions—such as fetching data, running code, calling an API (as we will do shortly), or even interacting with a computer [1]. Using tools and taking actions is one of the key capabilities that distinguishes an agent from a typical LLM.
Let's dive into the code. First, create another file named 03-tooluse-agent-app.py.
Import Libraries
We will need the following libraries:
from agents import Agent, Runner, function_tool
import asyncio
import streamlit as st
from dotenv import load_dotenv
import requests
load_dotenv()
Notice that from the Agents SDK, we now import an additional module: function_tool. Since we will call an external API, we also import the requests library.
Define the Function Tool
The API we will use is Open‑Meteo [4], which offers free access for non‑commercial use. It provides many features, including weather forecasts, historical data, air quality, and more. In this article, we will start with the simplest feature: retrieving current weather data.
As an additional note, Open-Meteo provides its own library, openmeteo-requests. However, in this guide I use a more generic approach with the requests module, with the intention of making the code reusable for other purposes and APIs.
Here is how we can define a function to retrieve the current weather for a specific location using Open-Meteo:
@function_tool
def get_current_weather(latitude: float, longitude: float) -> dict:
    """
    Fetches current weather data for a given location using the Open-Meteo API.

    Args:
        latitude (float): The latitude of the location.
        longitude (float): The longitude of the location.

    Returns:
        dict: A dictionary containing the weather data, or an error message if the request fails.
    """
    try:
        url = "https://api.open-meteo.com/v1/forecast"
        params = {
            "latitude": latitude,
            "longitude": longitude,
            "current": "temperature_2m,relative_humidity_2m,dew_point_2m,apparent_temperature,precipitation,weathercode,windspeed_10m,winddirection_10m",
            "timezone": "auto"
        }
        response = requests.get(url, params=params)
        response.raise_for_status()  # Raise an error for HTTP issues
        return response.json()
    except requests.RequestException as e:
        return {"error": f"Failed to fetch weather data: {e}"}
The function takes latitude and longitude as inputs to identify the location and construct the API request. The request parameters include metrics such as temperature, humidity, and wind speed. If the API request succeeds, the function returns the JSON response as a Python dictionary; if an error occurs, it returns an error message instead.

To make the function accessible to the agent, we decorate it with @function_tool, allowing the agent to call it when the user's query relates to current weather data.

Additionally, we include a docstring in the function, providing both a description of its purpose and details of its arguments. Including a docstring is extremely helpful for the agent to understand how to use the function.
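If you are curious what the agent actually receives, @function_tool turns the function into a FunctionTool object whose description and parameter schema are generated from the docstring and type hints. A quick way to peek at it (a throwaway snippet, based on the FunctionTool fields exposed by the SDK):

import json

# After decoration, get_current_weather is a FunctionTool object rather than a plain function
print(get_current_weather.name)
print(get_current_weather.description)
print(json.dumps(get_current_weather.params_json_schema, indent=2))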
Define a Tool-Use Agent

Diagram of the tool-use Weather Specialist Agent (generated using GraphViz).
After defining the function, let’s move on to defining the agent.
weather_specialist_agent = Agent(
    name="Weather Specialist Agent",
    instructions="You provide accurate and concise weather updates based on user queries in plain language.",
    tools=[get_current_weather],
    tool_use_behavior="run_llm_again"
)

async def run_agent(user_input: str):
    result = await Runner.run(weather_specialist_agent, user_input)
    return result.final_output
For the most part, the structure is the same as in the previous section. However, since we are now using tools, we need to add some additional parameters.
The first is tools, a list of tools the agent can use; in this example, we only provide the get_current_weather tool. The second is tool_use_behavior, which configures how tool usage is handled. For this agent, we set it to "run_llm_again", meaning that after receiving the response from the API, the LLM processes it further and presents it in a clear, easy-to-read format. Alternatively, you can use "stop_on_first_tool", in which case the LLM does not process the tool's output further. We will experiment with this option later.
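Related to this, if you ever catch the model answering from memory instead of calling the tool, the SDK also lets you force tool use through ModelSettings and its tool_choice option. A sketch of that variant, reusing the agent defined above:

from agents import Agent, ModelSettings

weather_specialist_agent = Agent(
    name="Weather Specialist Agent",
    instructions="You provide accurate and concise weather updates based on user queries in plain language.",
    tools=[get_current_weather],
    tool_use_behavior="run_llm_again",
    # "required" forces the model to call at least one tool before producing an answer
    model_settings=ModelSettings(tool_choice="required")
)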
The rest of the script follows the same structure we used earlier to build the main Streamlit function.
def main():
    st.title("Weather Assistant")
    user_input = st.text_input("Ask about the weather:")

    if st.button("Get Weather Update"):
        with st.spinner("Thinking..."):
            if user_input:
                agent_response = asyncio.run(run_agent(user_input))
                st.write(agent_response)
            else:
                st.write("Please enter a question about the weather.")

if __name__ == "__main__":
    main()
Make sure to save the script, then run it in the terminal:
streamlit run 03-tooluse-agent-app.py
You can now ask a question about the weather in your city. For example, when I asked about the current weather in Jakarta—at the time of writing this (around four o’clock in the morning)—the response was as shown below:

Now, instead of hallucinating, the agent can provide a human-readable description of the current weather conditions in Jakarta. You might recall that the get_current_weather function requires latitude and longitude as arguments. In this case, we rely on the LLM to supply them, as it is likely trained on basic location information. A future improvement would be to add a tool that retrieves a more accurate geographical location based on a city name, as sketched below.
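For reference, here is one possible shape for that improvement: a second function tool that resolves a city name through Open-Meteo's geocoding endpoint. Treat it as a sketch to verify against the Open-Meteo docs rather than production code; it reuses the same function_tool and requests imports as the script above.

@function_tool
def get_coordinates(city: str) -> dict:
    """
    Looks up the latitude and longitude of a city using the Open-Meteo geocoding API.

    Args:
        city (str): The name of the city to look up.

    Returns:
        dict: The best-matching location (name, country, latitude, longitude),
              or an error message if the lookup fails.
    """
    try:
        url = "https://geocoding-api.open-meteo.com/v1/search"
        response = requests.get(url, params={"name": city, "count": 1})
        response.raise_for_status()
        results = response.json().get("results") or []
        if not results:
            return {"error": f"No match found for '{city}'."}
        top = results[0]
        return {
            "name": top.get("name"),
            "country": top.get("country"),
            "latitude": top.get("latitude"),
            "longitude": top.get("longitude"),
        }
    except requests.RequestException as e:
        return {"error": f"Failed to fetch coordinates: {e}"}

With this in place, the agent definition would list both tools, e.g. tools=[get_coordinates, get_current_weather], so the agent can first resolve the city and then fetch its weather.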
(Optional) Use “stop_on_first_tool”
Out of curiosity, let's change the tool_use_behavior parameter to "stop_on_first_tool" and see what it returns.

As expected, without the LLM’s help to parse and transform the JSON response, the output is harder to read. However, this behavior can be useful in scenarios where you need a raw, structured result without any additional processing by the LLM.
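If you do keep "stop_on_first_tool", one small tweak (my own suggestion, not part of the original app) is to render the raw result with st.json when it comes back as a dictionary, which at least gives you indentation and collapsible fields:

def main():
    st.title("Weather Assistant")
    user_input = st.text_input("Ask about the weather:")

    if st.button("Get Weather Update"):
        with st.spinner("Thinking..."):
            if user_input:
                agent_response = asyncio.run(run_agent(user_input))
                # With "stop_on_first_tool" the final output is the tool's raw result,
                # so st.json renders it more readably than st.write
                if isinstance(agent_response, dict):
                    st.json(agent_response)
                else:
                    st.write(agent_response)
            else:
                st.write("Please enter a question about the weather.")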
Improved Instruction
Now, let's change the tool_use_behavior parameter back to "run_llm_again".

As we've seen, using an LLM is very helpful for parsing the result. We can take this a step further by giving the agent more detailed instructions: specifically, asking for a structured output and practical suggestions. To do this, update the instructions parameter as follows:
instructions = """
You are a weather assistant agent.
Given current weather data (including temperature, humidity, wind speed/direction, precipitation, and weather codes), provide:
1. A clear and concise explanation of the current weather conditions.
2. Practical suggestions or precautions for outdoor activities, travel, health, or clothing based on the data.
3. If any severe weather is detected (e.g., heavy rain, thunderstorms, extreme heat), highlight necessary safety measures.
Format your response in two sections:
Weather Summary:
- Briefly describe the weather in plain language.
Suggestions:
- Offer actionable advice relevant to the weather conditions.
"""
After saving the changes, rerun the app. Using the same question, you should now receive a clearer, well‑structured response along with practical suggestions.

The final 03-tooluse-agent-app.py script is shown below.
from agents import Agent, Runner, function_tool
import asyncio
import streamlit as st
from dotenv import load_dotenv
import requests
load_dotenv()
@function_tool
def get_current_weather(latitude: float, longitude: float) -> dict:
    """
    Fetches current weather data for a given location using the Open-Meteo API.

    Args:
        latitude (float): The latitude of the location.
        longitude (float): The longitude of the location.

    Returns:
        dict: A dictionary containing the weather data, or an error message if the request fails.
    """
    try:
        url = "https://api.open-meteo.com/v1/forecast"
        params = {
            "latitude": latitude,
            "longitude": longitude,
            "current": "temperature_2m,relative_humidity_2m,dew_point_2m,apparent_temperature,precipitation,weathercode,windspeed_10m,winddirection_10m",
            "timezone": "auto"
        }
        response = requests.get(url, params=params)
        response.raise_for_status()  # Raise an error for HTTP issues
        return response.json()
    except requests.RequestException as e:
        return {"error": f"Failed to fetch weather data: {e}"}

weather_specialist_agent = Agent(
    name="Weather Specialist Agent",
    instructions="""
    You are a weather assistant agent.
    Given current weather data (including temperature, humidity, wind speed/direction, precipitation, and weather codes), provide:
    1. A clear and concise explanation of the current weather conditions.
    2. Practical suggestions or precautions for outdoor activities, travel, health, or clothing based on the data.
    3. If any severe weather is detected (e.g., heavy rain, thunderstorms, extreme heat), highlight necessary safety measures.

    Format your response in two sections:

    Weather Summary:
    - Briefly describe the weather in plain language.

    Suggestions:
    - Offer actionable advice relevant to the weather conditions.
    """,
    tools=[get_current_weather],
    tool_use_behavior="run_llm_again"  # or "stop_on_first_tool"
)

async def run_agent(user_input: str):
    result = await Runner.run(weather_specialist_agent, user_input)
    return result.final_output

def main():
    st.title("Weather Assistant")
    user_input = st.text_input("Ask about the weather:")

    if st.button("Get Weather Update"):
        with st.spinner("Thinking..."):
            if user_input:
                agent_response = asyncio.run(run_agent(user_input))
                st.write(agent_response)
            else:
                st.write("Please enter a question about the weather.")

if __name__ == "__main__":
    main()
Conclusion
At this point, we have explored how to create a simple agent and why we need a tool‑using agent—one powerful enough to answer specific questions about real‑time weather conditions that a simple agent cannot handle. We have also built a simple Streamlit UI to interact with this agent.
This first article focuses only on the core concept of how agentic AI can interact with a tool, rather than relying solely on its training data to generate output.
In the next article, we will shift our focus to another important concept of agentic AI: agent collaboration. We will cover why a multi‑agent system can be more effective than a single “super” agent, and explore different ways agents can interact with each other.
I hope this article has provided helpful insights to start your journey into these topics.
References
[1] OpenAI. (2025). OpenAI Agents SDK documentation. Retrieved July 19, 2025, from https://openai.github.io/openai-agents-python/
[2] Bornet, P., Wirtz, J., Davenport, T. H., De Cremer, D., Evergreen, B., Fersht, P., Gohel, R., Khiyara, S., Sund, P., & Mullakara, N. (2025). Agentic Artificial Intelligence: Harnessing AI Agents to Reinvent Business, Work, and Life. World Scientific Publishing Co.
[3] Streamlit Inc. (2025). Streamlit documentation. Retrieved July 19, 2025, from https://docs.streamlit.io/
[4] Open-Meteo. Open-Meteo API documentation. Retrieved July 19, 2025, from https://open-meteo.com/en/docs
You can find the complete source code used in this article in the following repository: agentic-ai-weather | GitHub Repository. Feel free to explore, clone, or fork the project to follow along or build your own version.
If you’d like to see the app in action, I’ve also deployed it here: Weather Assistant Streamlit
Lastly, let’s connect on LinkedIn!