
Automating Ticket Creation in Jira With the OpenAI Agents SDK: A Step-by-Step Guide

Imagine that right after finishing a meeting with a colleague, all of your discussed items were already in your project-management tool. No need to write anything down during the meeting, nor to manually create the corresponding tickets! That was the idea behind this short experimental project.

In this step-by-step guide we will create the Python application “TaskPilot” using OpenAI’s Agents SDK to automatically create Jira issues given a meeting transcript.

The Challenge: From Conversation to Actionable Tasks

Given the transcript of a meeting, automatically create issues in a Jira project that correspond to what was discussed in the meeting.

The Solution: Automating with OpenAI Agents

Using the OpenAI Agents SDK, we will implement an agent workflow that:

  1. Receives and reads a meeting transcript.
  2. Uses an AI agent to extract action items from the conversation.
  3. Uses another AI agent to create Jira issues from those action items.
Agent flow: Image created by the author

The OpenAI Agents SDK

The OpenAI Agents SDK is a Python library for creating AI agents programmatically that can interact with tools, use MCP servers, or hand off tasks to other agents.

Here are some of the key features of the SDK:

  • Agent Loop: A built-in agent loop that handles the back-and-forth communication with the LLM until the agent is done with its task.
  • Function Tools: Turns any Python function into a tool, with automatic schema generation and Pydantic-powered validation.
  • MCP Support: Allows agents to use MCP servers to extend their capabilities for interacting with the outside world.
  • Handoffs: Allows agents to delegate tasks to other agents depending on their expertise/role.
  • Guardrails: Validates the inputs and outputs of the agents. Aborts execution early if the agent receives invalid input.
  • Sessions: Automatically manages the conversation history. Ensures that the agents have the context they need to perform their tasks.
  • Tracing: Provides a tracing context manager that lets you visualize the entire execution flow of the agents, making it easy to debug and understand what’s happening under the hood.
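
To make these features more concrete, here is a minimal, self-contained sketch of an agent with a single function tool. The weather tool, agent name and prompt are invented purely for illustration and are not part of TaskPilot:

# Illustrative only: a minimal agent with one function tool

from agents import Agent, Runner, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Return a (hard-coded) weather report for the given city."""
    return f"The weather in {city} is sunny."

weather_agent = Agent(
    name="Weather Assistant",
    instructions="Answer weather questions using the get_weather tool.",
    tools=[get_weather],
)

result = Runner.run_sync(weather_agent, "How is the weather in Berlin?")
print(result.final_output)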

Now, let’s dive into the implementation! 


Implementation

We will implement our project in 8 simple steps:

  1. Setting up the project structure
  2. The TaskPilotRunner
  3. Defining our data models
  4. Creating the agents
  5. Providing tools
  6. Configuring the application
  7. Bringing it all together in main.py
  8. Monitoring our runs in the OpenAI Dev Platform

Let’s get hands-on!

Step 1: Setting Up the Project Structure

First, let’s create the basic structure of our project:

  • The taskpilot directory: will contain our main application logic.
  • The local_agents directory: will contain the definitions of the agents we use in this project (named “local_agents” so that it does not interfere with the agents package from the OpenAI library).
  • The utils directory: for helper functions, a config parser and data models.
taskpilot_repo/
├── config.yml
├── .env
├── README.md
├── taskpilot/
│   ├── main.py
│   ├── taskpilot_runner.py
│   ├── local_agents/
│   │   ├── __init__.py
│   │   ├── action_items_extractor.py
│   │   └── tickets_creator.py
│   └── utils/
│       ├── __init__.py
│       ├── agents_tools.py
│       ├── config_parser.py
│       ├── jira_interface_functions.py
│       └── models.py

Step 2: The TaskPilotRunner

The TaskPilotRunner class in taskpilot/taskpilot_runner.py will be the heart of our application. It will orchestrate the entire workflow: extracting action items from the meeting transcript and then creating the Jira tickets from those action items. At the same time it will activate the built-in tracing from the Agents SDK to collect a record of events during the agent run, which helps with debugging and monitoring the agent workflows.

Let’s start with the implementation:

  • In the __init__() method we will create the two agents used for this workflow.
  • The run() method will be the most important method of the TaskPilotRunner class. It receives the meeting transcript and passes it to the agents to create the Jira issues. The agents will be started and run within a trace context manager, i.e. with trace("TaskPilot run", trace_id=trace_id): . A trace from the Agents SDK represents a single end-to-end operation of a “workflow”.
  • The _extract_action_items() and _create_tickets() methods will start and run each of the agents respectively. Within these methods the Runner.run() method from the OpenAI Agents SDK will be used to trigger the agents. It takes an agent and an input, and it returns the final output of the agent’s execution. Finally, the result of each agent will be parsed to its defined output type.
# taskpilot/taskpilot_runner.py

from agents import Runner, trace, gen_trace_id
from local_agents import create_action_items_agent, create_tickets_creator_agent
from utils.models import ActionItemsList, CreateIssuesResponse

class TaskPilotRunner:
    def __init__(self):
        self.action_items_extractor = create_action_items_agent()
        self.tickets_creator = create_tickets_creator_agent()

    async def run(self, meeting_transcript: str) -> None:
        trace_id = gen_trace_id()
        print(f"Starting TaskPilot run... (Trace ID: {trace_id})")
        print(
            f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}"
        )

        with trace("TaskPilot run", trace_id=trace_id):
            # 1. Extract action items from meeting transcript
            action_items = await self._extract_action_items(meeting_transcript)

            # 2. Create tickets from action items
            tickets_creation_response = await self._create_tickets(action_items)

            # 3. Return the results
            print(tickets_creation_response.text)

    async def _extract_action_items(self, meeting_transcript: str) -> ActionItemsList:
        result = await Runner.run(
            self.action_items_extractor, input=meeting_transcript
        )
        final_output = result.final_output_as(ActionItemsList)
        return final_output

    async def _create_tickets(self, action_items: ActionItemsList) -> CreateIssuesResponse:
        result = await Runner.run(
            self.tickets_creator, input=str(action_items)
        )
        final_output = result.final_output_as(CreateIssuesResponse)
        return final_output

The three methods are defined as asynchronous functions because the Runner.run() method from the OpenAI Agents SDK is itself an async coroutine. This allows multiple agents, tool calls, or streaming endpoints to run in parallel without blocking.
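
In TaskPilot the two agents run sequentially, since the ticket creation depends on the extracted action items. To illustrate the benefit of the async design anyway, here is a small hypothetical sketch (not part of the project) that processes several independent transcripts concurrently:

# Illustrative only: running several TaskPilot runs concurrently

import asyncio

async def process_transcripts(runner: TaskPilotRunner, transcripts: list[str]) -> None:
    # Each run awaits its own agent calls; gather lets the runs overlap
    # instead of executing one after the other.
    await asyncio.gather(*(runner.run(transcript) for transcript in transcripts))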

Step 3: Defining Our Data Models

Without further configuration, agents return plain text (str) as output. To ensure that our agents provide structured and predictable responses, the library supports Pydantic models for defining the output_type of an agent (it actually supports any type that can be wrapped in a Pydantic TypeAdapter: dataclasses, lists, TypedDict, etc.). The data models we define will be the data structures that our agents work with.
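
For illustration, here is a tiny sketch (not part of TaskPilot, with a made-up agent name) showing that output_type also accepts a plain adaptable type such as list[str]:

# Illustrative only: output_type accepts Pydantic models and other adaptable types

from agents import Agent

summary_agent = Agent(
    name="Summarizer",
    instructions="Summarize the input into a list of short bullet points.",
    output_type=list[str],  # wrapped in a Pydantic TypeAdapter by the SDK
)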

For our use case we will define three models in taskpilot/utils/models.py:

  • ActionItem: This model represents a single action item that is extracted from the meeting transcript.
  • ActionItemsList: This model is a list of ActionItem objects.
  • CreateIssuesResponse: This model defines the structure of the response from the agent that will create the issues/tickets.
# taskpilot/utils/models.py

from typing import Optional
from pydantic import BaseModel

class ActionItem(BaseModel):
    title: str
    description: str
    assignee: str
    status: str
    issuetype: str
    project: Optional[str] = None
    due_date: Optional[str] = None
    start_date: Optional[str] = None
    priority: Optional[str] = None
    parent: Optional[str] = None
    children: Optional[list[str]] = None

class ActionItemsList(BaseModel):
    action_items: list[ActionItem]

class CreateIssuesResponse(BaseModel):
    action_items: list[ActionItem]
    error_messages: list[str]
    success_messages: list[str]
    text: str

Step 4: Creating the Agents

The agents are the core of our application. An agent is basically an LLM configured with instructions (the AGENT_PROMPT) and given access to tools, so that it can act autonomously on defined tasks. An agent from the OpenAI Agents SDK is defined by the following parameters:

  • name: The name of the agent for identification.
  • instructions: The prompt that tells the agent its role or the task it shall execute (a.k.a. the system prompt).
  • model: Which LLM to use for the agent. The SDK provides out-of-the-box support for OpenAI models, however you can also use non-OpenAI models (see Agents SDK: Models).
  • output_type: The Python type that the agent shall return, as mentioned previously.
  • tools: A list of Python callables that the agent can use as tools to perform its tasks.

Based on this information, let’s create our two agents: the ActionItemsExtractor and the TicketsCreator.

Action Items Extractor

This agent’s job is to read the meeting transcript and extract the action items. We’ll create it in taskpilot/local_agents/action_items_extractor.py.

# taskpilot/local_agents/action_items_extractor.py

from agents import Agent
from utils.config_parser import Config
from utils.models import ActionItemsList

AGENT_PROMPT = """
You are an assistant that extracts action items from a meeting transcript.

You will be given a meeting transcript and you need to extract the action items so that they can be converted into tickets by another assistant.

The action items should contain the following information:
    - title: The title of the action item. It should be a short description of the action item. It should be short and concise. This is mandatory.
    - description: The description of the action item. It should be a more extended description of the action item. This is mandatory.
    - assignee: The name of the person who will be responsible for the action item. You shall infer from the conversation the name of the assignee and not use "Speaker 1" or "Speaker 2" or any other speaker identifier. This is mandatory.
    - status: The status of the action item. It can be "To Do", "In Progress", "In Review" or "Done". You shall extract from the transcript in which state the action item is. If it is a new action item, you shall set it to "To Do".
    - due_date: The due date of the action item. It shall be in the format "YYYY-MM-DD".  You shall extract this from the transcript, however if it is not explicitly mentioned, you shall set it to None. If relative dates are mentioned (eg. by tomorrow, in a week,...), you shall convert them to absolute dates in the format "YYYY-MM-DD".
    - start_date: The start date of the action item. It shall be in the format "YYYY-MM-DD". You shall extract this from the transcript, however if it is not explicitly mentioned, you shall set it to None.
    - priority: The priority of the action item. It can be "Lowest", "Low", "Medium", "High" or "Highest". You shall interpret the priority of the action item from the transcript, however if it is not clear, you shall set it to None.
    - issuetype: The type of the action item. It can be "Epic", "Bug", "Task", "Story", "Subtask". You shall interpret the issuetype of the action item from the transcript, if it is unclear set it to "Task".
    - project: The project to which the action item belongs. You shall interpret the project of the action item from the transcript, however if it is not clear, you shall set it to None.
    - parent: If the action item is a subtask, you shall set the parent of the action item to the title of the parent action item. If the parent action item is not clear or the action item is not a subtask, you shall set it to None.
    - children: If the action item is a parent task, you shall set the children of the action item to the titles of the child action items. If the children action items are not clear or the action item is not a parent task, you shall set it to None.
"""

def create_action_items_agent() -> Agent:
    return Agent(
        name="Action Items Extractor",
        instructions=AGENT_PROMPT,
        output_type=ActionItemsList,
        model=Config.get().agents.model,
    )

As you can see, in the AGENT_PROMPT we tell the agent in detail that its job is to extract action items, and we provide a precise description of how we want the action items to be extracted.

Tickets Creator

This agent takes the list of action items and creates Jira issues. We’ll create it in taskpilot/local_agents/tickets_creator.py.

# taskpilot/local_agents/tickets_creator.py

from agents import Agent
from utils.config_parser import Config
from utils.agents_tools import create_jira_issue
from utils.models import CreateIssuesResponse

AGENT_PROMPT = """
You are an assistant that creates Jira issues given action items.

You will be given a list of action items and for each action item you shall create a Jira issue using the `create_jira_issue` tool.

You shall collect the responses of the `create_jira_issue` tool and return them as the provided type `CreateIssuesResponse` which contains:
    - action_items: list containing the action_items that were provided to you
    - error_messages: list containing the error messages returned by the `create_jira_issue` tool whenever there was an error trying to create the issue.
    - success_messages: list containing the response messages returned by the `create_jira_issue` tool whenever the issue creation was successful.
    - text: A text that summarizes the result of the tickets creation. It shall be a string created as following: 
        f"From the {len(action_items)} action items provided {len(success_messages)} were successfully created in the Jira project.n {len(error_messages)} failed to be created in the Jira project.nnError messages:n{error_messages}"
"""

def create_tickets_creator_agent() -> Agent:
    return Agent(
        name="Tickets Creator",
        instructions=AGENT_PROMPT,
        tools=[create_jira_issue],
        model=Config.get().agents.model,
        output_type=CreateIssuesResponse
    )

Here we set the tools parameter and give the agent the create_jira_issue tool, which we’ll create in the next step.

Step 5: Providing Tools

One of the most powerful features of agents is their ability to use tools to interact with the outside world. One could argue that the use of tools is what turns an interaction with an LLM into an agent. The OpenAI Agents SDK allows agents to use three types of tools:

  • Hosted tools: Provided directly from OpenAI such as searching the web or files, computer use, running code, among others.
  • Function calling: Using any Python function as a tool.
  • Agents as tools: Allowing an agent to call other agents as tools, without handing off (the last two are sketched briefly after this list).
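
We won’t need hosted tools or agents-as-tools in TaskPilot, but for reference here is a small illustrative sketch of how they look in the SDK (the agent names, instructions and tool descriptions are made up):

# Illustrative only: a hosted tool and an agent exposed as a tool

from agents import Agent, WebSearchTool

research_agent = Agent(
    name="Researcher",
    instructions="Search the web to answer questions.",
    tools=[WebSearchTool()],  # hosted tool provided by OpenAI
)

orchestrator_agent = Agent(
    name="Orchestrator",
    instructions="Call the research tool whenever you need external information.",
    tools=[
        research_agent.as_tool(
            tool_name="research",
            tool_description="Research a topic on the web.",
        )
    ],
)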

For our use case, we will use function calling and implement a function that creates Jira issues through Jira’s REST API. As a personal choice, I decided to split this across two files:

  • In taskpilot/utils/jira_interface_functions.py we will write the functions to interact through HTTP Requests with the Jira REST API.
  • In taskpilot/utils/agents_tools.py we will write wrappers around those functions to be provided to the agents. These wrapper functions add response parsing so that the agent receives a processed text response instead of raw JSON. That said, the agent should also be able to handle and understand a JSON response.

First we implement the create_issue() function in taskpilot/utils/jira_interface_functions.py:

# taskpilot/utils/jira_interface_functions.py

import os
from typing import Optional
import json
from urllib.parse import urljoin
import requests
from requests.auth import HTTPBasicAuth
from utils.config_parser import Config

JIRA_AUTH = HTTPBasicAuth(Config.get().jira.user, str(os.getenv("ATLASSIAN_API_KEY")))

def create_issue(
    project_key: str,
    title: str,
    description: str,
    issuetype: str,
    duedate: Optional[str] = None,
    assignee_id: Optional[str] = None,
    labels: Optional[list[str]] = None,
    priority_id: Optional[str] = None,
    reporter_id: Optional[str] = None,
) -> requests.Response:

    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": title,
            "issuetype": {"name": issuetype},
            "description": {
                "content": [
                    {
                        "content": [
                            {
                                "text": description,
                                "type": "text",
                            }
                        ],
                        "type": "paragraph",
                    }
                ],
                "type": "doc",
                "version": 1,
            },
        }
    }

    if duedate:
        payload["fields"].update({"duedate": duedate})
    if assignee_id:
        payload["fields"].update({"assignee": {"id": assignee_id}})
    if labels:
        payload["fields"].update({"labels": labels})
    if priority_id:
        payload["fields"].update({"priority": {"id": priority_id}})
    if reporter_id:
        payload["fields"].update({"reporter": {"id": reporter_id}})

    endpoint_url = urljoin(Config.get().jira.url_rest_api, "issue")

    headers = {"Accept": "application/json", "Content-Type": "application/json"}

    response = requests.post(
        endpoint_url,
        data=json.dumps(payload),
        headers=headers,
        auth=JIRA_AUTH,
        timeout=Config.get().jira.request_timeout,
    )
    return response

As you can see, we need to authenticate to our Jira account using our Jira user and a corresponding API key, which we can obtain in Atlassian Account Management.

In taskpilot/utils/agents_tools.py we implement the create_jira_issue() function, that we will then provide to the TicketsCreator agent:

# taskpilot/utils/agents_tools.py

from agents import function_tool
from utils.models import ActionItem
from utils.jira_interface_functions import create_issue

@function_tool
def create_jira_issue(action_item: ActionItem) -> str:
    
    response = create_issue(
        project_key=action_item.project,
        title=action_item.title,
        description=action_item.description,
        issuetype=action_item.issuetype,
        duedate=action_item.due_date,
        assignee_id=None,
        labels=None,
        priority_id=None,
        reporter_id=None,
    )

    if response.ok:
        return f"Successfully created the issue. Response message: {response.text}"
    else:
        return f"There was an error trying to create the issue. Error message: {response.text}"

Very important: The @function_tool decorator is what makes this function usable for our agent. The agent can now call this function and pass it an ActionItem object. The function then uses the create_issue function which accesses the Jira API to create a new issue.

Step 6: Configuring the Application

To make our application parametrizable, we’ll use a config.yml file for the configuration settings, as well as a .env file for the API keys.

The configuration of the application is separated in:

  • agents: To configure the agents and the access to the OpenAI API. Here we have two parameters: model, which is the LLM that shall be used by the agents, and OPENAI_API_KEY, in the .env file, to authenticate the use of the OpenAI API. You can obtain an OpenAI API key in your OpenAI Dev Platform.
  • jira: To configure the access to the Jira API. Here we need four parameters: url_rest_api, which is the URL of the REST API of our Jira instance; user, which is the user we use to access Jira; request_timeout, which is the timeout in seconds to wait for the server to send data before giving up; and finally ATLASSIAN_API_KEY, in the .env file, to authenticate to our Jira instance.

Here is our .env file, which in the next step will be loaded into our application in main.py using the python-dotenv library:

OPENAI_API_KEY=some-api-key
ATLASSIAN_API_KEY=some-api-key

And here is our config.yml file:

# config.yml

agents:
  model: "o4-mini"
jira:
  url_rest_api: "https://your-domain.atlassian.net/rest/api/3/"
  user: "[email protected]"
  request_timeout: 5

We’ll also create a config parser at taskpilot/utils/config_parser.py to load this configuration. For this we implement the Config class as a singleton (meaning there can only be one instance of this class throughout the application lifespan).

# taskpilot/utils/config_parser.py

from pathlib import Path
import yaml
from pydantic import BaseModel

class AgentsConfig(BaseModel):

    model: str

class JiraConfig(BaseModel):

    url_rest_api: str
    user: str
    request_timeout: int

class ConfigModel(BaseModel):

    agents: AgentsConfig
    jira: JiraConfig

class Config:

    _instance: ConfigModel | None = None

    @classmethod
    def load(cls, path: str = "config.yml") -> None:
        if cls._instance is None:
            with open(Path(path), "r", encoding="utf-8") as config_file:
                raw_config = yaml.safe_load(config_file)
            cls._instance = ConfigModel(**raw_config)

    @classmethod
    def get(cls, path: str = "config.yml") -> ConfigModel:
        if cls._instance is None:
            cls.load(path)
        return cls._instance

Step 7: Bringing It All Together in main.py

Finally, in taskpilot/main.py, we’ll bring everything together. This script will load the meeting transcript, create an instance of the TaskPilotRunner , and then call the run() method.

# taskpilot/main.py

import os
import asyncio
from dotenv import load_dotenv

from taskpilot_runner import TaskPilotRunner

# Load the variables in the .env file
load_dotenv()

def load_meeting_transcript_txt(file_path: str) -> str:
    # Read the meeting transcript from a plain-text file
    with open(file_path, "r", encoding="utf-8") as transcript_file:
        meeting_transcript = transcript_file.read()
    return meeting_transcript

async def main():
    print("TaskPilot application starting...")

    meeting_transcript = load_meeting_transcript_txt("meeting_transcript.txt")

    await TaskPilotRunner().run(meeting_transcript)

if __name__ == "__main__":
    asyncio.run(main())

Step 8: Monitoring Our Runs in the OpenAI Dev Platform

As mentioned, one of the advantages of the OpenAI Agents SDK is that, due to its tracing feature, it is possible to visualize the entire execution flow of our agents. This makes it easy to debug and understand what’s happening under the hood in the OpenAI Dev Platform.

In the Traces Dashboard one can:

  • Track each run of the agents workflow.
Screenshot by the author
  • Understand exactly what the agents did within the agent workflow and monitor performance.
Screenshot by the author
  • Debug every call to the OpenAI API as well as monitor how many tokens were used in each input and output.
Screenshot by the author
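
If you want additional steps of your own code to show up in these traces, the SDK also offers custom spans that you can open inside an active trace. A minimal sketch, assuming the custom_span helper exported by the SDK and the load_meeting_transcript_txt() function from our main.py:

# Illustrative only: adding a custom span inside the TaskPilot trace

from agents import custom_span, trace

with trace("TaskPilot run"):
    with custom_span("Load meeting transcript"):
        meeting_transcript = load_meeting_transcript_txt("meeting_transcript.txt")
    # ... run the agents as before ...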

So take advantage of this feature to evaluate, debug and monitor your agent runs.

Conclusion

And that’s it! In these eight simple steps we have implemented an application that automatically creates Jira issues from a meeting transcript. Thanks to the simple interface of the OpenAI Agents SDK, you can easily create agents programmatically to help you automate your tasks!

Feel free to clone the repository (the project as described in this post is in branch function_calling), try it out for yourself, and start building your own AI-powered applications!


💡 Coming Up Next:

In an upcoming post, we’ll dive into how to implement your own MCP Server to further extend our agents’ capabilities and allow them to interact with external systems beyond your local tools. Stay tuned!

🙋‍♂️ Let’s Connect

If you have questions, feedback, or just want to follow along with future projects, feel free to reach out and connect!


Reference

This article is inspired by the “OpenAI: Agents SDK” course from LinkedIn Learning.
