
An Implementation Guide to Design Intelligent Parallel Workflows in Parsl for Multi-Tool AI Agent Execution

In this tutorial, we implement an AI agent pipeline using Parsl, leveraging its parallel execution capabilities to run multiple computational tasks as independent Python apps. We configure a local ThreadPoolExecutor for concurrency, define specialized tools such as Fibonacci computation, prime counting, keyword extraction, and simulated API calls, and coordinate them through a lightweight planner that maps a user goal to task invocations. The outputs from all tasks are aggregated and passed through a Hugging Face text-generation model to produce a coherent, human-readable summary.

!pip install -q parsl transformers accelerate


import math, json, time, random
from typing import List, Dict, Any
import parsl
from parsl.config import Config
from parsl.executors import ThreadPoolExecutor
from parsl import python_app


parsl.load(Config(executors=[ThreadPoolExecutor(label="local", max_threads=8)]))

We begin by installing the required libraries and importing all necessary modules for our workflow. We then configure Parsl with a local ThreadPoolExecutor to run tasks concurrently and load this configuration so we can execute our Python apps in parallel.
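The ThreadPoolExecutor keeps everything in one process, which is the simplest option for a notebook. If you later want process-level workers instead, a hypothetical alternative configuration using Parsl's HighThroughputExecutor with a LocalProvider might look like the sketch below; note that parsl.load() should be called only once per session, so this would replace the config above rather than add to it.

# Illustrative alternative (not part of the original tutorial): process-based
# workers via HighThroughputExecutor. parsl.load() should be called only once
# per session, so this config would replace the ThreadPoolExecutor config above.
from parsl.executors import HighThroughputExecutor
from parsl.providers import LocalProvider

htex_config = Config(
    executors=[
        HighThroughputExecutor(
            label="htex_local",
            provider=LocalProvider(),  # launch workers on the local machine
        )
    ]
)
# parsl.load(htex_config)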

@python_app
def calc_fibonacci(n: int) -> Dict[str, Any]:
   def fib(k):
       a, b = 0, 1
       for _ in range(k): a, b = b, a + b
       return a
   t0 = time.time(); val = fib(n); dt = time.time() - t0
   return {"task": "fibonacci", "n": n, "value": val, "secs": round(dt, 4)}


@python_app
def extract_keywords(text: str, k: int = 8) -> Dict[str, Any]:
   import re, collections
   words = [w.lower() for w in re.findall(r"[a-zA-Z][a-zA-Z0-9-]+", text)]
   stop = set("the a an and or to of is are was were be been in on for with as by from at this that it its if then else not no".split())
   cand = [w for w in words if w not in stop and len(w) > 3]
   freq = collections.Counter(cand)
   scored = sorted(freq.items(), key=lambda x: (x[1], len(x[0])), reverse=True)[:k]
   return {"task":"keywords","keywords":[w for w,_ in scored]}


@python_app
def simulate_tool(name: str, payload: Dict[str, Any]) -> Dict[str, Any]:
   time.sleep(0.3 + random.random()*0.5)
   return {"task": name, "payload": payload, "status": "ok", "timestamp": time.time()}

We define four Parsl @python_app functions that run asynchronously as part of our agent’s workflow. We create a Fibonacci calculator, a prime-counting routine, a keyword extractor for text processing, and a simulated tool that mimics external API calls with randomized delays. These modular apps let us perform diverse computations in parallel, forming the building blocks for our multi-tool AI agent.
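The prime-counting app referenced here, and later invoked by the planner and run_agent, is not included in the snippet above. A minimal sketch, assuming a sieve-based implementation that returns the task, limit, and count fields consumed downstream, could look like this:

@python_app
def count_primes(limit: int) -> Dict[str, Any]:
    # Sieve of Eratosthenes: count primes <= limit.
    t0 = time.time()
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return {"task": "count_primes", "limit": limit, "count": sum(sieve),
            "secs": round(time.time() - t0, 4)}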

def tiny_llm_summary(bullets: List[str]) -> str:
   from transformers import pipeline
   gen = pipeline("text-generation", model="sshleifer/tiny-gpt2")
   prompt = "Summarize these agent results clearly:n- " + "n- ".join(bullets) + "nConclusion:"
   out = gen(prompt, max_length=160, do_sample=False)[0]["generated_text"]
   return out.split("Conclusion:", 1)[-1].strip()

We implement a tiny_llm_summary function that uses Hugging Face’s pipeline with the lightweight sshleifer/tiny-gpt2 model to generate concise summaries of our agent’s results. It formats the collected task outputs as bullet points, appends a “Conclusion:” cue, and extracts only the final generated conclusion for a clean, human-readable summary.
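As a quick standalone smoke test, you can call the function with a couple of sample bullets (illustrative values; tiny-gpt2 is far too small to produce polished prose, so expect a rough completion rather than a true summary):

sample_bullets = [
    "Fibonacci(35) = 9227465 computed in 0.0s.",
    "Top keywords: fibonacci, performance, summary",
]
print(tiny_llm_summary(sample_bullets))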

def plan(user_goal: str) -> List[Dict[str, Any]]:
   intents = []
   if "fibonacci" in user_goal.lower():
       intents.append({"tool":"calc_fibonacci", "args":{"n":35}})
   if "primes" in user_goal.lower():
       intents.append({"tool":"count_primes", "args":{"limit":100_000}})
   intents += [
       {"tool":"simulate_tool", "args":{"name":"vector_db_search","payload":{"q":user_goal}}},
       {"tool":"simulate_tool", "args":{"name":"metrics_fetch","payload":{"kpi":"latency_ms"}}},
       {"tool":"extract_keywords", "args":{"text":user_goal}}
   ]
   return intents

We define the plan function to map a user’s goal into a structured list of tool invocations. It checks the goal text for keywords like “fibonacci” or “primes” to trigger specific computational tasks, then adds default actions such as simulated API queries, metrics retrieval, and keyword extraction, forming the execution blueprint for our AI agent.
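For example, a goal that mentions primes triggers the prime-counting intent in addition to the three default actions (illustrative call):

for intent in plan("Count primes under 100k and summarize the findings"):
    print(intent["tool"], "->", intent["args"])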

def run_agent(user_goal: str) -> Dict[str, Any]:
   tasks = plan(user_goal)
   futures = []
   for t in tasks:
       if t["tool"]=="calc_fibonacci": futures.append(calc_fibonacci(**t["args"]))
       elif t["tool"]=="count_primes": futures.append(count_primes(**t["args"]))
       elif t["tool"]=="extract_keywords": futures.append(extract_keywords(**t["args"]))
       elif t["tool"]=="simulate_tool": futures.append(simulate_tool(**t["args"]))
   raw = [f.result() for f in futures]


   bullets = []
   for r in raw:
       if r["task"]=="fibonacci":
           bullets.append(f"Fibonacci({r['n']}) = {r['value']} computed in {r['secs']}s.")
       elif r["task"]=="count_primes":
           bullets.append(f"{r['count']} primes found ≤ {r['limit']}.")
       elif r["task"]=="keywords":
           bullets.append("Top keywords: " + ", ".join(r["keywords"]))
       else:
           bullets.append(f"Tool {r['task']} responded with status={r['status']}.")


   narrative = tiny_llm_summary(bullets)
   return {"goal": user_goal, "bullets": bullets, "summary": narrative, "raw": raw}

In the run_agent function, we execute the full agent workflow by first generating a task plan from the user’s goal, then dispatching each tool as a Parsl app to run in parallel. Once all futures are complete, we convert their results into clear bullet points and feed them to our tiny_llm_summary function to create a concise narrative. The function returns a structured dictionary containing the original goal, detailed bullet points, the LLM-generated summary, and the raw tool outputs.
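Parsl’s AppFuture follows the standard concurrent.futures interface, so the single result-gathering line inside run_agent could be made more defensive; a hypothetical variant that keeps one failing tool from aborting the whole run might look like this:

# Hypothetical replacement for `raw = [f.result() for f in futures]`:
raw = []
for f in futures:
    err = f.exception()  # blocks until the app finishes
    if err is None:
        raw.append(f.result())
    else:
        raw.append({"task": "unknown", "status": f"error: {err}"})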

if __name__ == "__main__":
   goal = ("Analyze fibonacci(35) performance, count primes under 100k, "
           "and prepare a concise executive summary highlighting insights for planning.")
   result = run_agent(goal)
   print("n=== Agent Bullets ===")
   for b in result["bullets"]: print("•", b)
   print("n=== LLM Summary ===n", result["summary"])
   print("n=== Raw JSON ===n", json.dumps(result["raw"], indent=2)[:800], "...")

In the main execution block, we define a sample goal that combines numeric computation, prime counting, and summary generation. We run the agent on this goal, print the generated bullet points, display the LLM-crafted summary, and preview the raw JSON output to verify both the human-readable and structured results.

In conclusion, this implementation demonstrates how Parsl’s asynchronous app model can efficiently orchestrate diverse workloads in parallel, enabling an AI agent to combine numerical analysis, text processing, and simulated external services in a unified pipeline. By integrating a small LLM at the final stage, we transform structured results into natural language, illustrating how parallel computation and AI models can be combined to create responsive, extensible agents suitable for real-time or large-scale tasks.



