Building a Secure and Memory-Enabled Cipher Workflow for AI Agents with Dynamic LLM Selection and API Integration

In this tutorial, we walk through building a compact but fully functional Cipher-based workflow. We start by securely capturing our Gemini API key in the Colab UI without exposing it in code. We then implement a dynamic LLM selection function that can automatically switch between OpenAI, Gemini, or Anthropic based on which API key is available. The setup phase ensures Node.js and the Cipher CLI are installed, after which we programmatically generate a cipher.yml configuration to enable a memory agent with long-term recall. We create helper functions to run Cipher commands directly from Python, store key project decisions as persistent memories, retrieve them on demand, and finally spin up Cipher in API mode for external integration.

import os, getpass
os.environ["GEMINI_API_KEY"] = getpass.getpass("Enter your Gemini API key: ").strip()


import subprocess, tempfile, pathlib, textwrap, time, requests, shlex


def choose_llm():
   if os.getenv("OPENAI_API_KEY"):
       return "openai", "gpt-4o-mini", "OPENAI_API_KEY"
   if os.getenv("GEMINI_API_KEY"):
       return "gemini", "gemini-2.5-flash", "GEMINI_API_KEY"
   if os.getenv("ANTHROPIC_API_KEY"):
       return "anthropic", "claude-3-5-haiku-20241022", "ANTHROPIC_API_KEY"
   raise RuntimeError("Set one API key before running.")

We start by securely entering our Gemini API key using getpass so it stays hidden in the Colab UI. We then define a choose_llm() function that checks our environment variables and automatically selects the appropriate LLM provider, model, and key based on what is available.
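
As a quick, hypothetical sanity check (not part of the original script), we can call choose_llm() and inspect the tuple it returns; since the keys are checked in order, with only GEMINI_API_KEY set we expect the Gemini entry:

provider, model, key_env = choose_llm()
print(provider, model, key_env)  # expected: gemini gemini-2.5-flash GEMINI_API_KEY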

def run(cmd, check=True, env=None):
   print("▸", cmd)
   p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env)
   if p.stdout: print(p.stdout)
   if p.stderr: print(p.stderr)
   if check and p.returncode != 0:
       raise RuntimeError(f"Command failed: {cmd}")
   return p

We create a run() helper function that executes shell commands, prints both stdout and stderr for visibility, and raises an error if the command fails when check is enabled, making our workflow execution more transparent and reliable.
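
For illustration only (these calls are not in the original), run() prints the command before executing it; with check=False a failing command is reported but does not raise, while the default check=True turns a nonzero exit code into a RuntimeError:

run("echo hello from run()")         # succeeds, prints stdout
run("ls /nonexistent", check=False)  # prints stderr, still returns the CompletedProcess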

def ensure_node_and_cipher():
   run("sudo apt-get update -y && sudo apt-get install -y nodejs npm", check=False)
   run("npm install -g @byterover/cipher")

We define ensure_node_and_cipher() to install Node.js, npm, and the Cipher CLI globally, ensuring our environment has all the necessary dependencies before running any Cipher-related commands.
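
As an optional, hypothetical verification step (assuming each tool supports the conventional --version flag), we can confirm the installs before proceeding; check=False keeps the workflow running even if one of the probes fails:

run("node --version", check=False)
run("npm --version", check=False)
run("cipher --version", check=False)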

def write_cipher_yml(workdir, provider, model, key_env):
   cfg = """
llm:
 provider: {provider}
 model: {model}
 apiKey: ${key_env}
systemPrompt:
 enabled: true
 content: |
   You are an AI programming assistant with long-term memory of prior decisions.
embedding:
 disabled: true
mcpServers:
 filesystem:
   type: stdio
   command: npx
   args: ['-y','@modelcontextprotocol/server-filesystem','.']
""".format(provider=provider, model=model, key_env=key_env)


   (workdir / "memAgent").mkdir(parents=True, exist_ok=True)
   (workdir / "memAgent" / "cipher.yml").write_text(cfg.strip() + "n")

We implement write_cipher_yml() to generate a cipher.yml configuration file inside a memAgent folder, setting the chosen LLM provider, model, and API key, enabling a system prompt with long-term memory, and registering a filesystem MCP server for file operations.
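
For reference, assuming the Gemini branch of choose_llm() was selected, the rendered memAgent/cipher.yml looks like this (after .format() runs, the ${key_env} placeholder becomes a $GEMINI_API_KEY environment-variable reference):

llm:
 provider: gemini
 model: gemini-2.5-flash
 apiKey: $GEMINI_API_KEY
systemPrompt:
 enabled: true
 content: |
   You are an AI programming assistant with long-term memory of prior decisions.
embedding:
 disabled: true
mcpServers:
 filesystem:
   type: stdio
   command: npx
   args: ['-y','@modelcontextprotocol/server-filesystem','.']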

def cipher_once(text, env=None, cwd=None):
   cmd = f'cipher {shlex.quote(text)}'
   p = subprocess.run(cmd, shell=True, text=True, capture_output=True, env=env, cwd=cwd)
   print("Cipher says:n", p.stdout or p.stderr)
   return p.stdout.strip() or p.stderr.strip()

We define cipher_once() to run a single Cipher CLI command with the provided text, capture and display its output, and return the response, allowing us to interact with Cipher programmatically from Python.
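
A hypothetical one-off exchange looks like this, assuming env and workdir are prepared as in main() below (the exact wording of the reply depends on the underlying model):

cipher_once("Store decision: the default branch is 'main'.", env, str(workdir))
cipher_once("Which branch did we pick as the default?", env, str(workdir))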

def start_api(env, cwd):
   proc = subprocess.Popen("cipher --mode api", shell=True, env=env, cwd=cwd,
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
   for _ in range(30):
       try:
           r = requests.get("http://127.0.0.1:3000/health", timeout=2)
           if r.ok:
               print("API /health:", r.text)
               break
       except requests.RequestException:
           pass
       time.sleep(1)
   return proc

We create start_api() to launch Cipher in API mode as a subprocess, then repeatedly poll its /health endpoint until it responds, ensuring the API server is ready before proceeding.
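
Once the health check passes, the server can be exercised from elsewhere in the notebook and then shut down cleanly. This minimal sketch (assuming the server did come up on port 3000, and that env and workdir are set up as in main()) pairs terminate() with wait() so the child process is reaped rather than left lingering:

proc = start_api(env, str(workdir))
print(requests.get("http://127.0.0.1:3000/health", timeout=2).status_code)
proc.terminate()
proc.wait(timeout=10)  # reap the subprocess instead of leaving a zombie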

def main():
   provider, model, key_env = choose_llm()
   ensure_node_and_cipher()
   workdir = pathlib.Path(tempfile.mkdtemp(prefix="cipher_demo_"))
   write_cipher_yml(workdir, provider, model, key_env)
   env = os.environ.copy()


   cipher_once("Store decision: use pydantic for config validation; pytest fixtures for testing.", env, str(workdir))
   cipher_once("Remember: follow conventional commits; enforce black + isort in CI.", env, str(workdir))


   cipher_once("What did we standardize for config validation and Python formatting?", env, str(workdir))


   api_proc = start_api(env, str(workdir))
   time.sleep(3)
   api_proc.terminate()


if __name__ == "__main__":
   main()

In main(), we select the LLM provider, install dependencies, and create a temporary working directory with a cipher.yml configuration. We then store key project decisions in Cipher’s memory, query them back, and finally start the Cipher API server briefly before shutting it down, demonstrating both CLI and API-based interactions.

In conclusion, we have a working Cipher environment that securely manages API keys, selects the right LLM provider automatically, and configures a memory-enabled agent entirely through Python automation. Our implementation includes decision logging, memory retrieval, and a live API endpoint, all orchestrated in a Notebook/Colab-friendly workflow. This makes the setup reusable for other AI-assisted development pipelines, allowing us to store and query project knowledge programmatically while keeping the environment lightweight and easy to redeploy.




