
MCP Client Development with Streamlit: Build Your AI-Powered Web App

In “Model Context Protocol (MCP) Tutorial: Build Your First MCP Server in 6 Steps”, we introduced the MCP architecture and explored MCP servers in detail. In this tutorial, we continue our MCP exploration by building an interactive MCP client interface using Streamlit. The key difference between the two is that an MCP server provides functionality by connecting to a diverse range of tools and resources, whereas an MCP client consumes that functionality through an interface. Streamlit, a lightweight Python library for building data-driven interactive web applications, accelerates the development cycle and abstracts away frontend frameworks, making it an optimal choice for rapid prototyping and streamlined deployment of AI-powered tools. We will therefore use Streamlit to construct our MCP client user interface with minimal setup, focusing on connecting to remote MCP servers to explore diverse AI functionalities.

Project Overview

We will create an interactive web app prototype where users can enter their topics of interest and choose between two MCP servers, DeepWiki and Hugging Face, that provide relevant resources. DeepWiki specializes in summarizing codebases and GitHub repositories, while the Hugging Face MCP server recommends open-source datasets and models related to the user's topics of interest. The image below displays the web app's output for the topic “sentiment analysis”.

To develop the Streamlit MCP client, we will break it down into the following steps:

  • Set Up Development Environment
  • Initialize Streamlit App Layout
  • Get User Inputs
  • Connect to Remote MCP Servers
  • Generate Model Responses
  • Run the Streamlit App

Set Up Development Environment

Firstly, let’s set up our project directory using a simple structure.

mcp_streamlit_client/
├── .env                  # Environment variables (API keys)
├── README.md             # Project documentation
├── requirements.txt      # Required libraries and dependencies
└── app.py                # Main Streamlit application

Then install the necessary libraries: streamlit for building the web interface, openai for interacting with OpenAI's API (which supports MCP), and python-dotenv for loading environment variables.

pip install streamlit openai python-dotenv

Alternatively, you can create a requirements.txt file that pins library versions for reproducible installations, then run:

pip install -r requirements.txt
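
For reference, a minimal requirements.txt for this project might contain the following (the version pins are illustrative; adjust them to the versions you have tested):

streamlit>=1.30
openai>=1.50
python-dotenv>=1.0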

Secondly, secure your API keys using environment variables. When working with LLM providers like OpenAI, you will need to set up an API key. To keep this key confidential, the best practice is to load it from environment variables rather than hard-coding it directly into your script, especially if you plan to share your code or deploy your application. To do this, add your API keys to the .env file using the following format. We will also need a Hugging Face API token to access its remote MCP server.

OPENAI_API_KEY="your_openai_api_key_here" 
HF_API_KEY="your_huggingface_api_key_here" 

Now, in the script app.py, you can load these variables into your application’s environment using load_dotenv() from the dotenv library. This function reads the key-value pairs from your .env file and makes them accessible via os.getenv().

from dotenv import load_dotenv
import os

load_dotenv() 

# access HuggingFace API key using os.getenv()
HF_API_KEY = os.getenv('HF_API_KEY')

Connect the MCP Server and Client

Before diving into MCP client development, let's cover the basics of establishing an MCP server-client connection. With the growing popularity of MCP, an increasing number of LLM providers now support MCP client implementations. For example, OpenAI offers the straightforward initialization shown below.

from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment automatically
client = OpenAI()

Further Reading:

The article “For Client Developers” provides an example of setting up an Anthropic MCP client, which is slightly more complicated but also more robust, as it enables better session and resource management.

To connect the client to an MCP server, you'll need to implement a connection method that takes either a server script path (for local MCP servers) or a URL (for remote MCP servers) as input. Local MCP servers are programs that run on your local machine, while remote MCP servers are deployed online and accessible through a URL. In the example below, we connect to the remote MCP server “DeepWiki” through “https://mcp.deepwiki.com/mcp”.

response = client.responses.create(
    model="gpt-4.1",
    tools=[
        {
            "type": "mcp",
            "server_label": "deepwiki",
            "server_url": "https://mcp.deepwiki.com/mcp",
            "require_approval": "never",
        },
    ],
    input="Give a brief overview of this repository."  # example prompt; the Responses API requires an input
)
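
Once the call completes, the aggregated text output of the model is available on the response object:

print(response.output_text)
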
Further Reading:

You can also explore other MCP server options in the comprehensive “Remote MCP Servers Catalogue” for your specific needs. The article “For Client Developers” also provides an example of connecting to local MCP servers.

Build the Streamlit MCP Client

Now that we understand the fundamentals of establishing connections between MCP clients and servers, we'll encapsulate this functionality within a web interface for an enhanced user experience. The web app is designed with modularity in mind and is composed of several elements implemented with Streamlit methods such as st.radio(), st.button(), st.info(), st.title(), and st.text_area().

1. Initialize Your Streamlit Page

We will start with the initialize_page() function, which sets the page icon and title and uses layout="centered" to center the app's content on the page. The function returns a column object underneath the page title, where we will place the widgets shown in the following steps.

import streamlit as st

def initialize_page():
    """Initialize the Streamlit page configuration and layout"""
    st.set_page_config(
        page_icon="🤖", # A robot emoji as the page icon
        layout="centered" # Center the content on the page
    )
    st.title("Streamlit MCP Client") # Set the main title of the app

    # Return a column object which can be used to place widgets
    return st.columns(1)[0]

2. Get User Inputs

The get_user_input() function collects the user's input by creating a text area widget with st.text_area(). The height parameter ensures the input box is adequately sized, and the placeholder text prompts the user with specific instructions.

def get_user_input(column):
    """Handle transcript input methods and return the transcript text"""

    user_text = column.text_area(
        "Please enter the topics you’re interested in:",
        height=100,
        placeholder="Type it here..."
    )

    return user_text

3. Connect to MCP Servers

The create_mcp_server_dropdown() function gives users the flexibility to choose from a range of MCP servers. It defines a dictionary of available MCP servers, mapping a label (like “deepwiki” or “huggingface”) to the corresponding server URL. Streamlit's st.radio() widget displays these options as radio buttons for users to choose from. The function then returns both the selected server's label and its URL, which feed into the subsequent response-generation step.

def create_mcp_server_dropdown():
    # Define a dictionary of MCP servers, mapping labels to URLs
    mcp_servers = {
        "deepwiki": "https://mcp.deepwiki.com/mcp",
        "huggingface": "https://huggingface.co/mcp"
    }

    # Create a radio button for selecting the MCP server
    selected_server = st.radio(
        "Select MCP Server",
        options=list(mcp_servers.keys()), 
        help="Choose the MCP server you want to connect to"
    )

    # Get the URL corresponding to the selected server
    server_url = mcp_servers[selected_server]

    return selected_server, server_url
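
Despite its name, create_mcp_server_dropdown() renders radio buttons rather than a dropdown. If you prefer a true dropdown, Streamlit's st.selectbox() is a near drop-in replacement that accepts the same options and help parameters:

selected_server = st.selectbox(
    "Select MCP Server",
    options=list(mcp_servers.keys()),
    help="Choose the MCP server you want to connect to"
)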

4. Generate Responses

Earlier, we saw how to use client.responses.create() as the standard way to generate responses. The generate_response() function below extends this by passing several custom parameters:

  • model: choose the LLM that fits your budget and purpose.
  • tools: determined by the user-selected MCP server URL. Since the Hugging Face server requires user authentication, we also pass the API token in the tool configuration and display a warning when the token is not found.
  • input: combines the user's query with tool-specific instructions to provide clear context for the prompt.

The user's input is then sent to the LLM, which leverages the selected MCP server as an external tool to fulfill the request. The response is displayed with the Streamlit info widget st.info(); if the request fails, an error message is shown with st.error() instead.

from dotenv import load_dotenv
from openai import OpenAI
import os

load_dotenv()
HF_API_KEY = os.getenv('HF_API_KEY')

def generate_response(user_text, selected_server, server_url):
    """Generate response using OpenAI client and MCP tools"""
    client = OpenAI() 

    try:
        mcp_tool = {
            "type": "mcp",
            "server_label": selected_server, 
            "server_url": server_url,      
            "require_approval": "never",   
        }

        if selected_server == 'huggingface':
            if HF_API_KEY:
                mcp_tool["headers"] = {"Authorization": f"Bearer {HF_API_KEY}"}
            else:
                st.warning("Hugging Face API Key not found in .env. Some functionalities might be limited.")
            prompt_text = f"List some resources relevant to this topic: {user_text}?"
        else:
            prompt_text = f"Summarize codebase contents relevant to this topic: {user_text}?"

        response = client.responses.create(
            model="gpt-3.5-turbo", 
            tools=[mcp_tool],      
            input=prompt_text
        )

        st.info(
            f"""
            **Response:**
            {response.output_text}
            """
        )
        return response

    except Exception as e:
        st.error(f"Error generating response: {str(e)}") 
        return None

5. Define the Main Function

The final step is to create a main() function that chains all the operations together. This function sequentially calls initialize_page(), get_user_input(), and create_mcp_server_dropdown() to set up the UI and collect user inputs. It then triggers generate_response() when the user clicks st.button("Generate Response"). Upon clicking, the function checks whether user input exists, displays a spinner with st.spinner() to show progress, and renders the response. If no input is provided, the app displays a warning message instead of calling generate_response(), preventing unnecessary token usage and extra costs.

def main():
    # 1. Initialize the page layout
    main_column = initialize_page()

    # 2. Get user input for the topic
    user_text = get_user_input(main_column)

    # 3. Allow user to select the MCP server
    with main_column: # Place the radio buttons within the main column
        selected_server, server_url = create_mcp_server_dropdown()

    # 4. Add a button to trigger the response generation
    if st.button("Generate Response", key="generate_button"):
        if user_text:
            with st.spinner("Generating response..."): 
                generate_response(user_text, selected_server, server_url)
        else:
            st.warning("Please enter a topic first.")

Run the Streamlit App

Finally, a standard Python script entry point ensures that our main function is executed when the script is run.

if __name__ == "__main__":
    main()

Open your terminal or command prompt, navigate to the directory where you saved the file, and run:

streamlit run app.py

If you are developing your app locally, a local Streamlit server will spin up and your app will open in a new tab of your default web browser. Alternatively, if you are developing in a cloud environment such as Amazon SageMaker JupyterLab, replace the default URL with this format: https://<domain-id>.studio.<region>.sagemaker.aws/jupyterlab/default/proxy/8501/. You may find the post “Build Streamlit apps in Amazon SageMaker AI Studio” helpful.
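
Since the proxy path above assumes Streamlit's default port 8501, you can pin the port explicitly when launching the app:

streamlit run app.py --server.port 8501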

Lastly, you can find the code in our GitHub repository “mcp-streamlit-client” and explore your Streamlit MCP client by trying out different topics.


Take-Home Message

In our previous article, we introduced the MCP architecture and focused on the MCP server. Building on this foundation, we explored implementing an MCP client with Streamlit to leverage the tool-calling capabilities of remote MCP servers. This guide covered the essential steps: setting up your development environment and securing API keys, handling user input, connecting to remote MCP servers, and displaying AI-generated responses. To prepare this application for production, consider these next steps:

  • Asynchronous processing of multiple client requests
  • Caching mechanisms for faster response times
  • Session state management (a minimal sketch follows this list)
  • User authentication and access management
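
As a minimal sketch of session state management, Streamlit's built-in st.session_state dictionary persists values across reruns, so previously generated answers can survive widget interactions. The "history" key below is a hypothetical name chosen for illustration:

# Keep a response history for the duration of the browser session
if "history" not in st.session_state:
    st.session_state["history"] = []

# After a successful call, store the topic/response pair
response = generate_response(user_text, selected_server, server_url)
if response:
    st.session_state["history"].append((user_text, response.output_text))

# Re-render all earlier responses on each rerun
for topic, answer in st.session_state["history"]:
    st.info(f"**{topic}**: {answer}")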
