
Introducing Server-Sent Events in Python | Towards Data Science

As a developer, I’m always looking for ways to make my applications more dynamic and interactive. Users today expect real-time features, such as live notifications, streaming updates, and dashboards that refresh automatically. For web developers, the tool that often comes to mind for these applications is WebSockets, and it is incredibly powerful.

There are times, though, when WebSockets can be overkill, and their full functionality is often not required. They provide a complex, bi-directional communication channel, but many times, all I need is for the server to push updates to the client. For these common scenarios, a more straightforward and elegant solution that is built right into modern web platforms is known as Server-Sent Events (SSE).

In this article, I’m going to introduce you to Server-Sent Events. We’ll discuss what they are, how they compare to WebSockets, and why they are often the perfect tool for the job. Then, we’ll dive into a series of practical examples, using Python and the FastAPI framework to build real-time applications that are surprisingly simple yet powerful.

What are Server-Sent Events (SSE)?

Server-Sent Events is a web technology standard that allows a server to push data to a client asynchronously once an initial client connection has been established. It provides a one-way, server-to-client stream of data over a single, long-lived HTTP connection. The client, typically a web browser, subscribes to this stream and can react to the messages it receives.

Some key aspects of Server-Sent Events include:

  • Simple Protocol. SSE is a straightforward, text-based protocol. Events are just chunks of text sent over HTTP, making them easy to debug with standard tools like curl.
  • Standard HTTP. SSE works over regular HTTP/HTTPS. This means it’s generally more compatible with existing firewalls and proxy servers.
  • Automatic Reconnection. This is a killer feature. If the connection to the server is lost, the browser’s EventSource API will automatically try to reconnect. You get this resilience for free, without writing any extra JavaScript code.
  • One-Way Communication. SSE is strictly for server-to-client data pushes. If you need full-duplex, client-to-server communication, WebSockets are the more appropriate choice.
  • Native Browser Support. All modern web browsers have built-in support for Server-Sent Events (SSE) through the EventSource interface, eliminating the need for client-side libraries.
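To make the text-based protocol concrete, here is a small Python sketch that parses raw SSE text into events. You would never need this in the browser, where EventSource does the parsing for you; it's just to show how simple the wire format is (events are separated by a blank line, with "data:" lines carrying the payload and an optional "event:" line naming the event type).

```python
def parse_sse(stream_text):
    """Parse raw SSE text into a list of (event, data) tuples.

    Events are separated by a blank line. Within an event, "data:"
    lines carry the payload and an optional "event:" line sets the
    event type (the default type is "message").
    """
    events = []
    for block in stream_text.split("\n\n"):
        event_type, data_lines = "message", []
        for line in block.split("\n"):
            if line.startswith("event:"):
                event_type = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        if data_lines:
            events.append((event_type, "\n".join(data_lines)))
    return events

raw = "data: hello\n\nevent: tick\ndata: 42\n\n"
print(parse_sse(raw))  # [('message', 'hello'), ('tick', '42')]
```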

Why SSE Matters: Common Use Cases

The primary advantage of SSE is its simplicity. For a large class of real-time problems, it provides all the necessary functionality with a fraction of the complexity of WebSockets, both on the server and the client. This means faster development, easier maintenance, and fewer things that can go wrong.

SSE is a perfect fit for any scenario where the server needs to initiate communication and send updates to the client. For example …

  • Live Notification Systems. Pushing notifications to a user when a new message arrives or an important event occurs.
  • Real-Time Activity Feeds. Streaming updates to a user’s activity feed, similar to a Twitter or Facebook timeline.
  • Live Data Dashboards. Sending continuous updates for stock tickers, sports scores, or monitoring metrics to a live dashboard.
  • Streaming Log Outputs. Displaying the live log output from a long-running background process directly in the user’s browser.
  • Progress Updates. Showing the real-time progress of a file upload, a data processing job, or any other long-running task initiated by the user.

That’s enough theory; let’s see just how easy it is to implement these ideas with Python.

Setting up the Development Environment

We will utilise FastAPI, a modern and high-performance Python web framework. Its native support for asyncio and streaming responses makes it a perfect match for implementing Server-Sent Events. You’ll also need the Uvicorn ASGI server to run the application.

As usual, we’ll set up a development environment to keep our projects separate. I suggest using MiniConda for this, but feel free to use whichever tool you’re accustomed to.

# Create and activate a new virtual environment
(base) $ conda create -n sse-env python=3.13 -y
(base) $ conda activate sse-env

Now, install the external libraries we need.

# Install FastAPI, Uvicorn, and psutil (used in the later examples)
(sse-env) $ pip install fastapi uvicorn psutil

That’s all the setup we need. Now, we can start coding.

Code Example 1 — The Python Backend: A Simple SSE Endpoint

Let’s create our first SSE endpoint. It will send a message with the current time to the client every second.

Create a file named app.py and type the following into it.

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from fastapi.middleware.cors import CORSMiddleware
import time

app = FastAPI()

# Allow requests from http://localhost:8080 (where index.html is served)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:8080"],
    allow_methods=["GET"],
    allow_headers=["*"],
)

def event_stream():
    while True:
        yield f"data: The time is {time.strftime('%X')}\n\n"
        time.sleep(1)

@app.get("/stream-time")
def stream():
    return StreamingResponse(event_stream(), media_type="text/event-stream")

I hope you agree that this code is straightforward.

  1. We define an event_stream() generator function. It loops endlessly, producing a string every second.
  2. The yielded string is formatted according to the SSE spec: it must start with data: and end with two newline characters (\n\n).
  3. Our endpoint /stream-time returns a StreamingResponse, passing our generator to it and setting the media_type to text/event-stream. FastAPI handles the rest, keeping the connection open and sending each yielded chunk to the client.
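If you want to check the framing without starting a server, you can iterate a finite variant of the generator. The limit parameter and the function name below are just for this demo; the article's actual generator loops forever.

```python
import time

def event_stream_demo(limit=3):
    """A finite variant of event_stream(), so we can inspect the chunks."""
    for _ in range(limit):
        yield f"data: The time is {time.strftime('%X')}\n\n"
        # no time.sleep(1) here -- the demo should finish instantly

chunks = list(event_stream_demo())
# Every chunk follows the SSE framing: a "data:" prefix, two trailing newlines
assert all(c.startswith("data: ") and c.endswith("\n\n") for c in chunks)
```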

To run the code, don’t use the standard python app.py command as you would normally. Instead, do this.

(sse-env)$ uvicorn app:app --reload

INFO:     Will watch for changes in these directories: ['/home/tom']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [4109269] using WatchFiles
INFO:     Started server process [4109271]
INFO:     Waiting for application startup.
INFO:     Application startup complete.

Now, type this address into your browser …

http://127.0.0.1:8000/stream-time

… and you should see something like this.

Image by Author

The screen should display an updated time record every second.

Code Example 2 — Real-Time System Monitoring Dashboard

In this example, we will monitor our PC or laptop’s CPU and memory usage in real time.

Here is the app.py code you need. It uses the psutil library to read system metrics, so install it with pip install psutil if you haven’t already.

import asyncio
import json
import psutil
from fastapi import FastAPI, Request
from fastapi.responses import HTMLResponse, StreamingResponse
from fastapi.middleware.cors import CORSMiddleware

# Define app FIRST
app = FastAPI()

# Then add middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:8080"],
    allow_methods=["GET"],
    allow_headers=["*"],
)

async def system_stats_generator(request: Request):
    while True:
        if await request.is_disconnected():
            print("Client disconnected.")
            break

        cpu_usage = psutil.cpu_percent()
        memory_info = psutil.virtual_memory()

        stats = {
            "cpu_percent": cpu_usage,
            "memory_percent": memory_info.percent,
            "memory_used_mb": round(memory_info.used / (1024 * 1024), 2),
            "memory_total_mb": round(memory_info.total / (1024 * 1024), 2)
        }

        yield f"data: {json.dumps(stats)}\n\n"
        await asyncio.sleep(1)

@app.get("/system-stats")
async def stream_system_stats(request: Request):
    return StreamingResponse(system_stats_generator(request), media_type="text/event-stream")

@app.get("/", response_class=HTMLResponse)
async def read_root():
    with open("index.html") as f:
        return HTMLResponse(content=f.read())

This code constructs a real-time system monitoring service using the FastAPI web framework. It creates a web server that continuously tracks and broadcasts the host machine’s CPU and memory usage to any connected web client.

First, it initialises a FastAPI application and configures Cross-Origin Resource Sharing (CORS) middleware. This middleware is a security feature that is explicitly configured here to allow a web page served from http://localhost:8080 to make requests to this server, which is a common requirement when the frontend and backend are developed separately.

The core of the application is the system_stats_generator asynchronous function. This function runs in an infinite loop, and in each iteration, it uses the psutil library to fetch the current CPU utilisation percentage and detailed memory statistics, including the percentage used, megabytes used, and total megabytes. It packages this information into a dictionary, converts it to a JSON string, and then yields it in the text/event-stream format (data: ...\n\n).

The use of asyncio.sleep(1) introduces a one-second pause between updates, preventing the loop from consuming excessive resources. The function is also designed to detect when a client has disconnected and gracefully stop sending data for that client.
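This disconnect-aware pattern can be demonstrated without FastAPI at all. The sketch below substitutes a fake is_disconnected coroutine (a stand-in for FastAPI's request.is_disconnected(), not a real FastAPI helper) that reports a disconnect after three events, so the generator terminates cleanly.

```python
import asyncio
import json

async def stream_until_disconnect(is_disconnected, get_stats):
    """Yield SSE-formatted payloads until the client goes away.

    is_disconnected is a stand-in for FastAPI's request.is_disconnected().
    """
    while True:
        if await is_disconnected():
            break
        yield f"data: {json.dumps(get_stats())}\n\n"
        await asyncio.sleep(0)  # a real stream would sleep ~1 second here

async def demo():
    calls = {"n": 0}

    async def fake_is_disconnected():
        # Pretend the client disconnects after three events
        calls["n"] += 1
        return calls["n"] > 3

    return [e async for e in stream_until_disconnect(
        fake_is_disconnected, lambda: {"cpu_percent": 10.0})]

events = asyncio.run(demo())
print(len(events))  # 3
```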

The script defines two web endpoints. The @app.get("/system-stats") endpoint creates a StreamingResponse that initiates the system_stats_generator. When a client makes a GET request to this URL, it establishes a persistent connection, and the server begins streaming the system stats every second. The second endpoint, @app.get("/"), serves a static HTML file named index.html as the main page. This HTML file contains the JavaScript code needed to connect to the /system-stats stream and dynamically display the incoming performance data on the web page.

Now, here is the index.html front-end code. The version below is a minimal page that connects to the /system-stats stream with EventSource and displays the incoming values.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>System Monitor</title>
</head>
<body>
    <h1>System Monitor</h1>
    <h2>CPU Usage</h2>
    <p id="cpu">0%</p>
    <h2>Memory Usage</h2>
    <p id="memory">0% (0 / 0 MB)</p>
    <script>
        const source = new EventSource("http://localhost:8000/system-stats");
        source.onmessage = (event) => {
            const stats = JSON.parse(event.data);
            document.getElementById("cpu").textContent = stats.cpu_percent + "%";
            document.getElementById("memory").textContent =
                stats.memory_percent + "% (" + stats.memory_used_mb +
                " / " + stats.memory_total_mb + " MB)";
        };
    </script>
</body>
</html>

Run the app using Uvicorn, as we did in Example 1. Then, in a separate command window, type the following to start a Python server.

python3 -m http.server 8080

Now, open the URL http://localhost:8080/index.html in your browser, and you will see the output, which should update continuously.

Image by Author

Code Example 3 — Background Task Progress Bar

In this example, we initiate a task and display a bar indicating the task’s progress.

Updated app.py

import asyncio
import json
from fastapi import FastAPI, Request
from fastapi.responses import HTMLResponse, StreamingResponse
from fastapi.middleware.cors import CORSMiddleware

# Define app FIRST
app = FastAPI()

# Then add middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:8080"],
    allow_methods=["GET"],
    allow_headers=["*"],
)

async def training_progress_generator(request: Request):
    """
    Simulates a long-running AI training task and streams progress.
    """
    total_epochs = 10
    steps_per_epoch = 100

    for epoch in range(1, total_epochs + 1):
        # Simulate some initial processing for the epoch
        await asyncio.sleep(0.5)

        for step in range(1, steps_per_epoch + 1):
            # Check if client has disconnected
            if await request.is_disconnected():
                print("Client disconnected, stopping training task.")
                return

            # Simulate work
            await asyncio.sleep(0.02)

            progress = (step / steps_per_epoch) * 100
            simulated_loss = (1 / epoch) * (1 - (step / steps_per_epoch)) + 0.1

            progress_data = {
                "epoch": epoch,
                "total_epochs": total_epochs,
                "progress_percent": round(progress, 2),
                "loss": round(simulated_loss, 4)
            }

            # Send a named event "progress"
            yield f"event: progress\ndata: {json.dumps(progress_data)}\n\n"

    # Send a final "complete" event
    yield f"event: complete\ndata: Training complete!\n\n"

@app.get("/stream-training")
async def stream_training(request: Request):
    """SSE endpoint to stream training progress."""
    return StreamingResponse(training_progress_generator(request), media_type="text/event-stream")

@app.get("/", response_class=HTMLResponse)
async def read_root():
    """Serves the main HTML page."""
    with open("index.html") as f:
        return HTMLResponse(content=f.read())

The updated index.html code is this — a minimal version with a start button, a progress bar, and listeners for the named progress and complete events.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Live Task Progress</title>
</head>
<body>
    <h1>Live Task Progress</h1>
    <button id="start">Start Task</button>
    <div style="width: 300px; border: 1px solid #ccc;">
        <div id="bar" style="width: 0; height: 20px; background: #4caf50;"></div>
    </div>
    <p id="status">Idle</p>
    <script>
        document.getElementById("start").onclick = () => {
            const source = new EventSource("http://localhost:8000/stream-training");
            source.addEventListener("progress", (event) => {
                const p = JSON.parse(event.data);
                document.getElementById("bar").style.width = p.progress_percent + "%";
                document.getElementById("status").textContent =
                    "Epoch " + p.epoch + "/" + p.total_epochs + " - loss " + p.loss;
            });
            source.addEventListener("complete", (event) => {
                document.getElementById("status").textContent = event.data;
                source.close();
            });
        };
    </script>
</body>
</html>
Stop your existing uvicorn and Python server processes if they’re still running, and then restart both.

Now, when you open the index.html page, you should see a screen with a button. Pressing the button will start a dummy task, and a moving bar will display the task progress.

Image by Author

Code Example 4 — A Real-Time Financial Stock Ticker

For our final example, we will create a simulated stock ticker. The server will generate random price updates for several stock symbols and send them using named events, where the event name corresponds to the stock symbol (e.g., event: AAPL, event: GOOGL). This is a powerful pattern for multiplexing different kinds of data over a single SSE connection, allowing the client to handle each stream independently.
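Before we look at the server code, here is a small Python sketch of the dispatch logic the browser performs when you register one addEventListener per symbol: each named event is routed to the handler registered for that name, and events with no listener are simply ignored. The handler names and payloads below are illustrative.

```python
import json

def dispatch(event_name, data, handlers):
    """Route a named SSE event to the handler registered for that name,
    the way the browser routes events added with addEventListener."""
    handler = handlers.get(event_name)
    if handler is not None:
        handler(json.loads(data))

prices = {}
# One handler per symbol we care about (illustrative)
handlers = {
    "AAPL": lambda update: prices.update(AAPL=update["price"]),
    "GOOGL": lambda update: prices.update(GOOGL=update["price"]),
}

dispatch("AAPL", json.dumps({"symbol": "AAPL", "price": 150.25}), handlers)
dispatch("MSFT", json.dumps({"symbol": "MSFT", "price": 300.0}), handlers)
print(prices)  # {'AAPL': 150.25} -- the MSFT event had no listener
```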

Here is the updated app.py code you’ll need.

import asyncio
import json
import random
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from fastapi.middleware.cors import CORSMiddleware

# Step 1: Create app first
app = FastAPI()

# Step 2: Add CORS to allow requests from http://localhost:8080
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:8080"],
    allow_methods=["GET"],
    allow_headers=["*"],
)

# Step 3: Simulated stock prices
STOCKS = {
    "AAPL": 150.00,
    "GOOGL": 2800.00,
    "MSFT": 300.00,
}

# Step 4: Generator to simulate updates
async def stock_ticker_generator(request: Request):
    while True:
        if await request.is_disconnected():
            break

        symbol = random.choice(list(STOCKS.keys()))
        change = random.uniform(-0.5, 0.5)
        STOCKS[symbol] = max(0, STOCKS[symbol] + change)

        update = {
            "symbol": symbol,
            "price": round(STOCKS[symbol], 2),
            "change": round(change, 2)
        }

        # Send named events so the browser can listen by symbol
        yield f"event: {symbol}\ndata: {json.dumps(update)}\n\n"
        await asyncio.sleep(random.uniform(0.5, 1.5))

# Step 5: SSE endpoint
@app.get("/stream-stocks")
async def stream_stocks(request: Request):
    return StreamingResponse(stock_ticker_generator(request), media_type="text/event-stream")

And the updated index.html — a minimal version that registers one listener per stock symbol and updates the matching row.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Live Stock Ticker</title>
</head>
<body>
    <h1>Live Stock Ticker</h1>
    <p>AAPL: <span id="AAPL">-</span></p>
    <p>GOOGL: <span id="GOOGL">-</span></p>
    <p>MSFT: <span id="MSFT">-</span></p>
    <script>
        const source = new EventSource("http://localhost:8000/stream-stocks");
        ["AAPL", "GOOGL", "MSFT"].forEach((symbol) => {
            source.addEventListener(symbol, (event) => {
                const update = JSON.parse(event.data);
                document.getElementById(symbol).textContent =
                    update.price + " (" + (update.change >= 0 ? "+" : "") +
                    update.change + ")";
            });
        });
    </script>
</body>
</html>
Stop then restart the uvicorn and Python processes as before. This time, when you open http://localhost:8080/index.html in your browser, you should see a screen like this, which will continually update the dummy prices of the three stocks.

Image by Author

Summary

In this article, I demonstrated that for many real-time use cases, Server-Sent Events offer a simpler alternative to WebSockets. We discussed the core principles of SSE, including its one-way communication model and automatic reconnection capabilities. Through a series of hands-on examples using Python and FastAPI, we saw just how easy it is to build powerful real-time features. We covered:

  • A simple Python backend with an SSE endpoint.
  • A live system monitoring dashboard streaming structured JSON data.
  • A real-time progress bar for a simulated long-running background task.
  • A multiplexed stock ticker using named events to manage different data streams.

Next time you need to push data from your server to a client, I encourage you to pause before reaching for WebSockets. Ask yourself if you truly need bi-directional communication. If the answer is no, then Server-Sent Events are likely the more straightforward, faster, and more robust solution you’ve been looking for.
