SSE vs WebSockets vs polling: the 2026 decision guide
When SSE is the right default, when WebSockets earn their complexity, and why polling never went away.
Server-Sent Events, WebSockets, and polling are three approaches to the same problem: how does a server push data to a client without waiting for the client to ask? For years the SSE vs WebSockets vs polling comparison came up mainly in the context of stock tickers and chat applications. That changed when LLM APIs went mainstream and streaming token responses became the normal way to consume AI output. Most engineers today encounter this choice because they are integrating OpenAI, Anthropic, or Gemini, not because they are building a trading terminal. Understanding why those APIs all chose SSE reveals most of what you need to decide which protocol to reach for.
The three protocols
Server-Sent Events (SSE) are a one-way channel from server to client over a persistent HTTP connection. The server keeps the response open and pushes data formatted as "data: <payload>" followed by two newlines. The browser's native EventSource API handles reconnection automatically and sends a Last-Event-ID header on reconnect, so the server can resume the stream without data loss. SSE is part of the HTML standard and runs over HTTP/1.1 or HTTP/2.
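The wire format is simple enough to sketch by hand. Here is a minimal helper that serializes one event; the field names (`id`, `event`, `data`) come from the text/event-stream format, but the helper itself is illustrative, not a library API:

```javascript
// Serialize one SSE event. An optional id lets the browser resume via
// Last-Event-ID after a reconnect; a blank line terminates the event.
function formatSseEvent({ id, event, data }) {
  let out = '';
  if (id !== undefined) out += `id: ${id}\n`;
  if (event !== undefined) out += `event: ${event}\n`;
  out += `data: ${data}\n\n`;
  return out;
}

// formatSseEvent({ id: 7, data: 'hi' }) → 'id: 7\ndata: hi\n\n'
```

Because every event carries its own `id`, a server that stores recent events can replay everything after the `Last-Event-ID` the browser sends on reconnect.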
WebSockets open a persistent, full-duplex TCP connection after an HTTP upgrade handshake. Both server and client can send messages at any time, in either direction, without per-message HTTP overhead. The protocol is binary-capable and frames individual messages, but it prescribes no payload format or request/response semantics; you define those yourself.
Polling is the client making periodic HTTP requests, with the server responding immediately whether or not there is new data. Long-polling is a variant where the server holds the connection open until data arrives, responds, and closes it; the client then immediately re-requests. Polling is the oldest of the three and more often correct than its reputation suggests.
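The polling loop is the simplest of the three to write. A generic sketch, where `fetchStatus` and `isDone` are placeholders for whatever your API actually exposes:

```javascript
// Poll until the predicate says we're done. Each iteration is an
// ordinary stateless HTTP request, which is why this maps cleanly
// onto serverless platforms and passes through any proxy.
async function pollUntilDone(fetchStatus, isDone, intervalMs = 15000) {
  for (;;) {
    const status = await fetchStatus();
    if (isDone(status)) return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

In a real client, `fetchStatus` would be a `fetch('/jobs/123')` call and `intervalMs` would be tuned to how quickly the job actually progresses.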
| Protocol | Traffic direction | Infrastructure overhead | Good for | Avoid when |
|---|---|---|---|---|
| SSE | Server to client only | Low (plain HTTP) | LLM streaming, live feeds, push notifications | Client also needs to send a stream back |
| WebSockets | Bidirectional | Higher (stateful; needs WS support at load balancer) | Games, collaborative editors, live trading | Serverless, simple read-only feeds |
| Polling | Client pull | Lowest (stateless HTTP) | Low-frequency status, serverless, async job results | Update frequency exceeds one per 5 to 10 seconds |
Where SSE is the right choice
SSE was designed for unidirectional push: the server generates a stream, the client receives it. Producing tokens from an LLM and sending them to a browser as they arrive is exactly that pattern. The client sends one request and reads an indefinitely long response. There is nothing to send back while the generation is in flight, so a bidirectional connection adds complexity without adding value.
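On the client, consuming such a stream reduces to buffering chunks and splitting them into events on the blank-line delimiter. A minimal incremental parser sketch (`createSseParser` is an illustrative name, not a library API, and it handles only `data:` fields):

```javascript
// Accumulate raw chunks from a streamed response and emit one callback
// per complete SSE event. Events are separated by a blank line; per the
// format, multiple data: lines in one event are joined with newlines.
function createSseParser(onEvent) {
  let buffer = '';
  return (chunk) => {
    buffer += chunk;
    let sep;
    while ((sep = buffer.indexOf('\n\n')) !== -1) {
      const raw = buffer.slice(0, sep);
      buffer = buffer.slice(sep + 2);
      const data = raw
        .split('\n')
        .filter((line) => line.startsWith('data: '))
        .map((line) => line.slice(6))
        .join('\n');
      if (data) onEvent(data);
    }
  };
}
```

Feeding this from a `fetch` response body reader gives you token-by-token output without waiting for the full generation, which is exactly how LLM streaming clients work under the hood.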
Beyond AI response streaming, SSE is the right default for:
- Live dashboards where state flows from server to browser, such as monitoring panels, log tails, and CI build output
- Push notifications for asynchronous processes: job queue progress, order status updates, invoice processing confirmation
- Real-time feeds where the server owns the state and clients subscribe, such as sports scores or deployment status pages
SSE also has an operational advantage that comparison articles tend to understate: it uses plain HTTP. Most load balancers, reverse proxies, and CDNs handle it without any special configuration. WebSocket connections require the intermediary to support the HTTP upgrade handshake. SSE generally works through Nginx, Cloudflare, and standard API gateways; the one common gotcha is response buffering, which Nginx enables by default and which delays events until a buffer fills. Sending an X-Accel-Buffering: no response header, or setting proxy_buffering off, restores immediate delivery.
Where WebSockets are the right choice
WebSockets win on bidirectionality. If the client needs to push a continuous stream to the server, not a single request but an ongoing flow, a WebSocket connection is the right tool.
The cases that genuinely need WebSockets:
- Multiplayer games, where position and input data flows from each client to the server and back at tens to hundreds of messages per second
- Collaborative editors where every keystroke or cursor move from any participant must be relayed to all others with minimal latency
- Live trading terminals that both display market data and send orders over the same connection
- Audio and video signalling, though WebRTC handles the actual media; WebSockets typically carry the signalling channel
The tradeoff worth knowing before committing: WebSocket connections are stateful and long-lived. A horizontally scaled service needs a coordination layer so that a message produced by server instance A reaches clients connected to instances B and C. Redis Pub/Sub is the most common solution. SSE reconnects are plain HTTP requests and can hit any instance; the statefulness lives at the event store, not the connection.
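The coordination pattern is worth seeing in miniature. This sketch replaces Redis with an in-memory bus to show the shape: each instance publishes instead of writing to its local clients directly, and every instance relays bus messages to the clients it holds. All names here are illustrative:

```javascript
// Stand-in for Redis Pub/Sub: a shared channel every instance joins.
function createBus() {
  const subscribers = [];
  return {
    subscribe(fn) { subscribers.push(fn); },
    publish(msg) { subscribers.forEach((fn) => fn(msg)); },
  };
}

// One server instance. Its "clients" are stand-ins for the local
// WebSocket connections it happens to hold.
function createInstance(bus) {
  const clients = [];
  // Relay every bus message to this instance's local clients.
  bus.subscribe((msg) => clients.forEach((inbox) => inbox.push(msg)));
  return {
    connect() { const inbox = []; clients.push(inbox); return inbox; },
    // Publish to the bus rather than writing locally, so clients on
    // other instances receive the message too.
    send(msg) { bus.publish(msg); },
  };
}
```

Swapping the in-memory bus for Redis Pub/Sub leaves the instance code essentially unchanged; only the transport behind subscribe and publish moves out of process.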
Where polling is still correct
Before reaching for SSE or WebSockets, ask: how often does the underlying data actually change? If the answer is once every several minutes, opening a persistent connection for the duration of a session wastes a slot on a stream that is silent most of the time.
Polling is the right choice when:
- Data changes at a predictable low frequency: order status, async job position, webhook delivery confirmation
- The client runs in a serverless environment where persistent connections do not map to the execution model, such as AWS Lambda, Cloudflare Workers without Durable Objects, or standard Vercel functions
- The client may be dormant for long stretches, such as a background browser tab or a mobile app that checks when the user opens it
A poll every 15 to 60 seconds is cheaper to build, cheaper to operate, and produces the same user experience for anything that does not change in real time. Long-polling sits between the two and is rarely worth the added complexity; if the latency budget is tight enough that the poll interval matters, SSE is usually simpler.
SSE vs WebSockets vs polling: three questions to decide
If you are choosing a protocol from scratch, three questions cover most of the decision:
- Does the client need to send a continuous stream of data to the server, not a single request but an ongoing flow? If yes: WebSockets.
- How often does the underlying data actually change? If less than roughly once every ten seconds: polling. If frequently: continue.
- Are you on HTTP/2? If yes: SSE. If not, and you need many simultaneous real-time connections from one page, check your connection budget: browsers cap HTTP/1.1 at roughly six connections per domain, and each EventSource holds one open. SSE still usually works.
Most real-time product features are unidirectional. The fraction of applications that genuinely need WebSockets is smaller than most engineers assume on first encounter. If bidirectionality is not a core requirement of the feature, SSE is the right default.
WebTransport is worth a mention for completeness. It runs over HTTP/3 (QUIC) and supports bidirectional streams natively. It is production-ready in Chrome and Edge as of 2025 and is where major browser vendors are heading for latency-sensitive applications. For most teams building today, it is not yet a practical replacement for WebSockets across the full browser matrix, but it is worth tracking.
What an SSE endpoint looks like in practice
An SSE endpoint in Express is a few lines:
```javascript
app.get('/stream', (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders();

  const send = (data) => res.write(`data: ${JSON.stringify(data)}\n\n`);
  const interval = setInterval(() => send({ ts: Date.now() }), 1000);

  req.on('close', () => clearInterval(interval));
});
```

The client side needs only the native EventSource API:
```javascript
const es = new EventSource('/stream');
es.onmessage = ({ data }) => console.log(JSON.parse(data));
es.onerror = () => console.error('Connection dropped; EventSource will retry automatically');
```

One practical note: the native EventSource API only supports GET requests and does not allow custom headers on the initial connection. If you need to pass a Bearer token, either include it as a short-lived query parameter or use a fetch-based library such as @microsoft/fetch-event-source, which supports POST requests and custom headers with the same text/event-stream format.
LLM streaming APIs changed the context for this question. Before AI responses became ubiquitous, SSE vs WebSockets vs polling was a specialist concern. Now it comes up for anyone shipping a product with a generation step. SSE is the right default because most real-time product features push data one way. Reach for WebSockets when the client generates its own stream. Reach for polling when "real-time" means updated within a minute or two. Each protocol earns its place in specific situations, and the situation usually makes the choice clear.