Redis, Valkey, or Dragonfly in 2026: how to actually decide
The benchmark-first approach to this choice is wrong. Start with licensing, then ecosystem, then performance.
In March 2024, Redis (the company formerly known as Redis Labs) relicensed Redis from BSD to a dual licence: the Server Side Public License (SSPLv1) and the Redis Source Available License (RSALv2). Within a month, a group of cloud vendors and contributors forked Redis 7.2.4 into Valkey under the Linux Foundation. By late 2024, AWS ElastiCache and Google Cloud Memorystore had switched their default engine from Redis to Valkey.
That left teams with three meaningful options: stay on Redis, migrate to Valkey, or adopt Dragonfly, a ground-up reimplementation of the Redis protocol with a different architecture and its own licence terms.
Most of the content covering this decision leads with benchmark numbers: Dragonfly reaching 4.5× Valkey's throughput in high-concurrency benchmarks on large cloud instances, Valkey running 3–7% faster than Redis 7.x on most measurements. This is the wrong place to start. For most Redis workloads, neither number is relevant. The real decision is about licensing, ecosystem trajectory, and operational cost, and only then about performance.
What changed, and why it matters
The licence change affected Redis's relationship with cloud providers, not its relationship with most end users. AWS, Google, and others had been offering managed Redis as a service. That became untenable under SSPL terms, which prohibit providing the software as a hosted service without a commercial agreement. For teams self-hosting Redis for their own applications, the SSPL does not restrict use in any immediate way.
The risk is trajectory. A licence that changed once can change again. And the Valkey fork happened with serious backing: AWS, Google, Oracle, Ericsson, Snap, and others were among the initial Linux Foundation contributors. By late 2024, Valkey was the default engine on both major managed Redis services. That is a stronger endorsement than any benchmark figure.
Redis itself continued active development. Version 8.0, released under the same SSPL terms, added vector search, time series, and probabilistic data structures as built-in capabilities rather than optional modules. Redis is not standing still, and the software quality remains high. The open-source story is weaker than it was in 2023, but the product is not.
Redis, Valkey, and Dragonfly: what each option actually is
Before getting to numbers, it helps to be precise about what each option represents.
Redis is the original product, now at version 8.x, under SSPL v1 and RSALv2. For teams self-hosting for internal use, these licences impose no restriction and there is no additional cost. The software continues to receive new capabilities at a steady pace.
Valkey is a BSD-licensed fork of Redis 7.2.4, governed by the Linux Foundation, now at version 8.1. The API is essentially identical to Redis 7.x — the same data structures, the same commands, the same protocol. Valkey's 8.0 release added enhanced asynchronous I/O threading, producing modest throughput gains. The practical difference from Redis for most teams is the licence and the governance structure, not the software behaviour.
Dragonfly is not a fork. It is a complete reimplementation of the Redis and Memcached APIs, written from scratch with a multi-threaded, shared-nothing architecture. It speaks the Redis protocol, so existing client libraries connect to it exactly as they would to Redis or Valkey. But internally, it is entirely different software. Dragonfly operates under the Business Source License 1.1, which permits use for any purpose except offering it as a commercial managed service to others.
Why benchmarks are the wrong starting point
Dragonfly's multi-threaded architecture produces real throughput gains. On a 48-vCPU instance, Dragonfly reaches roughly 4.5× the throughput of Valkey in a high-concurrency benchmark. That number is not fabricated — it reflects a genuine architectural difference.
The question is whether that difference matters at the scale your application actually runs.
Redis and Valkey's single-threaded command processing becomes a bottleneck somewhere above 100,000 to 200,000 operations per second on a single node. On fast hardware with a low-latency network, the ceiling is higher. Look at what typical Redis deployments handle: session tokens for a mid-size web application, rate limiting counters for an API, a job queue for background processing, pub/sub for a few hundred subscribers. These workloads run at 1,000 to 20,000 ops/second. A single Redis instance on current hardware has orders of magnitude of headroom before the single-threaded core becomes a constraint.
Dragonfly's throughput advantage is meaningful when you are CPU-bound on a high-core-count instance and your p99 latency is being affected by Redis's command serialisation. If you are not there, you are paying for an architectural advantage you are not using — and taking on a less mature codebase, a more complex licence, and reduced module support in exchange.
You would know if you had hit Redis's ceiling. It shows up in monitoring as sustained CPU saturation on the Redis process under peak load, combined with latency degradation that doesn't improve when you increase memory or connection limits. If you are not seeing that pattern, you are not in Dragonfly's target workload.
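That monitoring pattern can be checked mechanically from two `redis-cli INFO` snapshots taken a few seconds apart. A minimal stdlib sketch, assuming you pass it the raw INFO text: `used_cpu_sys` and `used_cpu_user` are real INFO fields (cumulative CPU seconds for the process), but the 0.90 threshold is an illustrative cut-off, not an official figure.

```python
def parse_info(raw: str) -> dict:
    """Parse `redis-cli INFO` output (key:value lines, '#' section headers)."""
    fields = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        fields[key] = value
    return fields

def single_thread_saturated(info_t0: str, info_t1: str, interval_s: float,
                            cpu_threshold: float = 0.90) -> bool:
    """True if the process burned roughly one full core over the interval.

    used_cpu_sys / used_cpu_user are cumulative seconds, so the delta
    divided by wall-clock time approximates core utilisation.
    """
    a, b = parse_info(info_t0), parse_info(info_t1)
    cpu_a = float(a["used_cpu_sys"]) + float(a["used_cpu_user"])
    cpu_b = float(b["used_cpu_sys"]) + float(b["used_cpu_user"])
    return (cpu_b - cpu_a) / interval_s >= cpu_threshold
```

If this stays false during your peak window, the single-threaded core is not your constraint.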
“A Redis node handling session storage, rate limiting, and a job queue simultaneously is unlikely to be your throughput bottleneck — even in 2026.”
How to actually decide
Three questions, in order. The first two resolve the decision for most teams; the third only comes into play at a specific scale.
First: are you on a managed cloud service? If yes, the decision is largely made for you. AWS ElastiCache and Google Cloud Memorystore both defaulted new instances to Valkey in 2024. Existing Redis clusters will run through their support window, but the forward path is Valkey and the migration tooling is already mature. Unless you have a specific reason to resist the platform default, follow it.
Second: do you self-host and care about long-term licence stability? Then Valkey is the safer bet. SSPL does not restrict internal use today, but the licence trajectory matters if you are committing to this infrastructure for several years. Valkey's BSD licence and Linux Foundation governance are more stable than a proprietary vendor's licensing decisions. Migration from Redis 7.x to Valkey is low-cost — the API is identical, client libraries require no changes, and the behavioural differences are negligible for almost all workloads.
Third: have you actually hit Redis's single-threaded throughput ceiling? Only then is Dragonfly a strong candidate. At that scale, the throughput and memory efficiency gains are real and measurable. Verify the BSL terms with your legal team, particularly if you are deploying into any kind of shared internal infrastructure platform.
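The three questions collapse into a small decision function. A sketch that encodes this article's ordering; the inputs and labels are this article's categories, not anyone's official guidance:

```python
def choose_engine(managed_cloud: bool,
                  hit_single_thread_ceiling: bool,
                  wants_licence_stability: bool,
                  needs_redis8_builtins: bool = False) -> str:
    """Encode the three-questions-in-order decision flow."""
    if managed_cloud:
        return "valkey"        # follow the platform default
    if hit_single_thread_ceiling:
        return "dragonfly"     # after legal review of the BSL terms
    if needs_redis8_builtins:
        return "redis"         # vector search / time series built-ins
    if wants_licence_stability:
        return "valkey"        # BSD licence, Linux Foundation governance
    return "redis or valkey"   # no forcing constraint either way
```

Most teams exit at the first or last `valkey` branch; the `dragonfly` branch is only reachable once the saturation pattern above is actually observed.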
The specific scenarios
| Scenario | Best option | Primary reason |
|---|---|---|
| New project, cloud-hosted | Valkey (platform default) | ElastiCache and Memorystore already default to it |
| New project, self-hosted | Valkey | Open licence; drop-in API compatibility with Redis 7.x |
| Existing Redis 7.x, no performance problems | Valkey | Low migration cost; better licence trajectory |
| High-concurrency, CPU-bound on large instance | Dragonfly | Multi-threaded architecture is the right fit; verify BSL terms first |
| Need Redis 8 AI features (vector search, time series) | Redis | These capabilities are not yet at full parity in Valkey 8.1 |
| Internal shared infra platform provisioning caches for teams | Valkey or Redis | Dragonfly BSL may cover this pattern — get legal clarity first |
| Managed-service vendor offering Redis-compatible hosting | Valkey or Redis commercial | Both Dragonfly BSL and Redis SSPL restrict this use case |
The migration is less work than it looks
Migrating from Redis to Valkey has a deserved reputation for being straightforward. A few things are still worth checking before you cut over.
Client libraries require no changes. ioredis, go-redis, jedis, redis-py, and every standard Redis client library connect to Valkey without modification, because Valkey implements the same protocol. Only the connection string changes — host and port, nothing else.
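The protocol claim is easy to see at the wire level. A RESP command is an array of bulk strings, so the bytes a client sends for `SET session:42 abc` are identical whether the server is Redis, Valkey, or Dragonfly. A minimal stdlib sketch of that encoding:

```python
def encode_resp(*parts: str) -> bytes:
    """Encode a command as a RESP array of bulk strings:
    *<count>\r\n, then per part: $<byte-length>\r\n<bytes>\r\n."""
    out = [b"*%d\r\n" % len(parts)]
    for part in parts:
        raw = part.encode("utf-8")
        out.append(b"$%d\r\n%s\r\n" % (len(raw), raw))
    return b"".join(out)

# The same wire bytes work against Redis, Valkey, or Dragonfly:
wire = encode_resp("SET", "session:42", "abc")
# wire == b"*3\r\n$3\r\nSET\r\n$10\r\nsession:42\r\n$3\r\nabc\r\n"
```

This is why the client libraries need no changes: nothing they emit or parse is engine-specific.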
Persistence format compatibility has a direction. Valkey can read Redis RDB and AOF files up to Redis 7.2.x. If you are on Redis 8.x and relying on RDB backups, check version compatibility before attempting a direct file import. The safe approach is to flush the existing data and reload from your application or a backup, rather than importing the persistence file across versions.
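The direction check can be automated before you attempt an import. An RDB file starts with the five-byte ASCII magic `REDIS` followed by a four-digit version number. A sketch; the cut-off of 11 is an assumption (the version Redis 7.2.x writes) that you should confirm against your Valkey release notes:

```python
def rdb_version(header: bytes) -> int:
    """Read the RDB header: 5-byte magic b'REDIS' + 4 ASCII digits,
    e.g. b'REDIS0011' -> 11."""
    if header[:5] != b"REDIS":
        raise ValueError("not an RDB file")
    return int(header[5:9])

# Assumed maximum importable version (Redis 7.2.x era); confirm the
# exact figure for the Valkey release you are migrating to.
MAX_IMPORTABLE_RDB = 11

def safe_to_import(header: bytes) -> bool:
    """True if a direct RDB file import is plausible; otherwise reload
    from the application or a logical backup instead."""
    return rdb_version(header) <= MAX_IMPORTABLE_RDB
```

Reading the first nine bytes of the dump file is enough to run this check before a cutover.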
Modules are the main friction point. Redis modules (RedisJSON, RediSearch, RedisTimeSeries, RedisBloom) are not compatible with Valkey. Valkey 8.1 includes built-in implementations of JSON and search derived from community contributions, but the configuration syntax and some behaviours differ from the module versions. If your application depends on advanced module features, test those specific paths against Valkey's built-ins before committing to the migration. Dragonfly has its own built-in equivalents and does not support Redis modules at all.
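Those specific paths can be smoke-tested generically. A sketch of a capability probe that accepts any "run command" callable (for example, a redis-py client's `execute_command` method) so it works unchanged against Redis, Valkey, or Dragonfly; `JSON.SET` and `FT._LIST` are real RedisJSON/RediSearch command names, but treat the overall shape as illustrative:

```python
def probe_capabilities(run) -> dict:
    """Try one representative command per capability against the target.

    `run` is any callable that sends a command and raises on an
    unknown-command error. Returns {capability: supported?}.
    """
    probes = {
        "json":   ("JSON.SET", "__probe__", "$", '{"ok":true}'),
        "search": ("FT._LIST",),
    }
    results = {}
    for name, cmd in probes.items():
        try:
            run(*cmd)
            results[name] = True
        except Exception:
            results[name] = False
    return results
```

A probe like this tells you whether the commands exist; it does not tell you whether behaviour and configuration match the module versions, which is why testing your actual query paths still matters.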
The bottom line
Two years after the licence change, the ecosystem has settled enough to make a clear decision. Valkey is real software with real momentum: AWS and Google are both shipping it in production at scale, and version 8.1 shows active development, not just maintenance.
For most teams, the path is straightforward: migrate from Redis 7.x to Valkey when it is practical to do so, primarily for licence hygiene. The API is the same. The behaviour is nearly identical. The long-term governance trajectory is better.
Dragonfly earns a serious evaluation when you have actually hit Redis's throughput ceiling — not as a preemptive upgrade for a workload that tops out at 15,000 ops/second. Leading with its benchmark numbers to justify adoption is like buying a 40-seat coach because it can carry more passengers than your current car.
Redis itself remains defensible if you need its newer AI-native features and your legal team is comfortable with SSPL for self-hosted internal use. The open-source story has changed, but the software quality has not.
The 2024 fork was disruptive. The 2026 picture is clearer.