When we talk about feature flag performance, we’re really talking about a fundamental architectural question: where does the evaluation happen?
Traditional feature flag services follow a simple model — your application makes a network request to a centralized API, receives the flag state, and continues execution. This works, but it introduces latency that compounds across every evaluated flag in every request.
At ShipSilently, we took a different approach. We moved flag evaluation to the edge.
The Latency Problem
Consider a typical request flow in a cloud-hosted application:
- User request arrives at your server
- Your application calls the feature flag service API
- The flag service evaluates the rules and returns a result
- Your application continues processing
Even with an optimized flag service responding in 20–50ms, those milliseconds add up. If you’re evaluating three flags sequentially per request, you’ve added 60–150ms of latency before your application even starts doing real work.
For latency-sensitive applications — e-commerce checkout flows, real-time dashboards, API endpoints — this overhead is unacceptable.
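To make the compounding concrete, here is a minimal sketch that simulates three sequential remote lookups (with a hypothetical ~30ms of network latency each) against three in-process cache reads. The timings and flag names are illustrative, not measurements of any real service:

```typescript
// Simulate a remote flag lookup with ~30ms of network latency.
const remoteFlag = (name: string): Promise<boolean> =>
  new Promise((resolve) => setTimeout(() => resolve(true), 30));

// Simulate an in-process read from a locally cached rule set.
const localCache = new Map<string, boolean>([
  ['flag-a', true],
  ['flag-b', false],
  ['flag-c', true],
]);
const localFlag = (name: string): boolean => localCache.get(name) ?? false;

async function compare(): Promise<void> {
  // Three sequential network round-trips: latency adds up.
  const t0 = Date.now();
  await remoteFlag('flag-a');
  await remoteFlag('flag-b');
  await remoteFlag('flag-c');
  const remoteMs = Date.now() - t0;

  // Three local reads: no network involved.
  const t1 = Date.now();
  localFlag('flag-a');
  localFlag('flag-b');
  localFlag('flag-c');
  const localMs = Date.now() - t1;

  console.log(`remote: ~${remoteMs}ms, local: ~${localMs}ms`);
}

compare();
```

Run locally, the remote path lands around 90ms while the local path is effectively instantaneous.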
Evaluation at the Edge
Edge computing fundamentally changes this architecture. Instead of making a network call to a centralized service, the flag rules are distributed to edge nodes around the world. When your application evaluates a flag, it reads from a local cache — no network round-trip required.
Here’s what this looks like in practice with ShipSilently:
```typescript
import { initialize } from '@shipsilently/node';

const client = initialize(process.env.SHIP_API_KEY);

// Evaluates locally — no network call
const showNewUI = await client.getFlag('redesigned-dashboard', {
  userId: user.id,
  plan: user.plan,
});
```
The `getFlag` call reads from a locally synchronized rule set. Evaluation happens in-process, typically completing in under 1ms.
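In-process evaluation is ultimately just a function over a cached rule set. Here is a minimal sketch of what such an evaluator could look like; the `Rule` shape and `evaluate` function are illustrative assumptions, not ShipSilently's actual internals:

```typescript
interface Rule {
  flag: string;
  // Match when every listed attribute equals the user's context value.
  match: Record<string, string>;
  value: boolean;
}

// Locally cached rule set, kept in sync by the SDK in the background.
const rules: Rule[] = [
  { flag: 'redesigned-dashboard', match: { plan: 'pro' }, value: true },
  { flag: 'redesigned-dashboard', match: {}, value: false }, // fallback rule
];

// Pure, in-memory evaluation: no network round-trip.
function evaluate(flag: string, context: Record<string, string>): boolean {
  for (const rule of rules) {
    if (rule.flag !== flag) continue;
    const matches = Object.entries(rule.match).every(
      ([key, want]) => context[key] === want,
    );
    if (matches) return rule.value;
  }
  return false; // unknown flags default to off
}
```

With this shape, `evaluate('redesigned-dashboard', { userId: '42', plan: 'pro' })` returns `true`, while a `free`-plan context falls through to the fallback rule and returns `false`.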
How Synchronization Works
The natural question is: how do the rules stay up to date?
ShipSilently uses a three-layer synchronization architecture:
- Initial bootstrap — When your application starts, the SDK fetches the complete rule set and caches it in memory.
- Server-Sent Events (SSE) — A persistent connection receives real-time updates as rules change in the dashboard.
- Stale-while-revalidate — If the SSE connection drops, the SDK continues serving cached rules while reconnecting in the background.
This means flag changes propagate to all edge nodes within 500ms, while evaluation latency stays under 1ms regardless of network conditions.
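The stale-while-revalidate layer can be sketched as a small cache wrapper: reads always return immediately from memory, and a failed refresh never evicts the last known good rule set. This is a hypothetical shape for illustration; the real SDK also handles SSE reconnection and bootstrap retries:

```typescript
type RuleSet = Record<string, boolean>;

class FlagCache {
  private rules: RuleSet;

  constructor(bootstrap: RuleSet) {
    // Layer 1: the initial bootstrap fetch seeds the in-memory cache.
    this.rules = bootstrap;
  }

  // Reads are always served from memory, never blocked on the network.
  get(flag: string): boolean {
    return this.rules[flag] ?? false;
  }

  // Called on each update (e.g. an SSE event). If the fetch fails,
  // we keep serving the stale rules and retry in the background.
  async refresh(fetchRules: () => Promise<RuleSet>): Promise<void> {
    try {
      this.rules = await fetchRules();
    } catch {
      // Stale-while-revalidate: swallow the error, keep the cached rules.
    }
  }
}
```

The key property is that `get` never awaits anything: a dropped connection degrades freshness, not latency.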
Real-World Impact
We measured the performance impact across our early-access customers:
| Metric | Centralized API | Edge Evaluation |
|---|---|---|
| p50 evaluation time | 22ms | 0.3ms |
| p99 evaluation time | 89ms | 1.2ms |
| Added latency per request (3 flags) | 66–267ms | 0.9–3.6ms |
For a checkout flow that evaluates five flags, that’s roughly 110ms of added latency at p50 with a centralized API versus about 1.5ms at the edge: the difference between a noticeable delay and an invisible operation.
When Edge Evaluation Matters Most
Edge-based evaluation isn’t just a performance optimization — it changes what’s architecturally possible:
- Serverless functions where cold starts already add latency and every millisecond counts
- High-throughput APIs serving thousands of requests per second where flag evaluation can’t become a bottleneck
- Client-side SDKs where user-perceived latency directly impacts conversion and engagement
- Multi-region deployments where centralized flag services create uneven performance across geographies
The Infrastructure Advantage
Building an edge-native feature flag platform isn’t trivial. It requires a globally distributed data layer, a reliable real-time synchronization protocol, and careful attention to consistency guarantees.
At ShipSilently, we built on Cloudflare’s global network — 300+ points of presence, sub-millisecond KV reads, and Durable Objects for coordination. The result is a feature flag platform where performance is not a trade-off.
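As an illustration of the building blocks involved (not ShipSilently's actual code), a Workers-style handler reading a rule set from a KV namespace might look like the sketch below. The `FLAG_RULES` binding and the `redesigned-dashboard` key are assumptions for the example:

```typescript
// Illustrative Cloudflare Workers-style handler.
// FLAG_RULES is an assumed KV namespace binding holding the rule set.
interface Env {
  FLAG_RULES: { get(key: string, type: 'json'): Promise<unknown> };
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    // KV reads are served from the local point of presence,
    // so no cross-region round-trip is needed.
    const rules = (await env.FLAG_RULES.get('rules', 'json')) as
      | Record<string, boolean>
      | null;
    const enabled = rules?.['redesigned-dashboard'] ?? false;
    return new Response(JSON.stringify({ enabled }), {
      headers: { 'content-type': 'application/json' },
    });
  },
};
// In a real Worker, this object would be the module's default export.
```

Because the handler only touches local storage, its latency profile is the same in every region where the network has a point of presence.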
Try ShipSilently free and experience sub-millisecond flag evaluation. Create your account.