A team I worked with once had 247 entries in their feature flag dashboard. Twelve of them were actually features. The rest were:
- The stripe webhook secret (wrong)
- The list of supported currencies (wrong)
- A user’s preferred timezone (very wrong)
- The maximum file upload size (debatable)
- Whether a specific tenant had access to the beta program (also debatable)
- A Christmas-themed banner from 2023 (just sad)
The dashboard was unusable. Search returned 40 results for any reasonable query. Toggling something at random was an actively dangerous operation, because nobody could tell at a glance whether they were turning off a feature or rotating a credential.
This is what happens when “feature flag” becomes a generic term for any runtime-changeable value. And it is, eventually, what happens to every team that doesn’t draw the lines carefully early on.
This post is about the three categories — feature flags, config values, and user settings — and how to tell them apart without a six-month archaeological survey.
## The three things that look the same at the boundary
From the perspective of code calling `getValue("foo")`, all three look like the same operation. That’s the trap. Read access is identical; everything else is different.
| | Feature flag | Config value | User setting |
|---|---|---|---|
| Who changes it | Engineer / PM / on-call | Engineer | The user |
| How often | Frequently during a rollout, never after | Rarely (release-shaped) | Whenever they feel like it |
| Lifetime | Days to months, then deleted | Years | The user’s account |
| Per-user? | Sometimes (targeting) | No | Always |
| Auditable change | Yes, who/when/what | Yes-ish | Trivially (it’s their data) |
| Rollback model | Flip back instantly | Re-deploy | User changes their mind |
| Scope | One codepath | One service | One account |
| Owner | A team | A team | The end user |
The distinguishing question isn’t “can this value change at runtime” — all three can. The distinguishing questions are: who changes it, how often, and for how long does the change matter?
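The “identical at read time” point can be made concrete with a minimal sketch. The store shapes, keys, and `getValue` helper below are invented for illustration, not any particular SDK:

```typescript
// Three hypothetical stores. At the call site, all three reads look the
// same -- that is exactly the trap.
type Store = Record<string, unknown>;

const flags: Store = { "enable-new-checkout-v3": true };     // temporary, team-owned
const config: Store = { "max-upload-size-mb": 50 };          // durable, service-owned
const userSettings: Store = { "preferred-timezone": "UTC" }; // permanent, user-owned

function getValue(store: Store, key: string): unknown {
  return store[key];
}

// Identical read shape; completely different ownership, change cadence,
// and lifetime rules behind each one.
const checkoutEnabled = getValue(flags, "enable-new-checkout-v3");
const maxUpload = getValue(config, "max-upload-size-mb");
const tz = getValue(userSettings, "preferred-timezone");
```

Nothing at the call site tells you which of the three categories you’re touching; only the store behind it does.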
## Feature flag
A feature flag is a value attached to a codepath, owned by a team, intended to be temporary. It exists to decouple deployment from release, run a rollout, gate an experiment, or serve as a kill switch.
The defining property is impermanence. A healthy feature flag has a death date. It lives for the duration of a rollout — days, weeks, occasionally months — and then it gets removed from code. The flag’s reason for existing was the transition, not the state.
Examples:
- `enable-new-checkout-v3` (rollout)
- `kill-switch-recommendations` (incident lever)
- `experiment-pricing-page-headline-2026q2` (experiment)
If a flag has been in your code for more than 12 months and isn’t a kill switch, it has stopped being a flag. It is now a config value masquerading as a flag, and you should either delete it or move it.
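One way to make that impermanence enforceable is to attach a death date to the flag’s metadata. The `FlagDefinition` shape and `isOverdue` helper here are assumptions for the sketch, not a vendor’s API:

```typescript
// Sketch: flag metadata with an explicit expiry, so "this flag has quietly
// become config" is a query, not an archaeological survey.
interface FlagDefinition {
  key: string;
  owner: string;           // a team, not a person
  createdAt: Date;
  expectedRemoval: Date;   // a healthy flag has a death date
  killSwitch: boolean;     // kill switches are exempt from expiry
}

// A flag past its removal date (and not a kill switch) is a config value
// masquerading as a flag: delete it or move it.
function isOverdue(flag: FlagDefinition, now: Date): boolean {
  return !flag.killSwitch && now > flag.expectedRemoval;
}
```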
## Config value
A config value is a piece of system configuration, owned by a service, intended to be durable. It tunes behavior. It changes when an engineer decides it should. It lives until the underlying feature is removed.
The defining property is permanence. A config value is part of how the system runs. It might change on a release boundary, or in response to a capacity event, but it doesn’t get flipped. Nobody is going to remove `MAX_UPLOAD_SIZE_MB` next quarter; that’s a knob the system will have for as long as the system has uploads.
Examples:
- `max-upload-size-mb`
- `default-cache-ttl-seconds`
- `allowed-payment-currencies`
- `s3-bucket-region`
A subset of config values are secrets (API keys, signing secrets, tokens). Those are still config — but their access pattern, audit requirements, and rotation story are strict enough that they should live in a secret manager, not your config store, and definitely not your feature flag dashboard. If you’re putting Stripe webhook secrets next to your `enable-dark-mode` flag, the category error is the security incident.
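You can even make that boundary mechanical: refuse secret-shaped keys at config load time. The denylist heuristic below is an illustration, not an exhaustive check:

```typescript
// Sketch: reject keys that look like secrets before they land in the
// config store. A name-based heuristic catches the obvious category errors.
const SECRET_HINTS = ["secret", "token", "apikey", "api-key", "password"];

function assertNoSecrets(config: Record<string, string>): void {
  for (const key of Object.keys(config)) {
    const lowered = key.toLowerCase();
    if (SECRET_HINTS.some((hint) => lowered.includes(hint))) {
      throw new Error(`"${key}" looks like a secret; it belongs in the secret manager`);
    }
  }
}
```

A check like this won’t catch a secret hiding under an innocent name, but it stops the “webhook secret in the flag dashboard” mistake from happening by accident.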
## User setting
A user setting is a piece of the user’s own state, owned by the user, intended to last as long as their account. It changes when they change it. It is per-user by definition.
The defining property is ownership. The user is the source of truth. Your job is to store and serve the value back, not to decide it.
Examples:
- `preferred-timezone`
- `email-notification-digest-frequency`
- `theme: dark | light | auto`
- `default-org-on-login`
These should live in your database, on the user record (or an associated settings table), with the same access controls as the rest of the user’s data.
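In application code, that usually reduces to “stored values merged over defaults, user wins.” The field names and `resolveSettings` helper below are illustrative:

```typescript
// Sketch: user settings as plain data on the user record, with
// application-level defaults. The user is the source of truth.
interface UserSettings {
  preferredTimezone: string;
  digestFrequency: "daily" | "weekly" | "off";
  theme: "dark" | "light" | "auto";
}

const DEFAULTS: UserSettings = {
  preferredTimezone: "UTC",
  digestFrequency: "weekly",
  theme: "auto",
};

// Whatever the user has stored overrides the defaults; nothing else does.
function resolveSettings(stored: Partial<UserSettings>): UserSettings {
  return { ...DEFAULTS, ...stored };
}
```

Note what is absent: no flag SDK, no targeting rules, no external service in the read path. It’s a row lookup and a spread.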
## A working decision tree
When the team can’t tell what something is, ask, in order:
- Is the value owned by the end user? → User setting. Store it on the user record. Done.
- Is the value going to be deleted within ~12 months? → Feature flag. Put it in your flag system, give it an owner and an expected expiry.
- Does the value contain a secret, a credential, or a token? → Secret manager. Not the flag system. Not the config store. The secret manager.
- Will this value still exist, with a meaningful name, in two years? → Config value. Service config, environment-tagged, change-controlled.
- None of the above? → You probably don’t need a runtime-changeable value at all. Hardcode it. You can always promote it later if reality disagrees.
The 12-month rule is rough but practical. Anything you can imagine deleting on a foreseeable horizon is a flag. Anything you’d be embarrassed to delete is a config. Anything the user adjusts is a setting.
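The tree is simple enough to write down as code. The `ValueTraits` shape is invented here; the point is that the questions are asked in order and the first “yes” wins:

```typescript
type Category = "user-setting" | "feature-flag" | "secret" | "config-value" | "hardcode";

// Hypothetical answers to the decision-tree questions, in tree order.
interface ValueTraits {
  ownedByEndUser: boolean;      // is the value the user's own state?
  deletedWithinAYear: boolean;  // can you imagine deleting it within ~12 months?
  containsSecret: boolean;      // credential, token, signing secret?
  meaningfulInTwoYears: boolean; // will the name still mean something in two years?
}

// The decision tree above, evaluated top to bottom.
function categorize(t: ValueTraits): Category {
  if (t.ownedByEndUser) return "user-setting";
  if (t.deletedWithinAYear) return "feature-flag";
  if (t.containsSecret) return "secret";
  if (t.meaningfulInTwoYears) return "config-value";
  return "hardcode";
}
```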
## Common miscategorizations (and what each one costs)
Config value stored as a flag. A team puts `max-upload-size-mb: 50` in their flag dashboard “because it’s nice to be able to change it without a deploy.” Six months later it has 14 environment-specific overrides, three of which contradict each other, and nobody has audited which is in effect for any given tenant. Cost: a slow degradation of trust in the flag dashboard, and an outage when a stale override gets applied to the wrong region.
User setting stored as a flag. A team adds `preferred-timezone` to the flag system, targeted by user ID. Now toggling timezones requires the flag service. The flag service has a brief outage. Every user’s timezone defaults to UTC. The support inbox sets a record. Cost: you’ve turned a per-user data lookup into a service dependency for a feature that was supposed to be one row in the `users` table.
Secret stored as a flag. Self-explanatory. Now the people with flag-edit permissions can read your Stripe webhook secret. Some of them are interns. Cost: a SOC 2 finding, at minimum. A real incident, eventually.
Flag stored as a config value. A team gates a new feature behind `ENABLE_NEW_CHECKOUT=true` in their config file. Disabling it requires a deploy. They’ve reinvented Level 2 of the maturity model on top of a working flag system. Cost: incident mitigation drops back from seconds to a build cycle.
Feature flag that should have been deleted. A flag has been at 100% for 14 months. Nobody removes it. The “off” branch is dead code that subtly rots until the day someone deletes it and discovers it wasn’t dead, because something in a forgotten cron job was still hitting it. Cost: a Sunday afternoon incident in honor of a feature that shipped two years ago.
## The pragmatic pattern: same UI, different stores
You probably don’t actually want three separate dashboards; your team will route around them and put everything in whichever one is easiest. What you want is a single pane of glass that knows the difference and applies different rules per category.
That looks roughly like:
- Feature flags. First-class flag system. Targeting, rollouts, owners, expiry dates, audit log, kill-switch semantics, default-safe behavior on read failure.
- Config values. Source-controlled config (or a config service with the same change-management story). Change requires review. Environment-scoped. Versioned.
- User settings. Database. Same migrations, backups, access controls as the rest of your user data. Read by your application code, not a flag SDK.
- Secrets. A secret manager — Vault, AWS Secrets Manager, Doppler, whatever fits. Strict access, rotation tooling, never adjacent to non-secret values.
The discipline is at the categorization boundary. If your flag system is the easiest place for engineers to put runtime values, they will put everything there, and you will end up at 247 entries with twelve real features. The fix is to make the right home for each category at least as easy to use as the wrong one.
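One possible shape for that “single pane of glass” is a thin read facade whose methods route to different stores with different failure rules. The `Stores` interface and `makeReader` factory here are invented for the sketch:

```typescript
// Sketch: one facade, three backing stores, different rules per category.
// Only the flag path gets default-safe failure semantics; config and
// settings reads are allowed to fail loudly.
interface Stores {
  flags: (key: string) => boolean;                   // flag SDK
  config: (key: string) => string;                   // versioned, env-scoped config
  settings: (userId: string, key: string) => string; // database row
}

function makeReader(stores: Stores) {
  return {
    flag: (key: string): boolean => {
      try {
        return stores.flags(key);
      } catch {
        return false; // default-safe: a failed flag read never enables a feature
      }
    },
    config: (key: string): string => stores.config(key),
    setting: (userId: string, key: string): string => stores.settings(userId, key),
  };
}
```

The facade is deliberately boring; the value is that the category decision is made once, at wiring time, instead of at every call site.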
## A small test you can run today
Open your flag dashboard. Pick ten flags at random. For each, answer in one sentence:
- What does the “off” state of this flag mean?
- When will this flag be deleted?
- Who owns it?
If you can’t answer all three for at least eight of the ten, you don’t have a feature flag system anymore. You have a config dump with a UI. The fix isn’t another tool — it’s drawing the lines, deleting what doesn’t belong, and routing each category to its proper home.
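If your flag system exposes metadata, the eight-of-ten check can even run as code. The `FlagAudit` fields below are assumptions about what your dashboard records:

```typescript
// Sketch: a flag "passes" the audit if all three questions have answers.
// The audit passes overall if at least 80% of sampled flags do.
interface FlagAudit {
  key: string;
  offStateMeaning?: string; // what does "off" mean?
  deletionDate?: string;    // when will it be deleted?
  owner?: string;           // who owns it?
}

function auditPasses(flags: FlagAudit[]): boolean {
  const answered = flags.filter(
    (f) => f.offStateMeaning && f.deletionDate && f.owner
  ).length;
  return answered >= Math.ceil(flags.length * 0.8);
}
```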
Most of the value of a feature flag dashboard comes from what isn’t in it. Keep it that way and the dashboard stays useful for the thing it’s actually for: shipping features safely and turning them off when they break.
ShipSilently is a managed feature flag service built around the things on the left side of the table — temporary, codepath-scoped, owner-tracked, expiry-aware. We have opinions about the other categories too, but we don’t pretend our flag dashboard is the right home for them. Try it free.