Vault

HashiCorp Vault is a dedicated secrets management tool that stores credentials, API keys, certificates, and other sensitive values in an encrypted, access-controlled store. Applications request the secrets they need at runtime using the Vault API instead of reading them from files or environment variables. Every access request is logged, and secrets can be automatically renewed or replaced on a schedule without redeploying the application.
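As a minimal illustration of that runtime access, here is what storing and fetching a static secret looks like with the Vault CLI (assumes a running, unsealed server reachable at VAULT_ADDR and an authenticated token; the path and key names are made up for the example):

```shell
# Store a static secret in the KV v2 engine ("secret/" is the default mount)
vault kv put secret/myapp/config db_password="s3cr3t" api_key="abc123"

# An application or operator reads a single field back at runtime
vault kv get -field=db_password secret/myapp/config
```

The same operations are available over HTTP, which is how applications typically consume secrets instead of shelling out to the CLI.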

✅ When is it appropriate

Vault is suitable if most of the following apply:

  • the application runs in production and a breach that exposes a database password or API key would cause real harm to users or the business
  • a security audit or regulatory requirement such as SOC 2, PCI-DSS, or HIPAA requires a log showing which service accessed which secret and when
  • database credentials or cloud API keys must be replaced on a schedule or immediately after a suspected breach without redeploying the application
  • multiple services or teams need access to secrets but each must be restricted to only the secrets it is allowed to read
  • the application runs on Kubernetes or a cloud platform that supports Vault's native authentication methods such as the Kubernetes auth backend or AWS IAM

Vault acts as the single source of truth for all secrets. When an application starts, it authenticates to Vault using its identity (a Kubernetes service account, an AWS IAM role, or an AppRole) and receives a short-lived token. It uses that token to fetch only the secrets it is allowed to read. If a secret is compromised, it can be revoked in Vault and a new one issued without touching any code or restarting any service.
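On Kubernetes, for example, the login exchange sketched above looks roughly like this (the role name is illustrative; the service-account token path is the standard mount inside a pod):

```shell
# The pod presents its service-account JWT to Vault's Kubernetes auth backend
# and receives a short-lived Vault token in the response.
vault write auth/kubernetes/login \
    role=myapp \
    jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"

# That token can then read only the paths its role's policy allows
VAULT_TOKEN="<token from the login response>" vault kv get secret/myapp/config
```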

❌ When is it NOT appropriate

Vault may not be ideal if:

  • the project has fewer than a handful of secrets that never change, where a platform-injected environment variable is sufficient
  • there is no one on the team available to deploy, configure, unseal, and monitor a Vault cluster as a piece of production infrastructure
  • the application is a short-lived prototype or internal tool with no user data and no compliance requirements
  • the cloud platform already provides a managed secrets service such as AWS Secrets Manager or Azure Key Vault that covers all current requirements without self-hosting

Vault requires running at least one always-available server process, initialising and unsealing the cluster, defining policies for every application that needs secrets, and monitoring the cluster for availability. For a project with two environment variables and one developer, this infrastructure cost is higher than the security benefit it provides.
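To make that operational cost concrete, the one-time initialisation and the unseal step that follows every restart look roughly like this (the key-share counts shown are the common defaults, not a recommendation):

```shell
# One-time initialisation: prints the unseal key shares and an initial root token
vault operator init -key-shares=5 -key-threshold=3

# After every restart the cluster must be unsealed with a quorum of key shares;
# this command is run once per share until the threshold is reached
vault operator unseal
```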

👍 Advantages

  • all secrets are stored encrypted in one place; no application ever reads a secret from a file or an environment variable set by a human
  • Vault can generate a new database password, hand it to the application, and schedule its expiry automatically without any manual step
  • every read and write of every secret is written to an audit log with the identity of the caller, the time, and the outcome
  • a policy controls which secrets each application or team can access; a compromised service cannot read secrets it was never granted access to
  • Vault integrates natively with Kubernetes, AWS, GCP, Azure, and most CI/CD platforms so applications authenticate without hardcoded credentials
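The dynamic credential generation mentioned above is a single read once the database secrets engine is configured (the role name here is hypothetical):

```shell
# Vault generates a fresh username/password pair bound to a lease;
# the credentials are revoked automatically when the lease expires.
vault read database/creds/myapp-role
```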

👎 Disadvantages

  • Vault itself must be deployed, kept available, unsealed after restart, backed up, and monitored; if it goes down, no application can fetch secrets
  • writing correct policies, configuring auth backends, and enabling secrets engines requires familiarity with Vault's own configuration language and CLI
  • a Vault cluster running in HA mode with integrated storage requires at least three nodes to maintain quorum
  • managed cloud alternatives such as AWS Secrets Manager or Azure Key Vault cover most use cases without the operational burden of self-hosting

🛠️ Typical use cases

  • production backends that connect to databases, third-party APIs, or cloud services using credentials that must not appear in source code or environment files
  • Kubernetes-based systems where each pod receives a short-lived token scoped to only the secrets it needs, using the Vault Agent sidecar or the Vault Secrets Operator
  • applications in regulated industries where SOC 2, PCI-DSS, or HIPAA audits require a timestamped log of who accessed which credential
  • platforms where database passwords are generated per-application and expire automatically using Vault's database secrets engine

⚠️ Common mistakes (anti-patterns)

  • granting a policy that allows access to all secret paths, such as secret/*, instead of restricting each application to the exact paths it needs, so that a compromised service can read every secret in the store
  • using static long-lived secrets instead of Vault's dynamic secrets engine for databases, so the same password is reused indefinitely rather than being regenerated and expired per lease
  • disabling or ignoring the audit log device, removing the ability to detect when a secret was accessed by an unauthorised caller
  • running a single Vault node without HA mode or regular snapshots, so a server restart or disk failure causes a complete outage for every application that depends on secrets
  • keeping the initial root token alive after setup, or storing unseal keys in a location that every developer can read, which defeats the purpose of the seal mechanism

Overly broad policies are one of the most common misconfigurations. A policy that grants read access to secret/* allows any application with that policy to read every secret in the store. When one of those applications is compromised, the attacker can enumerate and download all secrets. Write one policy per application and limit each to the exact path it needs.
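A correctly scoped policy, written in Vault's HCL policy language, might look like the following sketch (the paths are illustrative; note that KV v2 inserts "data/" into the API path, so a policy for secrets stored under secret/myapp must reference secret/data/myapp):

```shell
# One policy per application, limited to its own subtree, read-only
vault policy write myapp-policy - <<'EOF'
path "secret/data/myapp/*" {
  capabilities = ["read"]
}
EOF
```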

💡 How to build on it wisely

Recommended approach:

  1. Start with the KV v2 secrets engine for storing static secrets. For each application, create a dedicated policy that grants read access only to the paths that application needs, nothing broader.
  2. Use a platform-native auth method so applications never need a hardcoded Vault token. On Kubernetes use the Kubernetes auth backend; on AWS use IAM auth; on bare metal use AppRole with a short-lived secret ID.
  3. Enable the database secrets engine for any service that connects to a database. Vault will generate a unique username and password per application lease and revoke it automatically when the lease expires.
  4. Configure at least one audit log device on day one. The file audit device writes a JSON entry for every request and response. Ship those logs to your centralised logging system so access can be queried during an incident.
  5. Run Vault in HA mode with Raft integrated storage using at least three nodes. Take daily snapshots with vault operator raft snapshot save and test restoring from a snapshot before going to production.
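The five steps above can be sketched as CLI commands. This is a hedged outline under stated assumptions, not a drop-in setup: mount paths, role names, TTLs, and the database connection details are placeholders, and the database engine also needs a connection configured under database/config before roles work.

```shell
# 1. Static secrets engine plus a per-application read-only policy
vault secrets enable -path=secret kv-v2
vault policy write myapp-policy - <<'EOF'
path "secret/data/myapp/*" { capabilities = ["read"] }
EOF

# 2. Platform-native auth: bind a Kubernetes service account to that policy
vault auth enable kubernetes
vault write auth/kubernetes/role/myapp \
    bound_service_account_names=myapp \
    bound_service_account_namespaces=default \
    policies=myapp-policy ttl=15m

# 3. Dynamic database credentials with a short lease
vault secrets enable database
vault write database/roles/myapp-role \
    db_name=mydb default_ttl=1h max_ttl=24h \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}';"

# 4. Audit log from day one: one JSON entry per request and response
vault audit enable file file_path=/var/log/vault_audit.log

# 5. Daily Raft snapshot (run from cron or a scheduled job), then test restores
vault operator raft snapshot save backup.snap
```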

If the team spends more time maintaining Vault than building the product, or the cloud platform already provides a managed service such as AWS Secrets Manager that covers all current requirements, those are concrete signals to use the managed alternative. Return to self-hosted Vault only when auditing, dynamic secret generation, or cross-cloud secret federation become real requirements.
