SumGuy's Ramblings

HashiCorp Vault: Stop Hardcoding Secrets Like It's 2012

The .env File Is Not a Secrets Manager

You know the pattern. It starts reasonably: you need a database password in your application, so you put it in a .env file and add .env to .gitignore. Then you need the same password in a second service, so you copy it. Then someone deploys from a different machine and the .env file isn’t there, so they hardcode it in the config. Then it ends up in a Docker Compose file that gets committed. Then someone makes the repo public for a demo.

AWS_SECRET_KEY=supersecretpassword123 with 3,247 commits of history, accessible to anyone who finds the repo.

This is not hypothetical. It happens constantly, to developers who absolutely know better but are solving a different problem in the moment. The fundamental issue isn’t carelessness — it’s that there’s no good alternative that’s easy enough to use.

HashiCorp Vault is that alternative. It's a secrets management system that stores secrets encrypted, controls access through policies, audit-logs every secret access, and, most interestingly, generates dynamic secrets that expire automatically. Your application never needs a long-lived credential again.


Vault Concepts Before You Get Lost

Secrets Engines: Pluggable backends that store or generate secrets. KV (key-value) stores static secrets. The database engine generates short-lived database credentials. The PKI engine is a certificate authority. Each mount point is an instance of an engine.

Auth Methods: How clients authenticate to Vault. AppRole (applications), Kubernetes (pods), AWS IAM, GitHub, LDAP, userpass. After authenticating, you get a token.

Tokens: The credential Vault itself understands. Every authenticated entity gets one, with policies attached, a TTL, and optional usage limits.

Policies: HCL documents that define what paths a token can read/write. Tokens can only access paths their policies allow.

Leases: Dynamic secrets come with leases — expiration times. When a lease expires, the secret is revoked (the database user is dropped, the certificate lands on the CRL). This is the key insight of Vault.


Docker Compose Setup

# docker-compose.yml
version: '3.8'

services:
  vault:
    image: hashicorp/vault:latest
    container_name: vault
    ports:
      - "8200:8200"
    environment:
      VAULT_ADDR: 'http://127.0.0.1:8200'  # For the CLI inside the container
    cap_add:
      - IPC_LOCK              # Prevents memory from being swapped to disk
    volumes:
      - vault-data:/vault/file
      - ./vault-config:/vault/config
    command: vault server -config=/vault/config/vault.hcl
    restart: unless-stopped

volumes:
  vault-data:

For development, the simpler dev mode:

services:
  vault:
    image: hashicorp/vault:latest
    container_name: vault
    ports:
      - "8200:8200"
    environment:
      VAULT_DEV_ROOT_TOKEN_ID: 'root'
      VAULT_DEV_LISTEN_ADDRESS: '0.0.0.0:8200'
    cap_add:
      - IPC_LOCK
    command: vault server -dev

Dev mode starts unsealed with a known root token. Don't use it in production — it stores everything in memory and loses it all on restart.

Production Vault Configuration

# vault-config/vault.hcl
storage "file" {
  path = "/vault/file"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = "true"  # Use a reverse proxy for TLS in production
}

ui = true
api_addr = "http://vault:8200"

Initializing and Unsealing

export VAULT_ADDR='http://localhost:8200'

# Initialize Vault (first run only)
vault operator init -key-shares=5 -key-threshold=3

# This outputs:
# Unseal Key 1: xxxxx
# Unseal Key 2: xxxxx
# Unseal Key 3: xxxxx
# Unseal Key 4: xxxxx
# Unseal Key 5: xxxxx
# Initial Root Token: hvs.xxxxx
#
# SAVE THESE. STORE THEM SEPARATELY. SERIOUSLY.

Vault starts sealed — the data on disk is encrypted and nothing can read it. Unsealing requires providing enough key shares to reconstruct the master key (3 of 5 in this example). This is Shamir's Secret Sharing.
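To see why any 3 of 5 shares suffice, here's a toy split/combine in Python. This is illustrative only — Vault's real implementation works byte-wise over a small binary field, and the prime and function names here are my own:

```python
import random

P = 2**127 - 1  # a large prime; all arithmetic is in the field GF(P)

def split(secret: int, shares: int = 5, threshold: int = 3):
    """Embed the secret as f(0) of a random degree-(threshold-1)
    polynomial and hand out `shares` points on it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, shares + 1)]

def combine(points):
    """Recover f(0) by Lagrange interpolation; any `threshold` points work."""
    secret = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * xj % P
                den = den * (xj - xi) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

master_key = random.randrange(P)
shares = split(master_key)
assert combine(shares[:3]) == master_key  # first three shares work
assert combine(shares[2:]) == master_key  # so do the last three
```

With only two points, infinitely many quadratics fit, so fewer than `threshold` shares reveal nothing about f(0) — which is the whole point of the scheme.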

# Unseal (run 3 times with 3 different keys)
vault operator unseal KEY1
vault operator unseal KEY2
vault operator unseal KEY3

# Check status
vault status

# Login with root token (for initial setup)
vault login hvs.xxxxx

After unseal, Vault is operational. On restart, you must unseal again. This is annoying for automated deployments — see Auto-Unseal below.
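If you're scripting around restarts, Vault's unauthenticated /v1/sys/health endpoint encodes the server's state in the HTTP status code, which makes for a trivial container healthcheck. A small lookup table (the function name is mine; the codes are Vault's documented ones):

```python
# GET /v1/sys/health needs no token; the HTTP status code is the state.
HEALTH_STATES = {
    200: "initialized, unsealed, active",
    429: "unsealed, standby",
    472: "disaster recovery replication secondary",
    473: "performance standby",
    501: "not initialized",
    503: "sealed",
}

def vault_state(status_code: int) -> str:
    """Translate a /v1/sys/health HTTP status code into a readable state."""
    return HEALTH_STATES.get(status_code, f"unexpected status {status_code}")

print(vault_state(503))  # a freshly restarted Vault reports: sealed
```

A Docker healthcheck that curls this endpoint will go unhealthy the moment Vault comes back sealed, which is exactly when you want paging.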


KV v2: Static Secrets with Versioning

The KV (key-value) secrets engine stores arbitrary key-value pairs with versioning. KV v2 keeps previous versions of secrets, allowing rollback.

# Enable KV v2 at path 'secret/'
vault secrets enable -path=secret kv-v2

# Store a secret
vault kv put secret/myapp/database \
  username=myapp_user \
  password=SuperSecretPassword123 \
  host=postgres:5432 \
  dbname=myapp

# Read it back
vault kv get secret/myapp/database

# Read just the password
vault kv get -field=password secret/myapp/database

# Update (creates new version, keeps old)
vault kv put secret/myapp/database password=NewPassword456

# See all versions
vault kv metadata get secret/myapp/database

# Get specific version
vault kv get -version=1 secret/myapp/database

# Delete current version (data recoverable)
vault kv delete secret/myapp/database

# Permanently delete all versions
vault kv metadata delete secret/myapp/database

AppRole: Auth for Your Applications

UserPass auth is for humans. AppRole is for applications — a machine-friendly auth method using a RoleID and SecretID.

# Enable AppRole
vault auth enable approle

# Create a policy for your app
vault policy write myapp-policy - << 'EOF'
path "secret/data/myapp/*" {
  capabilities = ["read", "list"]
}

path "secret/metadata/myapp/*" {
  capabilities = ["read", "list"]
}
EOF

# Create an AppRole
vault write auth/approle/role/myapp \
  token_policies="myapp-policy" \
  token_ttl=1h \
  token_max_ttl=4h \
  secret_id_ttl=24h \
  secret_id_num_uses=10

# Get RoleID (this is not secret, embed in app config)
vault read auth/approle/role/myapp/role-id

# Get SecretID (this IS secret, generate fresh for each deployment)
vault write -f auth/approle/role/myapp/secret-id

In your application:

import hvac

client = hvac.Client(url='http://vault:8200')

# Authenticate with AppRole
client.auth.approle.login(
    role_id='your-role-id',
    secret_id='your-secret-id'
)

# Read a secret
secret = client.secrets.kv.v2.read_secret_version(
    path='myapp/database',
    mount_point='secret'
)

# KV v2 responses nest the payload under data.data
db_password = secret['data']['data']['password']

Or with the Vault CLI:

vault write auth/approle/login role_id="..." secret_id="..."
# Returns a token

VAULT_TOKEN=the_token vault kv get secret/myapp/database

Dynamic Database Credentials: The Mind-Blowing Part

This is the feature that changes how you think about credentials. Instead of giving your application a static database password, Vault generates a fresh username and password with a TTL. When the TTL expires, Vault deletes the user from the database. Your application never has a long-lived credential.

# Enable database secrets engine
vault secrets enable database

# Configure PostgreSQL connection
vault write database/config/myapp-postgres \
  plugin_name=postgresql-database-plugin \
  allowed_roles="myapp-role" \
  connection_url="postgresql://{{username}}:{{password}}@postgres:5432/myapp?sslmode=disable" \
  username="vault_admin" \
  password="vault_admin_password"

# Create a role that defines what credentials look like
vault write database/roles/myapp-role \
  db_name=myapp-postgres \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}' INHERIT; GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
  default_ttl="1h" \
  max_ttl="24h"

Now, when your application needs a database connection:

# Generate dynamic credentials
vault read database/creds/myapp-role

# Output:
# lease_id           database/creds/myapp-role/AbCdEfGhIjKl
# lease_duration     1h
# lease_renewable    true
# password           A1B2C3D4...
# username           v-approle-myapp-rol-AbCdEfGhIj

This username exists in PostgreSQL right now. In one hour, Vault revokes it — the user is deleted from the database. If your application gets compromised, the attacker’s database credentials expire in at most one hour. If you revoke the Vault lease immediately, they expire now.

import hvac

client = hvac.Client(url='http://vault:8200', token='your_app_token')

# Get fresh database credentials
creds = client.secrets.database.generate_credentials(name='myapp-role')
db_user = creds['data']['username']
db_pass = creds['data']['password']

# Connect to database with these credentials
# ... your database code here ...
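Those credentials are only valid for the lease duration, so a long-running service has to renew the lease (or fetch fresh credentials) before it expires. A minimal sketch of the timing logic, assuming the `client` and lease from above; the two-thirds fraction is a common rule of thumb, not a Vault requirement:

```python
import time

def renew_after(lease_duration: int, fraction: float = 2 / 3) -> float:
    """Seconds to wait before renewing: a chunk of the TTL, leaving
    headroom for retries before the credentials are revoked."""
    return lease_duration * fraction

def keep_renewed(client, lease_id: str, lease_duration: int):
    """Renew a Vault lease forever (a sketch: no error handling or backoff)."""
    while True:
        time.sleep(renew_after(lease_duration))
        resp = client.sys.renew_lease(lease_id=lease_id)
        lease_duration = resp["lease_duration"]  # TTL granted on this renewal
```

Renewal can extend a lease only up to `max_ttl` (24h in the role above); after that the service must request brand-new credentials and reconnect.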

PKI Secrets Engine

Vault can be your internal certificate authority. Issue certificates on demand with appropriate TTLs.

# Enable PKI engine
vault secrets enable pki
vault secrets tune -max-lease-ttl=87600h pki

# Generate root CA
vault write pki/root/generate/internal \
  common_name="My Internal CA" \
  ttl=87600h

# Configure URLs
vault write pki/config/urls \
  issuing_certificates="http://vault:8200/v1/pki/ca" \
  crl_distribution_points="http://vault:8200/v1/pki/crl"

# Create an intermediate CA (recommended)
vault secrets enable -path=pki_int pki
vault secrets tune -max-lease-ttl=43800h pki_int

# Generate the intermediate's CSR, sign it with the root, install the result
vault write -field=csr pki_int/intermediate/generate/internal \
  common_name="My Internal Intermediate CA" > pki_intermediate.csr
vault write -field=certificate pki/root/sign-intermediate \
  csr=@pki_intermediate.csr format=pem_bundle ttl=43800h > intermediate.cert.pem
vault write pki_int/intermediate/set-signed certificate=@intermediate.cert.pem

# Create a role for issuing certificates
vault write pki_int/roles/my-domain \
  allowed_domains="internal,example.internal" \
  allow_subdomains=true \
  max_ttl=720h \
  key_type=ec \
  key_bits=256

# Issue a certificate
vault write pki_int/issue/my-domain \
  common_name="api.internal" \
  alt_names="api.example.internal" \
  ttl=24h

Your applications can request fresh certificates programmatically. Combine this with cert-manager in Kubernetes or consul-template for automated certificate rotation.
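Automated rotation ultimately reduces to one check: re-issue once less than some fraction of the certificate's lifetime remains. A sketch of that decision (the function name and the one-third headroom are my choices, not anything cert-manager or Vault prescribes):

```python
from datetime import datetime, timedelta, timezone

def should_rotate(issued: datetime, not_after: datetime,
                  now: datetime, headroom: float = 1 / 3) -> bool:
    """True once less than `headroom` of the cert's lifetime is left."""
    lifetime = not_after - issued
    return (not_after - now) < lifetime * headroom

# A 24h cert is fine at the halfway mark but due for re-issue at hour 17
issued = datetime(2024, 1, 1, tzinfo=timezone.utc)
not_after = issued + timedelta(hours=24)
assert not should_rotate(issued, not_after, issued + timedelta(hours=12))
assert should_rotate(issued, not_after, issued + timedelta(hours=17))
```

Run this check on a timer, call the issue endpoint when it fires, and reload whatever is serving the cert — that's the whole rotation loop.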


Auto-Unseal with Cloud KMS

The unseal ritual is fine for a home lab. For any production deployment where Vault needs to restart automatically (server reboot, container restart), you need auto-unseal.

# vault-config/vault.hcl with AWS KMS auto-unseal
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "arn:aws:kms:us-east-1:123456789:key/your-key-id"
}

# Or with GCP KMS
seal "gcpckms" {
  project    = "my-project"
  region     = "global"
  key_ring   = "vault-keyring"
  crypto_key = "vault-key"
}

When auto-unseal is configured, Vault encrypts its master key with the cloud KMS key. On startup, it calls the KMS API to decrypt the master key and unseals automatically. The master key never sits unencrypted on disk — only the cloud KMS can decrypt the stored copy.


The Migration Path

You don’t have to migrate everything at once. A practical progression:

  1. Start with KV v2 for static secrets that currently live in .env files or configs
  2. Add AppRole auth so applications can fetch their own secrets
  3. Replace database passwords with dynamic credentials for your most sensitive services
  4. Add PKI if you’re using internal certificates (and you should be)
  5. Consider auto-unseal when you need automatic restarts

The payoff compounds. Once your applications authenticate to Vault and retrieve their own secrets, you can rotate any credential without touching application configs. You can see exactly who accessed which secret and when. You can revoke specific credentials instantly if something is compromised.

AWS_SECRET_KEY=supersecretpassword123 can stay in 2012 where it belongs.

