January 5, 2025

Redis Key Naming Conventions That Scale

Corbin

Six months into the project, your Redis looks like this:

user_123
User:123
user:123:profile
123:user:settings
userSettings_123
cache_user_123
CACHE:USER:123

All for the same user. Created by different developers. At different times. With zero documentation.

Key naming seems trivial until it isn't. Bad naming makes debugging harder, automation impossible, and migrations painful.

Why Key Names Matter

1. Pattern Matching

Redis SCAN and pattern monitors rely on predictable key structures.

# This works
SCAN 0 MATCH user:*:profile

# This doesn't help when your keys are
# user_123_profile, User:123:Profile, profile:user:123

Inconsistent naming means you can't query your own data efficiently.
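Redis MATCH patterns are glob-style (`*`, `?`, character classes). A rough sketch of the semantics in JavaScript; `globToRegExp` is a hypothetical helper for illustration, not part of any Redis client, and it only handles `*` and `?`:

```javascript
// Convert a Redis-style glob pattern to a RegExp (simplified sketch:
// handles * and ?, not [...] classes or \ escapes).
function globToRegExp(pattern) {
  const escaped = pattern.replace(/[.+^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped.replace(/\*/g, ".*").replace(/\?/g, ".") + "$");
}

const matcher = globToRegExp("user:*:profile");
console.log(matcher.test("user:123:profile")); // true
console.log(matcher.test("user_123_profile")); // false
console.log(matcher.test("User:123:Profile")); // false -- matching is case-sensitive
```

The last two lines are exactly the failure mode above: keys that mean the same thing but never show up under the same pattern.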

2. Access Control (Redis ACL)

Redis 6+ ACLs support key patterns:

ACL SETUSER app-readonly on >pass ~user:* +@read

This only works if all user-related keys actually start with user:.

3. Memory Analysis

Want to know how much memory user data consumes vs cache vs sessions?

With good naming: point a pattern monitor at user:*, cache:*, and session:* for an instant breakdown.

With bad naming: Write a custom script to categorize keys by guessing.

4. Multi-Tenancy

If you ever need to isolate data by tenant, environment, or region:

tenant:acme:user:123
prod:cache:api:response:abc
region:us-west:rate:limit:api

Retrofit this into a chaotic key structure? Good luck.

The Standard Format

<namespace>:<entity>:<id>:<attribute>

Examples:

user:1001:profile
user:1001:sessions
cache:api:users:page:1
queue:email:pending
lock:order:process:5001
rate:limit:api:user:1001

Rules

Use colons as separators: user:1001:profile. This is the Redis convention, and it works with SCAN.
Lowercase everything: user:1001, not User:1001. Case-sensitive matching is error-prone.
Put the namespace first: cache:user:1001, not user:1001:cache. This enables pattern matching by category.
Put IDs after the entity type: user:1001, not 1001:user. Keys read consistently left to right.
No special characters: avoid user@1001 and user/1001. Some clients encode these in surprising ways.
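The rules above are easiest to follow when key construction goes through one helper instead of ad-hoc template strings. A minimal sketch; `redisKey` is a hypothetical function name, not a standard API:

```javascript
// Build a key from parts, enforcing the conventions above:
// lowercase, colon-separated, no special characters inside a part.
function redisKey(...parts) {
  return parts
    .map((part) => {
      const s = String(part).toLowerCase();
      if (!/^[a-z0-9_-]+$/.test(s)) {
        throw new Error(`Invalid key part: ${part}`);
      }
      return s;
    })
    .join(":");
}

redisKey("user", 1001, "profile");       // "user:1001:profile"
redisKey("cache", "api", "users", 1001); // "cache:api:users:1001"
// redisKey("user@1001") would throw: special characters are rejected
```

Centralizing this in one function means a naming mistake fails loudly in code review instead of quietly polluting the keyspace.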

Common Namespaces

user:*          # User-related data
session:*       # Session data
cache:*         # Cached values (should have TTL)
queue:*         # Job queues
lock:*          # Distributed locks
rate:*          # Rate limiting
temp:*          # Temporary data (always with TTL)
config:*        # Configuration values
counter:*       # Atomic counters

Practical Patterns

User Data

user:1001:profile          # Hash: name, email, avatar
user:1001:settings         # Hash: preferences
user:1001:sessions         # Set: active session IDs
user:1001:notifications    # List: recent notifications
user:1001:followers        # Set: follower user IDs
user:1001:following        # Set: following user IDs

Caching

cache:api:users:list:page:1            # API response cache
cache:api:users:detail:1001            # Single resource cache
cache:db:query:<hash>                  # Database query cache
cache:compute:recommendation:1001      # Computed value cache

Always set TTL on cache keys:

SET cache:api:users:list:page:1 "{...}" EX 300
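In application code, the TTL belongs in the same place as the key format: a cache-aside helper. A sketch assuming a node-redis-style `set(key, value, { EX })` signature; the `client` here is an in-memory stub (no TTL eviction) so the example is self-contained:

```javascript
// In-memory stand-in for a real Redis client (e.g. node-redis).
const client = {
  store: new Map(),
  async get(key) { return this.store.has(key) ? this.store.get(key) : null; },
  async set(key, value, opts) { this.store.set(key, value); }, // opts.EX ignored in the stub
};

// Cache-aside: return the cached page, or fetch and cache it with a TTL.
async function cachedUsersPage(page, fetchPage) {
  const key = `cache:api:users:list:page:${page}`;
  const hit = await client.get(key);
  if (hit !== null) return JSON.parse(hit);
  const fresh = await fetchPage(page);
  // Every cache key gets a TTL (EX = seconds) so stale entries expire on their own.
  await client.set(key, JSON.stringify(fresh), { EX: 300 });
  return fresh;
}
```

Usage would be `await cachedUsersPage(1, loadUsersFromDb)`; only the first call for a given page hits the database.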

Sessions

session:abc123xyz                      # Hash: userId, createdAt, data
session:user:1001:active               # Set: all active session IDs for user

Job Queues

If you're not using BullMQ/Sidekiq (which have their own conventions):

queue:email:pending     # List: jobs to process
queue:email:processing  # List: currently processing
queue:email:failed      # List: failed jobs
queue:email:job:12345   # Hash: individual job data
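The pending/processing split exists so a crashed worker doesn't lose a job: the move between the two lists should be a single atomic LMOVE (Redis 6.2+, formerly RPOPLPUSH). A sketch of the semantics; the `lists` Map is an in-memory stand-in so the example runs on its own:

```javascript
// In-memory stand-in for Redis lists.
const lists = new Map();
const list = (key) => lists.get(key) ?? lists.set(key, []).get(key);

// LMOVE src dst RIGHT LEFT: pop from the tail of src, push to the head of dst.
// In real Redis this is one atomic command, so the job is never "in flight"
// outside both lists.
function lmove(src, dst) {
  const job = list(src).pop();
  if (job !== undefined) list(dst).unshift(job);
  return job ?? null;
}

list("queue:email:pending").unshift("job:12345"); // LPUSH
const job = lmove("queue:email:pending", "queue:email:processing");
// The job now sits in queue:email:processing until the worker finishes it.
```

If the worker dies, the job is still visible in queue:email:processing and a reaper can re-queue it.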

Rate Limiting

rate:limit:api:user:1001               # String: request count
rate:limit:api:ip:192.168.1.1          # String: request count
rate:limit:login:user:1001             # String: attempt count

Use with INCR and EXPIRE:

INCR rate:limit:api:user:1001
EXPIRE rate:limit:api:user:1001 60
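One caveat: issued as two separate commands, a crash between INCR and EXPIRE can leave a counter with no TTL, so in production the pair should go inside MULTI/EXEC or a Lua script. A fixed-window sketch of the logic; the `store` Map stands in for Redis:

```javascript
// In-memory stand-in for Redis: key -> { count, expiresAt }.
const store = new Map();

// Fixed-window limiter: returns true when the request should be rejected.
function hitRateLimit(key, limit, windowSeconds, now = Date.now()) {
  let entry = store.get(key);
  if (!entry || entry.expiresAt <= now) {
    // Fresh window: this mirrors setting EXPIRE on the first INCR.
    entry = { count: 0, expiresAt: now + windowSeconds * 1000 };
    store.set(key, entry);
  }
  entry.count += 1; // INCR
  return entry.count > limit;
}

hitRateLimit("rate:limit:api:user:1001", 100, 60); // false until the limit is exceeded
```

Because the key encodes both the scope (api) and the subject (user:1001), the same function serves per-user, per-IP, and per-endpoint limits by changing only the key.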

Distributed Locks

lock:order:process:5001                # String: worker ID holding lock
lock:inventory:update:sku:ABC123       # String: lock holder

Use with SET NX EX:

SET lock:order:process:5001 "worker-42" NX EX 30
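Releasing the lock deserves the same care: a blind DEL can free a lock that expired and was re-acquired by another worker, so the release must check the stored worker ID. In real Redis that check-and-delete has to be one atomic Lua script run with EVAL; here the store is an in-memory stand-in to keep the sketch self-contained:

```javascript
// The atomic unlock script a real deployment would EVAL:
//   if redis.call("get", KEYS[1]) == ARGV[1] then
//     return redis.call("del", KEYS[1])
//   end
//   return 0
const store = new Map(); // stand-in for Redis

function acquire(key, workerId) { // SET key workerId NX EX 30
  if (store.has(key)) return false;
  store.set(key, workerId);
  return true;
}

function release(key, workerId) { // the Lua script above
  if (store.get(key) !== workerId) return false;
  store.delete(key);
  return true;
}

acquire("lock:order:process:5001", "worker-42"); // true
release("lock:order:process:5001", "worker-7");  // false -- not the holder
release("lock:order:process:5001", "worker-42"); // true
```

Storing the worker ID as the value (rather than just "1") is what makes the ownership check possible.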

Environment Prefixes

For multi-environment setups sharing Redis:

prod:user:1001:profile
staging:user:1001:profile
dev:user:1001:profile

Or use separate Redis databases (0-15):

SELECT 0  # Production
SELECT 1  # Staging

Note that Redis Cluster only supports database 0, so numbered databases don't carry over to clustered setups.

Better yet, use separate Redis instances entirely. Sharing one instance is cheaper, but the savings aren't worth the blast radius of a cross-environment mistake.

Versioning Keys

Schema changes happen. Old format:

user:1001:profile = {"name": "John"}

New format:

user:1001:profile = {"firstName": "John", "lastName": "Doe"}

Options:

Option 1: Key Version Prefix

v1:user:1001:profile
v2:user:1001:profile

Migrate gradually. Both versions can coexist.

Option 2: Version in Value

{
  "_version": 2,
  "firstName": "John",
  "lastName": "Doe"
}

Application code handles migration on read.
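A sketch of what that read-path migration might look like; `upgradeProfile` is a hypothetical helper that lifts the old v1 payload ({name}) to the v2 shape ({firstName, lastName}):

```javascript
// Migrate-on-read: always return the v2 shape, whatever is stored.
function upgradeProfile(raw) {
  const data = JSON.parse(raw);
  if (data._version === 2) return data; // already current, pass through
  // v1 had a single "name" field; split it for the v2 shape.
  const [firstName = "", lastName = ""] = (data.name ?? "").split(" ");
  return { _version: 2, firstName, lastName };
}

upgradeProfile('{"name": "John Doe"}');
// -> { _version: 2, firstName: "John", lastName: "Doe" }
```

After upgrading, the application can write the v2 payload back to the same key, so each profile is migrated at most once.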

Option 3: Feature Flag Migration

Deploy new code that writes both formats. Once all reads handle new format, stop writing old format. Clean up old keys.

Anti-Patterns

1. Putting Data in Key Names

# Bad - key name changes when data changes
user:1001:plan:pro
user:1001:plan:free

# Good - data in value
user:1001:plan = "pro"

2. Overly Deep Nesting

# Too deep
app:prod:service:auth:user:1001:session:abc123:token:access

# Better
session:abc123   # With userId and other data inside

3. Timestamps in Keys

# Bad - creates infinite keys
log:2025:01:15:10:30:45:request:abc

# Good - use Sorted Sets
ZADD logs:requests 1705312245 "{...}"

4. Sequential IDs Without Prefix

# Bad - what is 12345?
12345 = "{...}"

# Good
order:12345 = "{...}"

Migration Strategy

Already have messy keys? Here's how to fix it:

1. Audit Current State

redis-cli --scan | cut -d: -f1 | sort | uniq -c | sort -rn

This shows the most common prefixes (or lack thereof).
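The same audit can live in application code once the keys are collected (e.g. via SCAN). A small sketch; `prefixHistogram` is a hypothetical helper, equivalent to the shell pipeline above:

```javascript
// Count keys per top-level prefix, sorted by frequency. Keys with no colon
// (the "lack thereof" case) show up as their own entry.
function prefixHistogram(keys) {
  const counts = new Map();
  for (const key of keys) {
    const prefix = key.split(":")[0];
    counts.set(prefix, (counts.get(prefix) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

prefixHistogram(["user:1:profile", "user:2:profile", "cache:x", "user_3_profile"]);
// -> [["user", 2], ["cache", 1], ["user_3_profile", 1]]
```

Un-namespaced keys stand out immediately: any "prefix" that is a whole key is a candidate for migration.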

2. Document New Convention

Write it down. Get team agreement. No exceptions.

3. Migrate Gradually

// Dual-write during transition
async function setUserProfile(userId, data) {
  await redis.set(`user:${userId}:profile`, JSON.stringify(data));
  // Keep old key for backward compatibility
  await redis.set(`user_${userId}_profile`, JSON.stringify(data));
}

// Read from new, fall back to old
async function getUserProfile(userId) {
  let data = await redis.get(`user:${userId}:profile`);
  if (!data) {
    data = await redis.get(`user_${userId}_profile`);
    if (data) {
      // Migrate on read
      await redis.set(`user:${userId}:profile`, data);
    }
  }
  return data ? JSON.parse(data) : null;
}

4. Clean Up Old Keys

Once migration is complete and verified:

redis-cli --scan --pattern "user_*_profile" | xargs redis-cli UNLINK

UNLINK (Redis 4.0+) reclaims memory in the background, so large deletes don't block the server; on older versions, fall back to DEL.

Tooling Help

With consistent naming, tools become useful:

Pattern Monitors: user:* shows all user data. cache:* shows all caches. Instant visibility.

Memory Analysis: Which namespace uses most memory? Group by prefix.

Access Audit: Which patterns are accessed most? Optimize accordingly.

Bulk Operations: Delete all cache? cache:* pattern, done.


Key naming isn't glamorous. But it's the foundation of a maintainable Redis setup. Spend 30 minutes agreeing on conventions now, or spend hours debugging naming chaos later.

Your future self will thank you.
