Complete Guide to Redis: Beyond Simple Caching
Redis is more than a cache. Master pub/sub, streams, sorted sets, rate limiting, sessions, job queues, and geospatial queries with practical examples.
Redis Is Not Just a Cache
Most developers learn Redis as a key-value cache: SET, GET, EXPIRE. That barely scratches the surface of what Redis can do. Redis is an in-memory data structure server with pub/sub messaging, streams, sorted sets, HyperLogLog, geospatial indexes, and Lua scripting.
At TechSaaS, our shared Redis instance (just 4 GB of RAM for the container) handles caching, session management, rate limiting, and real-time messaging across a dozen services.
Data Structures You Should Know
Strings (The Basics)
SET user:1001:name "Alice"
GET user:1001:name # "Alice"
# Atomic counter
INCR page:home:views # 1
INCR page:home:views # 2
# Set with expiration
SETEX session:abc123 3600 '{"userId": 1001}'
# Set only if not exists (distributed lock)
SET lock:resource:42 "owner-1" NX EX 30
Hashes (Object Storage)
HSET user:1001 name "Alice" email "[email protected]" plan "pro"
HGET user:1001 name # "Alice"
HGETALL user:1001 # All fields and values
HINCRBY user:1001 login_count 1 # Atomic field increment
Perfect for storing objects without serialization overhead.
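In application code, the same hash pattern might look like the following; this is a minimal sketch assuming the redis-py client, with the connection passed in for testability (the helper names save_user, get_user, and record_login are ours, not part of any API):

```python
# Sketch of the hash pattern with a redis-py-style client (pip install redis).
# The client is injected so the helpers stay testable; decode_responses=True
# on the real client returns str instead of bytes.

def save_user(r, user_id: int, fields: dict) -> None:
    # One hash per object: each attribute is a field, no JSON round-trip
    r.hset(f"user:{user_id}", mapping=fields)

def get_user(r, user_id: int) -> dict:
    return r.hgetall(f"user:{user_id}")

def record_login(r, user_id: int) -> int:
    # HINCRBY is atomic even with many concurrent writers
    return r.hincrby(f"user:{user_id}", "login_count", 1)

# Usage against a live server:
#   import redis
#   r = redis.Redis(decode_responses=True)
#   save_user(r, 1001, {"name": "Alice", "plan": "pro"})
```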
Lists (Queues and Stacks)
# Queue (FIFO)
RPUSH queue:emails "msg1" "msg2" "msg3"
LPOP queue:emails # "msg1"
# Blocking pop (waits for new items)
BLPOP queue:emails 30 # Wait up to 30 seconds
# Recent items (keep last 100)
LPUSH recent:activity "user logged in"
LTRIM recent:activity 0 99
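A minimal job-queue sketch over the same list commands, assuming redis-py (the enqueue and work_once names are ours; BLPOP returns a (key, value) pair, or None on timeout):

```python
import json

# FIFO queue sketch with an injected redis-py-style client.

def enqueue(r, queue: str, payload: dict) -> None:
    # Producers push to the tail...
    r.rpush(queue, json.dumps(payload))

def work_once(r, queue: str, timeout: int = 30):
    # ...workers pop from the head; BLPOP blocks up to `timeout` seconds
    item = r.blpop(queue, timeout=timeout)
    if item is None:
        return None  # timed out, queue still empty
    _key, raw = item
    return json.loads(raw)
```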
Sorted Sets (Leaderboards and Rankings)
# Add scores
ZADD leaderboard 1500 "alice" 2200 "bob" 1800 "charlie"
# Top 3 players
ZREVRANGE leaderboard 0 2 WITHSCORES
# bob: 2200, charlie: 1800, alice: 1500
# Rank of a player
ZREVRANK leaderboard "alice" # 2 (0-indexed)
# Increment score
ZINCRBY leaderboard 500 "alice" # alice now 2000
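Wrapped in Python, the leaderboard boils down to two calls; a sketch assuming redis-py, where the commands map one-to-one onto ZINCRBY and ZREVRANGE (helper names are ours):

```python
# Leaderboard helpers over an injected redis-py-style client.

def add_score(r, board: str, player: str, points: float) -> float:
    # Atomic increment: safe with many concurrent writers
    return r.zincrby(board, points, player)

def top_n(r, board: str, n: int = 3):
    # Highest scores first, as (player, score) pairs
    return r.zrevrange(board, 0, n - 1, withscores=True)
```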
Sets (Unique Collections)
SADD online:users "user:1001" "user:1002" "user:1003"
SCARD online:users # 3
SISMEMBER online:users "user:1001" # 1 (true)
SREM online:users "user:1002" # Remove
# Intersection: users online AND premium
SADD premium:users "user:1001" "user:1004"
SINTER online:users premium:users # "user:1001"
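The same presence-tracking idea in Python; a sketch assuming redis-py, with the client injected (mark_online and online_premium_users are our own names):

```python
# Presence tracking with sets, over an injected redis-py-style client.

def mark_online(r, user: str) -> None:
    r.sadd("online:users", user)

def online_premium_users(r):
    # Server-side intersection: no need to fetch both sets to the client
    return r.sinter("online:users", "premium:users")
```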
Pattern 1: Rate Limiting
Sliding window rate limiter using sorted sets:
import redis
import time

r = redis.Redis()

def is_rate_limited(user_id: str, limit: int = 100, window: int = 60) -> bool:
    """Allow 'limit' requests per 'window' seconds."""
    key = f"ratelimit:{user_id}"
    now = time.time()
    pipe = r.pipeline()
    # Remove old entries outside the window
    pipe.zremrangebyscore(key, 0, now - window)
    # Add current request
    pipe.zadd(key, {str(now): now})
    # Count requests in window
    pipe.zcard(key)
    # Set key expiration
    pipe.expire(key, window)
    results = pipe.execute()
    request_count = results[2]
    return request_count > limit
Pattern 2: Session Management
import json
import secrets
import time

def create_session(user_id: int, metadata: dict) -> str:
    session_id = secrets.token_urlsafe(32)
    session_data = {
        "user_id": user_id,
        "created_at": time.time(),
        **metadata
    }
    r.setex(
        f"session:{session_id}",
        3600 * 24,  # 24-hour expiry
        json.dumps(session_data)
    )
    return session_id

def get_session(session_id: str) -> dict | None:
    data = r.get(f"session:{session_id}")
    if data:
        # Refresh TTL on access (sliding expiration)
        r.expire(f"session:{session_id}", 3600 * 24)
        return json.loads(data)
    return None

def destroy_session(session_id: str):
    r.delete(f"session:{session_id}")
Pattern 3: Pub/Sub for Real-Time Events
# Publisher
def publish_event(channel: str, event: dict):
    r.publish(channel, json.dumps(event))

publish_event("notifications:user:1001", {
    "type": "new_message",
    "from": "bob",
    "preview": "Hey, check out this PR..."
})

# Subscriber (separate process)
pubsub = r.pubsub()
pubsub.psubscribe("notifications:user:*")  # Pattern subscribe

for message in pubsub.listen():
    if message["type"] == "pmessage":
        channel = message["channel"].decode()
        data = json.loads(message["data"])
        user_id = channel.split(":")[-1]
        send_websocket_notification(user_id, data)
Pattern 4: Redis Streams (Event Log)
Streams are a durable, Kafka-like event log inside Redis, with consumer groups and acknowledgments:
# Produce events
r.xadd("events:orders", {
    "order_id": "ORD-001",
    "amount": "99.99",
    "customer": "alice"
})

# Create consumer group
r.xgroup_create("events:orders", "order-processors", id="0", mkstream=True)

# Consume with acknowledgment
while True:
    messages = r.xreadgroup(
        groupname="order-processors",
        consumername="worker-1",
        streams={"events:orders": ">"},
        count=10,
        block=5000  # Wait 5s for new messages
    )
    for stream, entries in messages:
        for msg_id, data in entries:
            print(f"Processing order: {data}")
            # Process...
            r.xack("events:orders", "order-processors", msg_id)
Pattern 5: Geospatial Queries
# Add locations
GEOADD restaurants 77.2090 28.6139 "delhi-hq"
GEOADD restaurants 72.8777 19.0760 "mumbai-branch"
GEOADD restaurants 80.2707 13.0827 "chennai-branch"
# Find restaurants within 500km of a point
GEOSEARCH restaurants FROMLONLAT 77.5946 12.9716 BYRADIUS 500 KM ASC
# Returns: chennai-branch
# Distance between two locations
GEODIST restaurants "delhi-hq" "mumbai-branch" km
# "1153.3"
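From Python, the same queries can be issued through redis-py's geo commands; a sketch assuming redis-py against Redis 6.2 or newer (GEOSEARCH was added in 6.2; add_location and nearby are our own names):

```python
# Geo queries via an injected redis-py-style client.

def add_location(r, key: str, lon: float, lat: float, member: str) -> None:
    # redis-py's geoadd takes a flat (lon, lat, member) sequence
    r.geoadd(key, (lon, lat, member))

def nearby(r, key: str, lon: float, lat: float, radius_km: float):
    # Members within radius_km of (lon, lat), nearest first
    return r.geosearch(key, longitude=lon, latitude=lat,
                       radius=radius_km, unit="km", sort="ASC")
```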
Pattern 6: Distributed Locks
import uuid

class RedisLock:
    def __init__(self, redis_client, resource, ttl=30):
        self.redis = redis_client
        self.resource = f"lock:{resource}"
        self.token = str(uuid.uuid4())
        self.ttl = ttl

    def acquire(self) -> bool:
        # SET NX succeeds only if the key does not already exist
        return bool(self.redis.set(
            self.resource, self.token, nx=True, ex=self.ttl
        ))

    def release(self):
        # Lua script ensures atomicity: only release if we own the lock
        script = """
        if redis.call('get', KEYS[1]) == ARGV[1] then
            return redis.call('del', KEYS[1])
        else
            return 0
        end
        """
        self.redis.eval(script, 1, self.resource, self.token)

# Usage
lock = RedisLock(r, "payment:process:order-123")
if lock.acquire():
    try:
        process_payment("order-123")
    finally:
        lock.release()
Memory Optimization
Redis stores everything in RAM. Optimize memory usage:
# Check memory usage per key
redis-cli MEMORY USAGE "user:1001"
# Set max memory and eviction policy
CONFIG SET maxmemory 256mb
CONFIG SET maxmemory-policy allkeys-lru
# Use compressed encodings for small hashes
# (Redis 7 renames these to hash-max-listpack-*; the old names remain as aliases)
CONFIG SET hash-max-ziplist-entries 128
CONFIG SET hash-max-ziplist-value 64
Tips:
- Use hashes instead of multiple string keys (saves ~50% memory)
- Set TTLs on everything; stale data is wasted RAM
- Use SCAN instead of KEYS in production (SCAN iterates incrementally and does not block the server)
- Monitor with INFO memory and redis-cli --bigkeys
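The SCAN tip in practice: a small cleanup sketch using redis-py's scan_iter, which walks the keyspace in cursor-sized steps instead of blocking the server like KEYS would (delete_matching is our own helper name):

```python
# Incremental keyspace cleanup over an injected redis-py-style client.

def delete_matching(r, pattern: str, batch: int = 500) -> int:
    # scan_iter hides the cursor loop; `count` is a hint per SCAN call
    deleted = 0
    for key in r.scan_iter(match=pattern, count=batch):
        r.delete(key)
        deleted += 1
    return deleted

# Usage against a live server:
#   import redis
#   r = redis.Redis()
#   delete_matching(r, "session:*")
```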
Redis is one of the most versatile tools in any developer's toolkit. Master its data structures and you will find yourself reaching for it far beyond simple caching.