Caching
The fastest database query is one that never hits the database. Everything here is about making that work reliably at scale.
Fundamentals
What a cache is, where it lives, and the core trade-offs between hit rate, freshness, and memory cost.
Hit Rate · Cache Miss · Freshness · Memory Cost
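The hit-rate/memory trade-off can be made concrete with a toy in-memory cache that counts its own hits and misses. The names here (`SimpleCache`, `backing_store`) are illustrative, not from any library:

```python
# Toy in-memory cache that tracks its own hit rate.
class SimpleCache:
    def __init__(self, backing_store):
        self.store = {}               # cached entries live in memory
        self.backing = backing_store  # "slow" source of truth, e.g. a database
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.store:         # cache hit: no trip to the backing store
            self.hits += 1
            return self.store[key]
        self.misses += 1              # cache miss: fetch and remember
        value = self.backing[key]
        self.store[key] = value
        return value

    @property
    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

db = {"user:1": "alice", "user:2": "bob"}
cache = SimpleCache(db)
cache.get("user:1")             # miss
cache.get("user:1")             # hit
cache.get("user:2")             # miss
print(f"{cache.hit_rate:.2f}")  # → 0.33
```

Every entry kept in `self.store` raises the hit rate of future reads at the cost of memory; freshness is the third axis this sketch ignores entirely, which is what the invalidation topic below covers.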
Writing Strategies
Read-through, write-through, write-behind, and write-around — how data gets into the cache and what breaks when it doesn't.
Write-Through · Write-Behind · Read-Through · Write-Around
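Two of these write paths can be sketched in a few lines each. This is a minimal illustration, not a production client; the class and method names are invented for the example:

```python
class WriteThroughCache:
    """Writes go to the cache AND the store synchronously: the cache is
    always coherent, but every write pays the store's latency."""
    def __init__(self, store):
        self.cache, self.store = {}, store

    def put(self, key, value):
        self.store[key] = value    # write to the source of truth first
        self.cache[key] = value    # then keep the cache coherent

class WriteAroundCache:
    """Writes skip the cache entirely; the next read misses and reloads.
    Useful when freshly written data is rarely read back soon."""
    def __init__(self, store):
        self.cache, self.store = {}, store

    def put(self, key, value):
        self.store[key] = value
        self.cache.pop(key, None)  # drop any stale copy

    def get(self, key):
        if key not in self.cache:  # read-through on miss
            self.cache[key] = self.store[key]
        return self.cache[key]
```

Write-behind is the riskier sibling: it acknowledges the write after updating only the cache and flushes to the store later, which is fast but can lose acknowledged writes if the cache node dies before the flush.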
Eviction Policies
LRU, LFU, FIFO, and TTL. What gets evicted when memory fills up and how to pick the right policy for your workload.
LRU · LFU · FIFO · TTL
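LRU is the most common of these and fits in a short sketch using Python's `collections.OrderedDict`: accesses move an entry to the end, and when capacity is exceeded the entry at the front (the least recently used) is evicted:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

c = LRUCache(2)
c.put("a", 1); c.put("b", 2)
c.get("a")           # "a" is now most recently used
c.put("c", 3)        # capacity exceeded: "b" is evicted, not "a"
print(list(c.data))  # → ['a', 'c']
```

LFU would evict by access count instead of recency, FIFO by insertion order regardless of use, and TTL by age; the right choice depends on whether your workload's hot keys stay hot.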
Population Strategies
Refresh-ahead and cache warming — how to keep caches populated without a thundering herd on every cold start.
Refresh-Ahead · Cache Warming · Lazy Loading · Pre-Population
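Refresh-ahead can be sketched as a cache that reloads an entry when its age passes a refresh threshold but before its TTL expires, so hot keys rarely go cold. The `loader` callable and injectable `clock` are choices made for this example, not a standard API:

```python
import time

class RefreshAheadCache:
    def __init__(self, loader, ttl=60.0, refresh_after=45.0,
                 clock=time.monotonic):
        self.loader = loader          # fetches a fresh value for a key
        self.ttl = ttl
        self.refresh_after = refresh_after
        self.clock = clock
        self.data = {}                # key -> (value, loaded_at)

    def get(self, key):
        now = self.clock()
        entry = self.data.get(key)
        if entry is not None:
            value, loaded_at = entry
            age = now - loaded_at
            if age < self.refresh_after:
                return value          # fresh enough, serve as-is
            if age < self.ttl:
                # still valid but ageing: reload before it expires
                value = self.loader(key)
                self.data[key] = (value, now)
                return value
        value = self.loader(key)      # expired or never loaded
        self.data[key] = (value, now)
        return value
```

A real implementation would run the ageing-entry reload asynchronously and return the current value immediately, so no reader ever waits on the refresh; doing it inline here keeps the sketch short.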
Cache Invalidation
TTL-based, event-driven, write-through, and versioning strategies. The hardest problem in caching is knowing when to throw data away.
TTL · Event-Driven · Versioning · Stale-While-Revalidate
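Versioning sidesteps the hardest part of "knowing when to throw data away": instead of finding and deleting stale entries, every key embeds a namespace version, and bumping the version invalidates the whole namespace in O(1). The names below are illustrative:

```python
class VersionedCache:
    def __init__(self):
        self.data = {}
        self.versions = {}   # namespace -> current version number

    def _key(self, namespace, key):
        v = self.versions.get(namespace, 1)
        return f"{namespace}:v{v}:{key}"

    def put(self, namespace, key, value):
        self.data[self._key(namespace, key)] = value

    def get(self, namespace, key):
        return self.data.get(self._key(namespace, key))

    def invalidate(self, namespace):
        # O(1) invalidation: bump the version instead of scanning keys
        self.versions[namespace] = self.versions.get(namespace, 1) + 1

c = VersionedCache()
c.put("users", "1", "alice")
c.invalidate("users")
print(c.get("users", "1"))  # → None: old entries are unreachable
```

The cost is that orphaned entries under old versions still occupy memory until the eviction policy reclaims them, which is why versioning is usually paired with a TTL.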
Distributed Caching
Consistent hashing, cache coherence, replication, two-level caching, and what happens when a cache node dies mid-traffic.
Consistent Hashing · Cache Coherence · Replication · Node Failure
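The reason a node dying mid-traffic is survivable comes down to consistent hashing: each node is hashed to many virtual points on a ring, and a key maps to the first node clockwise from its own hash, so losing one node remaps only that node's slice of keys. A minimal ring sketch (MD5 and 100 virtual nodes are arbitrary choices for the example):

```python
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=100):
        self.ring = []   # sorted list of (point, node)
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()
        self.points = [p for p, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        # first ring point clockwise from the key's hash (wrapping around)
        idx = bisect.bisect(self.points, self._hash(key)) % len(self.points)
        return self.ring[idx][1]

ring = HashRing(["cache-a", "cache-b", "cache-c"])
smaller = HashRing(["cache-a", "cache-b"])  # cache-c has died
moved = sum(ring.node_for(f"k{i}") != smaller.node_for(f"k{i}")
            for i in range(1000))
print(moved)  # roughly a third of the 1000 keys move, not all of them
```

With naive modulo hashing (`hash(key) % num_nodes`), losing one node would remap nearly every key and stampede the database; here only the dead node's share moves.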
Cache Problems
Cache stampede, cold start, penetration, and avalanche — the failure modes that turn a cache from a fix into the cause of the outage.
Stampede · Cold Start · Penetration · Avalanche
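The stampede in particular has a compact fix: on a miss, let only one thread per key recompute the value while the rest wait and then read the freshly cached result. A sketch with a per-key lock (the class name and lock-table design are illustrative):

```python
import threading

class StampedeGuardedCache:
    def __init__(self, loader):
        self.loader = loader
        self.data = {}
        self.locks = {}
        self.meta_lock = threading.Lock()  # guards the per-key lock table

    def _lock_for(self, key):
        with self.meta_lock:
            return self.locks.setdefault(key, threading.Lock())

    def get(self, key):
        if key in self.data:          # fast path: no locking on hits
            return self.data[key]
        with self._lock_for(key):     # one loader per key at a time
            if key not in self.data:  # re-check: a peer may have loaded it
                self.data[key] = self.loader(key)
            return self.data[key]

calls = []
def slow_load(key):
    calls.append(key)                 # stand-in for an expensive query
    return f"value-for-{key}"

cache = StampedeGuardedCache(slow_load)
threads = [threading.Thread(target=cache.get, args=("hot",))
           for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(len(calls))  # → 1  (ten concurrent readers, one database hit)
```

The double-check inside the lock is the essential line: without it, every waiting thread would still run the loader once the lock freed up, and the stampede would merely be serialized rather than prevented.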
Redis
Data structures, patterns, persistence options, streams, and the single-threaded event loop that makes Redis predictably fast.
Data Structures · Persistence · Streams · Event Loop
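The single-threaded model can be illustrated without Redis itself. The toy below (not Redis, and not its protocol) serializes every command from every client through one queue onto one worker thread, so each command runs to completion atomically with no locks in the data path:

```python
import queue
import threading

class MiniKV:
    def __init__(self):
        self.data = {}
        self.inbox = queue.Queue()   # commands from all "clients"
        self.worker = threading.Thread(target=self._loop, daemon=True)
        self.worker.start()

    def _loop(self):
        # the only thread that ever touches self.data
        while True:
            cmd, args, reply = self.inbox.get()
            if cmd == "STOP":
                reply.put(None)
                return
            if cmd == "SET":
                key, value = args
                self.data[key] = value
                reply.put("OK")
            elif cmd == "INCR":
                (key,) = args
                self.data[key] = self.data.get(key, 0) + 1
                reply.put(self.data[key])

    def execute(self, cmd, *args):
        reply = queue.Queue(maxsize=1)
        self.inbox.put((cmd, args, reply))
        return reply.get()           # block until the loop answers

kv = MiniKV()
threads = [threading.Thread(target=kv.execute, args=("INCR", "hits"))
           for _ in range(50)]
for t in threads: t.start()
for t in threads: t.join()
print(kv.execute("INCR", "hits"))  # → 51: no lost updates, no locks
kv.execute("STOP")
```

This is the core of Redis's predictability: command latency is the command's own cost plus queueing, never lock contention, which is also why one slow command (a huge `KEYS` scan, say) stalls everything behind it.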