A high-performance distributed caching layer with consistent hashing, LRU eviction, and TTL support.
| Metric | Value |
|---|---|
| Total SLOC | 7,316 |
| Source Files | 38 |
| .js | 3,868 |
| .md | 1,867 |
| .tsx | 992 |
| .ts | 340 |
| .json | 106 |
- Consistent Hashing: Even key distribution with virtual nodes
- LRU Eviction: Automatic eviction when memory/size limits are reached
- TTL Support: Time-to-live for automatic key expiration
- Distributed Architecture: Multiple cache nodes with a coordinator
- Admin Dashboard: Real-time monitoring of cluster health and statistics
- HTTP API: Simple REST API for cache operations
┌─────────────────────────────────────────────────────────────┐
│                     Frontend Dashboard                      │
│                    (React + TypeScript)                     │
└─────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                         Coordinator                         │
│                  (Consistent Hash Router)                   │
│                         Port: 3000                          │
└─────────────────────────────────────────────────────────────┘
         │                     │                     │
         ▼                     ▼                     ▼
┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐
│  Cache Node 1   │   │  Cache Node 2   │   │  Cache Node 3   │
│   Port: 3001    │   │   Port: 3002    │   │   Port: 3003    │
│                 │   │                 │   │                 │
│  ┌───────────┐  │   │  ┌───────────┐  │   │  ┌───────────┐  │
│  │ LRU Cache │  │   │  │ LRU Cache │  │   │  │ LRU Cache │  │
│  │   + TTL   │  │   │  │   + TTL   │  │   │  │   + TTL   │  │
│  └───────────┘  │   │  └───────────┘  │   │  └───────────┘  │
└─────────────────┘   └─────────────────┘   └─────────────────┘
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f
# Stop services
docker-compose down

Services will be available at:
- Frontend Dashboard: http://localhost:5173
- Coordinator API: http://localhost:3000
- Cache Node 1: http://localhost:3001
- Cache Node 2: http://localhost:3002
- Cache Node 3: http://localhost:3003
- Node.js 20+
- npm 10+
cd backend
npm install
# Start cache nodes (in separate terminals)
npm run dev:server1 # Port 3001
npm run dev:server2 # Port 3002
npm run dev:server3 # Port 3003
# Start coordinator
npm run coordinator # Port 3000

cd frontend
npm install
npm run dev # Port 5173

All operations go through the coordinator (port 3000), which routes requests to the appropriate cache node using consistent hashing.
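The same operations can be scripted from Node.js instead of curl. A minimal client sketch, assuming Node.js 18+ (global `fetch`); `cacheSet` and `cacheGet` are hypothetical helper names, and the 404-on-miss behavior is an assumption, not confirmed by this README:

```javascript
// Hypothetical coordinator client sketch (assumes Node.js 18+ global fetch).
const BASE = "http://localhost:3000";

// Store a value with an optional TTL via the coordinator's /cache/:key route.
async function cacheSet(key, value, ttl) {
  const res = await fetch(`${BASE}/cache/${encodeURIComponent(key)}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ value, ttl }),
  });
  return res.json();
}

// Fetch a value; a miss is assumed to come back as a 404.
async function cacheGet(key) {
  const res = await fetch(`${BASE}/cache/${encodeURIComponent(key)}`);
  if (res.status === 404) return null; // assumed miss behavior
  return res.json();
}

// Usage:
//   await cacheSet("mykey", "Hello World", 3600);
//   const entry = await cacheGet("mykey");
```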
curl -X POST http://localhost:3000/cache/mykey \
-H "Content-Type: application/json" \
-d '{"value": "Hello World", "ttl": 3600}'

Response:
{
"key": "mykey",
"ttl": 3600,
"message": "Value set successfully",
"_routing": { "nodeUrl": "http://localhost:3001" }
}

curl http://localhost:3000/cache/mykey

Response:
{
"key": "mykey",
"value": "Hello World",
"ttl": 3595,
"_routing": { "nodeUrl": "http://localhost:3001" }
}

curl -X DELETE http://localhost:3000/cache/mykey

curl -X POST http://localhost:3000/cache/counter/incr \
-H "Content-Type: application/json" \
-d '{"delta": 1}'

curl http://localhost:3000/keys

curl "http://localhost:3000/keys?pattern=user:*"

curl http://localhost:3000/cluster/locate/mykey

Response:
{
"key": "mykey",
"nodeUrl": "http://localhost:3001",
"allNodes": ["http://localhost:3001", "http://localhost:3002", "http://localhost:3003"]
}

curl -X POST http://localhost:3000/flush

curl http://localhost:3000/cluster/info

Response:
{
"coordinator": { "port": 3000, "uptime": 123.45 },
"ring": { "virtualNodes": 150, "activeNodes": [...] },
"nodes": [...]
}

curl http://localhost:3000/cluster/stats

Response:
{
"totalNodes": 3,
"totalHits": 1234,
"totalMisses": 56,
"totalSize": 5000,
"totalMemoryMB": "12.34",
"overallHitRate": "95.65",
"perNode": [...]
}

curl -X POST http://localhost:3000/admin/node \
-H "Content-Type: application/json" \
-d '{"url": "http://localhost:3004"}'

curl -X DELETE http://localhost:3000/admin/node \
-H "Content-Type: application/json" \
-d '{"url": "http://localhost:3004"}'

curl -X POST http://localhost:3000/admin/health-check

You can also access cache nodes directly (bypassing consistent hashing):
# Health check
curl http://localhost:3001/health
# Get node info
curl http://localhost:3001/info
# Get node stats
curl http://localhost:3001/stats

| Variable | Default | Description |
|---|---|---|
| PORT | 3001 | Server port |
| NODE_ID | node-{PORT} | Unique node identifier |
| MAX_SIZE | 10000 | Maximum number of cache entries |
| MAX_MEMORY_MB | 100 | Maximum memory usage in MB |
| DEFAULT_TTL | 0 | Default TTL in seconds (0 = no expiration) |
| Variable | Default | Description |
|---|---|---|
| PORT | 3000 | Coordinator port |
| CACHE_NODES | http://localhost:3001,... | Comma-separated list of cache node URLs |
| HEALTH_CHECK_INTERVAL | 5000 | Health check interval in ms |
| VIRTUAL_NODES | 150 | Number of virtual nodes per physical node |
# Terminal 1: Cache Node 1
PORT=3001 NODE_ID=node-1 node src/server.js
# Terminal 2: Cache Node 2
PORT=3002 NODE_ID=node-2 node src/server.js
# Terminal 3: Cache Node 3
PORT=3003 NODE_ID=node-3 node src/server.js
# Terminal 4: Coordinator
PORT=3000 CACHE_NODES=http://localhost:3001,http://localhost:3002,http://localhost:3003 node src/coordinator.js

# Scale cache nodes
docker-compose up -d --scale cache-node-1=1 --scale cache-node-2=1 --scale cache-node-3=1

Keys are distributed across nodes using consistent hashing with virtual nodes:
- Each physical node gets 150 virtual nodes on the hash ring
- Keys are hashed and assigned to the next clockwise virtual node
- When a node is added/removed, only ~1/N keys are remapped
- Virtual nodes ensure even distribution across physical nodes
When the cache reaches its limits (size or memory):
- Least Recently Used (LRU) entries are evicted first
- Memory is estimated based on JSON serialization size
- Eviction happens automatically on SET operations
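A minimal sketch of this policy, using a `Map`'s insertion-order iteration to track recency. This is a stand-in for the repo's `lru-cache.js`, not its actual implementation; the limits and the JSON-length memory estimate mirror the description above:

```javascript
// Illustrative LRU cache with size and (rough) memory limits.
class LruCache {
  constructor({ maxSize = 10000, maxMemoryBytes = 100 * 1024 * 1024 } = {}) {
    this.maxSize = maxSize;
    this.maxMemoryBytes = maxMemoryBytes;
    this.memoryBytes = 0;
    this.map = new Map(); // iteration order = insertion order, oldest first
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const entry = this.map.get(key);
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, entry);
    return entry.value;
  }

  set(key, value) {
    if (this.map.has(key)) this.delete(key);
    const bytes = JSON.stringify(value).length; // rough serialized-size estimate
    this.map.set(key, { value, bytes });
    this.memoryBytes += bytes;
    // Evict least recently used entries until both limits are satisfied.
    while (this.map.size > this.maxSize || this.memoryBytes > this.maxMemoryBytes) {
      this.delete(this.map.keys().next().value);
    }
  }

  delete(key) {
    const entry = this.map.get(key);
    if (entry) {
      this.memoryBytes -= entry.bytes;
      this.map.delete(key);
    }
  }
}
```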
Keys can be set with a TTL (Time-To-Live):
- Lazy expiration: Keys are checked and deleted on access
- Active expiration: Background process samples and expires keys
- TTL of 0 means no expiration
- TTL of -1 (in GET response) means the key has no expiration
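Lazy expiration can be sketched like this; it is a simplified illustration, not the repo's code (the stored-deadline layout is an assumption), and it omits the background sampling used for active expiration:

```javascript
// Illustrative TTL store: deadlines are checked and expired on access.
class TtlStore {
  constructor() {
    this.map = new Map();
  }

  set(key, value, ttlSeconds = 0) {
    // A TTL of 0 means no expiration, matching the semantics above.
    const expiresAt = ttlSeconds > 0 ? Date.now() + ttlSeconds * 1000 : null;
    this.map.set(key, { value, expiresAt });
  }

  get(key) {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt !== null && Date.now() >= entry.expiresAt) {
      this.map.delete(key); // lazy expiration: delete on access
      return undefined;
    }
    return entry.value;
  }

  // Remaining TTL in seconds; -1 means the key has no expiration.
  ttl(key) {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt === null) return -1;
    return Math.ceil((entry.expiresAt - Date.now()) / 1000);
  }
}
```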
The frontend dashboard provides:
- Dashboard: Overview of cluster health and statistics
- Keys: Browse, search, and manage cache keys
- Cluster: Manage nodes and view hash ring
- Test: Interactive testing of cache operations
distributed-cache/
├── backend/
│ ├── src/
│ │ ├── lib/
│ │ │ ├── consistent-hash.js # Consistent hashing implementation
│ │ │ └── lru-cache.js # LRU cache with TTL
│ │ ├── server.js # Cache node server
│ │ └── coordinator.js # Request router/coordinator
│ ├── package.json
│ ├── Dockerfile
│ └── Dockerfile.coordinator
├── frontend/
│ ├── src/
│ │ ├── components/ # React components
│ │ ├── routes/ # TanStack Router routes
│ │ ├── stores/ # Zustand stores
│ │ ├── services/ # API clients
│ │ └── types/ # TypeScript types
│ ├── package.json
│ └── Dockerfile
├── docker-compose.yml
├── architecture.md
├── system-design-answer.md
└── README.md
cd backend
npm test

MIT
- Consistent Hashing and Random Trees - The original paper introducing consistent hashing for distributed systems
- Scaling Memcache at Facebook - How Facebook scaled Memcached to handle billions of requests
- Redis Cluster Specification - Official documentation on Redis cluster architecture and hash slot distribution
- Memcached Internals - Understanding Memcached's slab allocator and LRU eviction
- A Guide to Consistent Hashing - Practical explanation of consistent hashing with virtual nodes
- Cache Invalidation Strategies - Overview of cache-aside, write-through, and write-behind patterns
- How Discord Stores Billions of Messages - Real-world caching and data storage at scale
- Dynamo: Amazon's Highly Available Key-value Store - Foundational paper on distributed key-value stores