Rate Limiter



🚀 Enterprise-Grade Rate Limiting

High-performance with Redis clustering support

Sub-microsecond latency • Zero allocations • Production monitoring


Features

  • 🚀 High Performance: ~100ns per request, zero allocations
  • 🔄 Distributed: Redis-backed for multi-instance deployments
  • 🛡️ Production Ready: Circuit breakers, health checks, monitoring
  • 🏗️ Clean Architecture: Interface-based design, easy testing
  • 📊 4 Algorithms: Token Bucket, Leaky Bucket, Sliding Window, Fixed Window

Installation

go get github.com/erfanmomeniii/ratelimiter

Quick Start

1. Basic Rate Limiting

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/erfanmomeniii/ratelimiter"
)

func main() {
    // Create a rate limiter: 100 requests max, 10 per second
    limiter, err := ratelimiter.NewTokenBucket(ratelimiter.TokenBucketConfig{
        Capacity:   100,  // Burst capacity
        RefillRate: 10,   // Refill rate per second
    })
    if err != nil {
        log.Fatal(err)
    }

    ctx := context.Background()

    // Check if request is allowed
    allowed, waitTime := limiter.Allow(ctx)
    if allowed {
        fmt.Println("✅ Request allowed")
        // Process your request here
    } else {
        fmt.Printf("❌ Rate limited, retry after %v\n", waitTime)
    }
}

2. Distributed Rate Limiting (Redis)

// For multiple service instances, use Redis
limiter, err := ratelimiter.NewRedisTokenBucket(ratelimiter.RedisTokenBucketConfig{
    Capacity:   1000, // Higher capacity for distributed
    RefillRate: 100,
    Key:        "api:global", // Shared key across instances
    Redis: ratelimiter.RedisConfig{
        Addr: "redis:6379",
    },
})

if err != nil {
    log.Fatal(err)
}

// Rate limit per user across all instances
allowed, waitTime := limiter.AllowWithKey(ctx, "user:123")
if !allowed {
    fmt.Printf("User rate limited globally, wait %v\n", waitTime)
}

3. HTTP Middleware

func main() {
    limiter, _ := ratelimiter.NewTokenBucket(ratelimiter.TokenBucketConfig{
        Capacity:   1000,
        RefillRate: 100,
    })

    // Apply rate limiting to all /api/ routes
    middleware := ratelimiter.HTTPMiddleware(ratelimiter.MiddlewareConfig{
        Limiter: limiter,
        KeyFunc: ratelimiter.KeyFuncs.ByIP, // Rate limit per IP
    })

    http.Handle("/api/", middleware(http.HandlerFunc(apiHandler)))
    http.ListenAndServe(":8080", nil)
}

func apiHandler(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte(`{"status": "ok"}`))
}

Algorithm Selection

Choose based on your traffic patterns:

Algorithm        Best For              Pros                          Cons              Performance
Token Bucket     APIs, bursty traffic  Allows bursts, smooth         Memory usage      ~114ns ⚡
Leaky Bucket     Traffic shaping       Constant output               Fixed rate        ~114ns ⚡
Sliding Window   Precise control       Accurate, no boundary issues  Higher CPU        ~315ns 🟡
Fixed Window     Simple cases          Easy, low overhead            Boundary attacks  ~219ns 🟡

Decision Guide

Do you need to handle traffic bursts?
├── Yes → Token Bucket (allows bursts while maintaining average rate)
└── No → Do you need precise rate limiting?
    ├── Yes → Sliding Window (accurate over rolling windows)
    └── No → Do you want constant output rate?
        ├── Yes → Leaky Bucket (smooths traffic)
        └── No → Fixed Window (simple, fast)

Algorithm Examples

Token Bucket - API Rate Limiting

// API Gateway: Allow bursts up to 1000 requests, refill at 100/sec
limiter, _ := ratelimiter.NewTokenBucket(ratelimiter.TokenBucketConfig{
    Capacity:   1000,  // Burst capacity
    RefillRate: 100,   // Steady rate
})

// Perfect for: REST APIs, GraphQL endpoints, user-facing services

Leaky Bucket - Traffic Shaping

// Database writes: Smooth traffic to prevent overload
limiter, _ := ratelimiter.NewLeakyBucket(ratelimiter.LeakyBucketConfig{
    Capacity: 1000, // Absorb spikes
    LeakRate: 100,  // Constant processing rate
})

// Perfect for: Database operations, external API calls, queue processing

Sliding Window - Financial Transactions

// Banking API: Precise control over 1-minute windows
limiter, _ := ratelimiter.NewSlidingWindow(ratelimiter.SlidingWindowConfig{
    WindowSize:  time.Minute,
    MaxRequests: 100,  // Max per rolling minute
})

// Perfect for: Financial systems, billing APIs, critical operations

Fixed Window - Simple Logging

// Log aggregation: Simple rate limiting for log processing
limiter, _ := ratelimiter.NewFixedWindow(ratelimiter.FixedWindowConfig{
    WindowSize:  time.Minute,
    MaxRequests: 1000, // Max per minute
})

// Perfect for: Logging, monitoring, non-critical operations

Configuration

Redis Setup

For distributed deployments across multiple service instances:

redisConfig := ratelimiter.RedisConfig{
    Addr:         "redis-cluster:6379",
    Password:     "your-password",     // optional
    DB:           0,                   // Redis database
    PoolSize:     20,                  // Connection pool
    DialTimeout:  5 * time.Second,
    ReadTimeout:  3 * time.Second,
    WriteTimeout: 3 * time.Second,
}

// Use Redis variant of any algorithm
limiter, _ := ratelimiter.NewRedisTokenBucket(ratelimiter.RedisTokenBucketConfig{
    Capacity:   10000,  // Higher capacity for distributed
    RefillRate: 1000,
    Key:        "api:global",
    Redis:      redisConfig,
})

Key Strategies

Choose keys based on your rate limiting scope:

// 1. Per User (most common)
limiter.AllowWithKey(ctx, "user:"+userID)

// 2. Per API Endpoint
limiter.AllowWithKey(ctx, "endpoint:"+r.URL.Path)

// 3. Per Client IP
limiter.AllowWithKey(ctx, "ip:"+getClientIP(r))

// 4. Combined (user + endpoint)
limiter.AllowWithKey(ctx, fmt.Sprintf("user:%s:endpoint:%s", userID, r.URL.Path))

// 5. Per Service Instance (for load balancing)
limiter.AllowWithKey(ctx, "service:"+serviceName+":user:"+userID)

Advanced Configuration

Circuit Breaker Integration

limiter, _ := ratelimiter.NewLimiterBuilder().
    TokenBucket(1000, 100).
    WithCircuitBreaker(ratelimiter.CircuitBreakerConfig{
        FailureThreshold: 10,
        RecoveryTimeout:  time.Minute,
    }).
    Build()

Adaptive Rate Limiting

limiter, _ := ratelimiter.NewLimiterBuilder().
    TokenBucket(100, 10).
    WithAdaptiveLimiting(ratelimiter.AdaptiveConfig{
        Enabled:          true,
        TargetUtilization: 0.8,  // Adjust to maintain 80% utilization
    }).
    Build()

Best Practices

🚀 Performance Optimization

Use In-Memory for Single Instance

// ✅ Good: Single service instance
limiter, _ := ratelimiter.NewTokenBucket(ratelimiter.TokenBucketConfig{
    Capacity:   1000,
    RefillRate: 100,
})
// Fast, no network overhead, zero allocations

Use Redis for Multiple Instances

// ✅ Good: Multiple service instances
limiter, _ := ratelimiter.NewRedisTokenBucket(ratelimiter.RedisTokenBucketConfig{
    Capacity:   10000,  // Higher capacity for distributed
    RefillRate: 1000,
    Redis:      redisConfig,
})
// Consistent rate limiting across all instances

Reuse Limiters

// ✅ Good: Reuse limiter instances
var apiLimiter ratelimiter.Limiter

func init() {
    apiLimiter, _ = ratelimiter.NewTokenBucket(ratelimiter.TokenBucketConfig{
        Capacity:   1000,
        RefillRate: 100,
    })
}

func handleRequest(w http.ResponseWriter, r *http.Request) {
    allowed, waitTime := apiLimiter.Allow(r.Context())
    // ... handle response
}

// ❌ Bad: Creating new limiter per request
func badHandler(w http.ResponseWriter, r *http.Request) {
    limiter, _ := ratelimiter.NewTokenBucket(config) // Wasteful!
    allowed, _ := limiter.Allow(r.Context())
}

πŸ›‘οΈ Production Readiness

Implement Proper HTTP Responses

func rateLimitedHandler(w http.ResponseWriter, r *http.Request) {
    allowed, waitTime := limiter.Allow(r.Context())

    if !allowed {
        // 429 Too Many Requests (RFC 6585); Retry-After header per RFC 7231
        w.Header().Set("Content-Type", "application/json")
        // Retry-After takes integer seconds; round up so clients don't retry too early
        w.Header().Set("Retry-After", fmt.Sprintf("%d", int(math.Ceil(waitTime.Seconds()))))
        w.Header().Set("X-RateLimit-Reset", waitTime.String())

        w.WriteHeader(http.StatusTooManyRequests)
        json.NewEncoder(w).Encode(map[string]interface{}{
            "error": "rate_limit_exceeded",
            "retry_after": waitTime.Seconds(),
            "retry_at": time.Now().Add(waitTime).Format(time.RFC3339),
        })
        return
    }

    // Process request normally
    handleRequest(w, r)
}

Graceful Degradation

func resilientHandler(w http.ResponseWriter, r *http.Request) {
    allowed, waitTime := limiter.Allow(r.Context())

    if !allowed {
        if waitTime > 30*time.Second {
            // System under extreme load - degrade gracefully
            serveCachedResponse(w, r)
            return
        }

        if waitTime > 5*time.Second {
            // High load - reduce response quality
            serveBasicResponse(w, r)
            return
        }

        // Normal rate limiting
        sendRateLimitResponse(w, r, waitTime)
        return
    }

    // Full service
    serveFullResponse(w, r)
}

Monitor Rate Limiting Effectiveness

func monitorRateLimiting(ctx context.Context) {
    ticker := time.NewTicker(time.Minute)
    defer ticker.Stop()

    for {
        select {
        case <-ctx.Done():
            return
        case <-ticker.C:
            stats := limiter.Stats()

            // Calculate metrics (guard against division by zero before any traffic)
            successRate := 1.0
            if stats.TotalRequests > 0 {
                successRate = float64(stats.TotalAllowed) / float64(stats.TotalRequests)
            }
            rejectionRate := 1.0 - successRate

            // Log or send to monitoring system
            log.Printf("Rate Limiter Stats: success=%.2f%% rejected=%.2f%% total=%d",
                successRate*100, rejectionRate*100, stats.TotalRequests)

            // Alert on high rejection rates
            if rejectionRate > 0.10 { // >10% rejected
                alertHighRejectionRate(rejectionRate)
            }
        }
    }
}

🔧 Operational Practices

Health Checks

func healthCheckHandler(w http.ResponseWriter, r *http.Request) {
    // Test rate limiter functionality
    limiter, ok := getLimiter("health")
    if !ok {
        http.Error(w, "limiter not found", http.StatusInternalServerError)
        return
    }

    // Quick functionality test
    allowed, _ := limiter.Allow(r.Context())
    if !allowed {
        limiter.Reset() // Reset if rate limited during health check
    }

    w.WriteHeader(http.StatusOK)
    w.Write([]byte("healthy"))
}

Configuration Management

type RateLimiterConfig struct {
    Algorithm     string
    Capacity      float64
    RefillRate    float64
    RedisAddr     string
    RedisPassword string
}

func loadRateLimiterConfig() (*RateLimiterConfig, error) {
    return &RateLimiterConfig{
        Algorithm:     getEnv("RATE_LIMITER_ALGORITHM", "token_bucket"),
        Capacity:      getEnvAsFloat("RATE_LIMITER_CAPACITY", 1000),
        RefillRate:    getEnvAsFloat("RATE_LIMITER_REFILL_RATE", 100),
        RedisAddr:     getEnv("REDIS_ADDR", "localhost:6379"),
        RedisPassword: getEnv("REDIS_PASSWORD", ""),
    }, nil
}

API Reference

Core Interface

All algorithms implement the same interface for consistency:

type Limiter interface {
    // Allow checks if a single request is allowed
    Allow(ctx context.Context) (allowed bool, waitTime time.Duration)

    // AllowN checks if N requests are allowed at once
    AllowN(ctx context.Context, n int) (allowed int, waitTime time.Duration)

    // Reset clears the limiter state
    Reset()

    // Stats returns current statistics
    Stats() Stats
}

Algorithm Constructors

In-Memory Algorithms

// Token Bucket: Best for bursty traffic
NewTokenBucket(TokenBucketConfig) (Limiter, error)

type TokenBucketConfig struct {
    Capacity      float64 // Maximum burst capacity
    RefillRate    float64 // Tokens added per second
    InitialTokens float64 // Starting tokens (default: Capacity)
}

// Leaky Bucket: Best for traffic shaping
NewLeakyBucket(LeakyBucketConfig) (Limiter, error)

type LeakyBucketConfig struct {
    Capacity     float64 // Maximum queue size
    LeakRate     float64 // Requests processed per second
    InitialWater float64 // Starting queue depth
}

// Sliding Window: Best for precise control
NewSlidingWindow(SlidingWindowConfig) (Limiter, error)

type SlidingWindowConfig struct {
    WindowSize  time.Duration // Total window duration
    MaxRequests int64         // Maximum requests per window
    BucketCount int           // Number of buckets for granularity
}

// Fixed Window: Best for simple cases
NewFixedWindow(FixedWindowConfig) (Limiter, error)

type FixedWindowConfig struct {
    WindowSize  time.Duration // Window duration
    MaxRequests int64         // Maximum requests per window
}

Distributed (Redis) Algorithms

Same interface, distributed across multiple instances:

NewRedisTokenBucket(RedisTokenBucketConfig) (Limiter, error)
NewRedisLeakyBucket(RedisLeakyBucketConfig) (Limiter, error)
NewRedisSlidingWindow(RedisSlidingWindowConfig) (Limiter, error)
NewRedisFixedWindow(RedisFixedWindowConfig) (Limiter, error)

type RedisConfig struct {
    Addr         string        // Redis address
    Password     string        // Optional password
    DB           int           // Redis database
    PoolSize     int           // Connection pool size
    DialTimeout  time.Duration // Connection timeout
    ReadTimeout  time.Duration // Read timeout
    WriteTimeout time.Duration // Write timeout
}

HTTP Middleware

func HTTPMiddleware(config MiddlewareConfig) func(http.Handler) http.Handler

type MiddlewareConfig struct {
    Limiter   Limiter                    // Rate limiter to use
    KeyFunc   func(*http.Request) string // Key generation function
    SkipFunc  func(*http.Request) bool   // Skip rate limiting function (optional)
}

// Built-in key functions
ratelimiter.KeyFuncs.ByIP                    // Rate limit by client IP
ratelimiter.KeyFuncs.ByUserID("X-User-ID")   // Rate limit by user ID header
ratelimiter.KeyFuncs.ByPath                  // Rate limit by request path

// Built-in skip functions
ratelimiter.SkipFuncs.SkipHealthChecks       // Skip /health, /ping, /status
ratelimiter.SkipFuncs.SkipStaticAssets       // Skip .css, .js, .png, etc.

Statistics

type Stats struct {
    TotalRequests int64         // Total requests processed
    TotalAllowed  int64         // Total requests allowed
    TotalLimited  int64         // Total requests rate limited
    CurrentTokens float64       // Current token count (token bucket)
    MaxTokens     float64       // Maximum token capacity
    LastRequest   time.Time     // Timestamp of last request
}

stats := limiter.Stats()
fmt.Printf("Success rate: %.1f%%\n",
    float64(stats.TotalAllowed)/float64(stats.TotalRequests)*100)

Performance Benchmarks

Real-world performance on 8-core system:

Algorithm              Ops/sec     Latency     Allocs
Token Bucket        11,902,159    103.9ns      0
Leaky Bucket        10,768,921    111.3ns      0
Fixed Window         5,578,986    213.7ns      0
Sliding Window       4,047,263    299.7ns      0

Performance Characteristics

  • ⚡ Latency: ~100ns best case, <300ns worst case
  • 🎯 Throughput: 4-12M operations per second
  • 🧠 Memory: Zero allocations, minimal heap usage
  • 🔒 Concurrency: Thread-safe with atomic operations
  • 📊 Scalability: Linear performance scaling

Redis Performance

  • Network Latency: ~1-5ms per operation
  • Lua Scripting: Atomic operations via Redis scripts
  • Connection Pooling: Efficient resource utilization
  • Cluster Support: Automatic failover and load balancing

Contributing

We welcome contributions! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Add tests for new functionality
  4. Ensure all tests pass (go test ./...)
  5. Run benchmarks to ensure performance is maintained
  6. Submit a pull request

Development Setup

# Clone and setup
git clone https://github.com/erfanmomeniii/ratelimiter.git
cd ratelimiter

# Install dependencies
go mod download

# Run tests and benchmarks
go test ./...
go test -bench=. ./...

# Run linter
golangci-lint run

License

MIT License - see LICENSE file for details.

About

Production-hardened rate limiting library for Go. Multiple algorithms, Redis distributed coordination, circuit breakers, and enterprise observability.
