A rate limiter for GoFrame, supporting both in-memory and Redis-based rate limiting strategies.

## Features
- ✅ Multiple Implementations: Supports both in-memory and Redis storage backends
- ✅ Sliding Window: Redis uses precise sliding window algorithm
- ✅ Concurrency Safe: In-memory version uses CAS atomic operations, Redis version uses Lua scripts
- ✅ Flexible Configuration: Supports custom key generation and error handling
- ✅ Standardized: Returns 429 Too Many Requests (RFC 6585) with conventional `X-RateLimit-*` response headers
- ✅ High Performance: No extra overhead for in-memory version, atomic operations for Redis version
## Installation

```bash
go get github.com/LanceAdd/glimiter@latest
```

## Quick Start

```go
import "github.com/LanceAdd/glimiter"

// Create a limiter: up to 100 requests per minute
limiter := glimiter.NewMemoryLimiter(100, time.Minute)

// Check whether the request is allowed
allowed, err := limiter.Allow(ctx, "user:123")
if err != nil {
    // handle storage/lookup error
}
if !allowed {
    // request is rate limited
}
```

## Middleware

### Rate limiting by IP

```go
s := g.Server()
limiter := glimiter.NewMemoryLimiter(100, time.Minute)

s.Group("/api", func(group *ghttp.RouterGroup) {
    group.Middleware(glimiter.MiddlewareByIP(limiter))
    group.GET("/users", handler)
})
```

### Rate limiting by API key

```go
limiter := glimiter.NewMemoryLimiter(1000, time.Hour)

s.Group("/api", func(group *ghttp.RouterGroup) {
    group.Middleware(glimiter.MiddlewareByAPIKey(limiter, "X-API-Key"))
    group.GET("/data", handler)
})
```

### Custom key and error handling

```go
limiter := glimiter.NewMemoryLimiter(50, time.Minute)

s.Group("/api", func(group *ghttp.RouterGroup) {
    group.Middleware(glimiter.Middleware(glimiter.MiddlewareConfig{
        Limiter: limiter,
        KeyFunc: func(r *ghttp.Request) string {
            // Custom key: combine IP and User-Agent
            return r.GetClientIp() + ":" + r.UserAgent()
        },
        ErrorHandler: func(r *ghttp.Request) {
            r.Response.WriteStatus(429)
            r.Response.WriteJson(g.Map{
                "error": "Rate limit exceeded",
                "retry": time.Now().Add(time.Minute).Unix(),
            })
        },
    }))
    group.GET("/resource", handler)
})
```

## Redis Limiter

```go
import (
    "github.com/LanceAdd/glimiter"
    "github.com/gogf/gf/v2/database/gredis"
)

// Create a Redis connection
redis, err := gredis.New(&gredis.Config{
    Address: "127.0.0.1:6379",
})

// Create a Redis-backed limiter
limiter := glimiter.NewRedisLimiter(redis, 100, time.Minute)

// Usage is the same as the in-memory limiter
allowed, err := limiter.Allow(ctx, "user:123")
```

## Limiter Interface

```go
type Limiter interface {
    // Allow reports whether a single request is allowed.
    Allow(ctx context.Context, key string) (bool, error)

    // AllowN reports whether n requests are allowed.
    AllowN(ctx context.Context, key string, n int) (bool, error)

    // Wait blocks until a request is allowed or ctx is done.
    Wait(ctx context.Context, key string) error

    // Limit configuration.
    GetLimit() int
    GetWindow() time.Duration

    // GetRemaining returns the remaining quota for key.
    GetRemaining(ctx context.Context, key string) (int, error)

    // Reset clears the limit state for key.
    Reset(ctx context.Context, key string) error
}
```

## Multi-Layer Rate Limiting

Set multiple layers of restrictions over different time windows to block both burst traffic and long-term abuse:
```go
// Layer 1: burst protection (per second)
burstLimiter := glimiter.NewMemoryLimiter(10, time.Second)

// Layer 2: normal limit (per minute)
normalLimiter := glimiter.NewMemoryLimiter(100, time.Minute)

// Layer 3: long-term limit (per hour)
hourlyLimiter := glimiter.NewMemoryLimiter(1000, time.Hour)

s.Group("/api", func(group *ghttp.RouterGroup) {
    group.Middleware(
        glimiter.MiddlewareByIP(burstLimiter),
        glimiter.MiddlewareByIP(normalLimiter),
        glimiter.MiddlewareByIP(hourlyLimiter),
    )
    group.GET("/search", handler)
})
```

## Tiered Rate Limiting

Different API routes can use different rate limiting strategies:
```go
s := g.Server()

// Public API: loose restriction
s.Group("/public", func(group *ghttp.RouterGroup) {
    publicLimiter := glimiter.NewMemoryLimiter(100, time.Minute)
    group.Middleware(glimiter.MiddlewareByIP(publicLimiter))
    group.GET("/info", handler)
})

// Authenticated API: moderate restriction
s.Group("/auth", func(group *ghttp.RouterGroup) {
    authLimiter := glimiter.NewMemoryLimiter(5, time.Minute)
    group.Middleware(glimiter.MiddlewareByIP(authLimiter))
    group.POST("/login", handler)
})

// Sensitive operations: strict restriction
s.Group("/admin", func(group *ghttp.RouterGroup) {
    adminLimiter := glimiter.NewMemoryLimiter(10, time.Hour)
    group.Middleware(glimiter.MiddlewareByIP(adminLimiter))
    group.POST("/delete", handler)
})
```

## Per-User Rate Limiting

```go
limiter := glimiter.NewMemoryLimiter(1000, time.Hour)

middleware := glimiter.MiddlewareByUser(limiter, func(r *ghttp.Request) string {
    // Get the user ID from the request context
    return r.GetCtxVar("user").String()
})

s.Group("/api", func(group *ghttp.RouterGroup) {
    group.Middleware(middleware)
    group.GET("/profile", handler)
})
```

## Using the Limiter Directly

The limiter can also be used in business code without middleware:
```go
limiter := glimiter.NewMemoryLimiter(10, time.Minute)

func ProcessTask(ctx context.Context, taskID string) error {
    // Check whether processing is allowed
    allowed, err := limiter.Allow(ctx, "task:"+taskID)
    if err != nil {
        return err
    }
    if !allowed {
        return errors.New("rate limit exceeded")
    }
    // Execute the task
    return doTask(taskID)
}
```

To smooth outbound traffic, `Wait` blocks until quota becomes available:

```go
limiter := glimiter.NewMemoryLimiter(5, time.Second)

func SendRequest(ctx context.Context) error {
    // Block until quota is available (or ctx is cancelled)
    if err := limiter.Wait(ctx, "api-call"); err != nil {
        return err
    }
    // Send the request
    return makeAPICall()
}
```

## Response Headers

The rate limiting middleware automatically sets the following HTTP response headers:
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests in the time window |
| `X-RateLimit-Remaining` | Remaining available requests |
| `X-RateLimit-Reset` | Rate limit reset time (Unix timestamp) |
## Best Practices

### Choosing a time window

- Short time windows (within 1 minute): in-memory limiters are a good fit and give the best performance
- Long time windows (over 1 hour): Redis limiters are recommended, and they also support distributed environments
### Layered rate limiting

Combine rate limiting strategies for different time windows to prevent both burst traffic and long-term abuse:
- Layer 1: Second-level rate limiting to prevent burst attacks
- Layer 2: Minute-level rate limiting for regular usage restrictions
- Layer 3: Hour-level rate limiting for long-term quota management
### Tiered strategies

Set different rate limiting strategies according to the sensitivity and importance of each API:
- Public APIs: Loose restrictions for good user experience
- Authenticated APIs: Moderate restrictions to prevent brute force attacks
- Sensitive operations: Strict restrictions to protect critical functions
### Friendly error messages

Customize the error handler to tell users when they can retry:

```go
ErrorHandler: func(r *ghttp.Request) {
    r.Response.WriteStatus(429)
    r.Response.WriteJson(g.Map{
        "error":       "Rate limit exceeded",
        "message":     "You have exceeded the rate limit. Please try again later.",
        "retry_after": limiter.GetWindow().Seconds(),
    })
}
```

### Monitoring

In production environments, it is recommended to monitor limiter usage:
```go
// Periodically check the remaining quota
remaining, _ := limiter.GetRemaining(ctx, key)
if remaining < 10 {
    // Send an alert
    log.Warn("Rate limit nearly exhausted", "key", key, "remaining", remaining)
}
```

## Performance Comparison

**MemoryLimiter (in-memory)**

- Advantages: Extremely high performance, no network overhead
- Disadvantages: Single-machine only, not suitable for distributed environments
- Best for: Monolithic applications, short time windows

**RedisLimiter (Redis-backed)**

- Advantages: Supports distributed environments, data persistence
- Disadvantages: Network latency overhead
- Best for: Distributed applications, long time windows
## Concurrency Safety

**MemoryLimiter** uses CAS (Compare-And-Swap) atomic operations to ensure concurrency safety.
**RedisLimiter** uses Lua scripts so that the whole check-and-increment sequence executes atomically inside Redis.
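A typical sliding-window script of this kind keeps request timestamps in a sorted set. The sketch below is illustrative, not glimiter's actual script (a production script would use a unique member per request so that two requests in the same millisecond are not deduplicated):

```lua
-- KEYS[1] = rate-limit key
-- ARGV[1] = window in ms, ARGV[2] = limit, ARGV[3] = now in ms
redis.call('ZREMRANGEBYSCORE', KEYS[1], 0, ARGV[3] - ARGV[1]) -- drop entries outside the window
local count = redis.call('ZCARD', KEYS[1])
if count < tonumber(ARGV[2]) then
  redis.call('ZADD', KEYS[1], ARGV[3], ARGV[3])
  redis.call('PEXPIRE', KEYS[1], ARGV[1]) -- let idle keys expire
  return 1  -- allowed
end
return 0    -- rejected
```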
## FAQ

**Q: How do I rate limit across multiple service instances?**

A: Use RedisLimiter instead of MemoryLimiter, and make sure all service instances share the same Redis.
**Q: Can the rate limit be changed at runtime?**

A: Yes. Create a new Limiter instance at runtime and update the middleware configuration to use it.
**Q: How are expired counters cleaned up?**

A:
- The in-memory limiter relies on gcache's automatic expiration feature
- The Redis limiter uses a sliding window algorithm for precise control over the time range
**Q: How can I minimize the performance impact of rate limiting?**

A:
- Use in-memory limiters for short time windows
- Set rate limiting quotas sensibly
- Prefer multi-layer rate limiting over a single strict limit