# OpenGemini Connector Performance Test Report

## Test Environment
- Flink Version: 1.17.2
- OpenGemini Version: 1.4.1
- Test Date: 2025-08-04
### Hardware Configuration
- CPU: AMD Ryzen 9 7945HX (16 cores, 32 threads)
- Memory: 16GB allocated to WSL2
- OS: Windows 11 with WSL2
- Network: Localhost connection
- Storage: NVMe SSD
## Test Methodology

### Test Parameters
- Batch Sizes: 5000, 10000, 15000
- Flush Intervals: 300ms, 500ms, 1000ms
- Parallelism: 1, 2, 4, 8, 10, 12, 14, 16
- Target RPS: 10000, 50000, 100000, 500000
- Test Duration: 120 seconds per configuration
- Record Size: ~132 bytes (line protocol format)
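The parameter grid above multiplies out quickly. A quick sketch of the full sweep (values taken directly from the list above; the enumeration itself is illustrative, not the actual benchmark driver):

```python
from itertools import product

batch_sizes = [5000, 10000, 15000]
flush_intervals_ms = [300, 500, 1000]
parallelism = [1, 2, 4, 8, 10, 12, 14, 16]
target_rps = [10000, 50000, 100000, 500000]

# Cross product of all test dimensions: 3 * 3 * 8 * 4 = 288 configurations.
configs = list(product(batch_sizes, flush_intervals_ms, parallelism, target_rps))
print(len(configs))                    # 288
print(len(configs) * 120 / 3600)       # 9.6 hours of pure test time at 120 s each
```

At 120 seconds per configuration, the full grid implies roughly 9.6 hours of runtime, which is consistent with the note below that exhaustive parameter exploration is still ongoing.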
### Measurement Approach
- Each configuration tested for stable throughput
- Metrics collected every 5 seconds
- Both instantaneous and average throughput recorded
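A minimal sketch of how instantaneous vs. average throughput can be derived from cumulative counters polled every 5 seconds (the sample counts below are invented for illustration, not measured values):

```python
# Hypothetical cumulative record counts, polled every 5 seconds.
samples = [0, 3_100_000, 6_250_000, 9_400_000, 12_430_000]
interval_s = 5

# Instantaneous throughput: delta between consecutive samples over one interval.
instantaneous = [(b - a) / interval_s for a, b in zip(samples, samples[1:])]
# Average throughput: total records over total elapsed time.
average = samples[-1] / (interval_s * (len(samples) - 1))

print(instantaneous[0], round(average))  # → 620000.0 621500
```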
## Performance Results

### Throughput Observations

**Parallelism Scaling:**
- Linear scaling observed from parallelism 1 to 4
- Near-linear scaling from parallelism 4 to 8
- Throughput plateaued around parallelism 10-14
- Maximum sustained throughput: 621,727 records/sec (78.26 MB/s) at parallelism 10
**Configuration Impact:**
- Batch size 5000 with flush interval 300ms showed stable performance
- Larger batch sizes (10000-15000) showed marginal improvements
- Flush interval variations (300-1000ms) had minimal impact on peak throughput
### Performance Progression
| Stage | Configuration Change | Observed Throughput |
|---|---|---|
| Initial Testing | Default settings | 60K RPS (7.2 MB/s) |
| Remove Source Throttling | Eliminated sleep() in DataGenerator | 120K RPS (14.4 MB/s) |
| Infrastructure Optimization | Docker → Native WSL deployment | ~240K RPS |
| Parallelism Tuning | Increased to 10 parallel instances | 621K RPS (78.26 MB/s) |
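The byte-rate figures follow from record rate × record size. Checking the headline number (621,727 records/sec at ~132 bytes per record) against the reported 78.26 MB/s suggests the "MB" figures are binary megabytes (MiB):

```python
records_per_sec = 621_727
record_size_bytes = 132  # approximate line-protocol record size from the test setup

bytes_per_sec = records_per_sec * record_size_bytes  # 82,067,964 B/s
mib_per_sec = bytes_per_sec / (1024 * 1024)
print(f"{mib_per_sec:.2f}")  # ≈ 78.27, matching the reported 78.26 MB/s within rounding
```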
## System Behavior

### Observed Characteristics
- Backpressure: 0% throughout all tests, indicating sink efficiency
- Stability: Consistent throughput maintained over 120-second test runs
- Resource Utilization: Performance plateaued despite available resources
### Identified Bottlenecks
- Write Throughput Ceiling: System reached a consistent limit around 650K records/sec
- WSL2 Network Stack: Introduced overhead relative to the performance expected on native Linux
- Infrastructure Limits: Performance bounded by OpenGemini write capacity rather than connector implementation
### Test Configuration Example

```properties
batch.size = 5000
flush.interval = 300ms
source.parallelism = 10
sink.parallelism = 10
```
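A minimal sketch of reading such a `key = value` fragment into typed settings (the key names come from the snippet above; the parsing helper itself is illustrative and not part of the connector):

```python
def parse_config(text):
    """Parse 'key = value' lines; coerce ms-suffixed durations and integers."""
    config = {}
    for line in text.strip().splitlines():
        key, _, value = (part.strip() for part in line.partition("="))
        if value.endswith("ms"):
            config[key] = int(value[:-2])  # store intervals as integer milliseconds
        elif value.isdigit():
            config[key] = int(value)
        else:
            config[key] = value
    return config

cfg = parse_config("""
batch.size = 5000
flush.interval = 300ms
source.parallelism = 10
sink.parallelism = 10
""")
print(cfg["batch.size"], cfg["flush.interval"])  # → 5000 300
```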
## Key Findings
- The connector demonstrated linear scalability within system constraints
- No backpressure observed, confirming efficient sink design
- Achieved sustained throughput of 78.26 MB/s
- Performance ceiling appears to be infrastructure-related rather than connector-limited
- Further optimization would require infrastructure-level changes
## Test Artifacts
- Benchmark Script: `run_benchmark.sh`
- Test Results: `benchmark-results-*/`
- Source Code: `flink-connector-opengemini`
## Notes
- All tests conducted in WSL2 environment
- Native Linux deployment expected to show improved performance
- Comprehensive parameter space exploration still ongoing