# Professional Guide for AI Programming Assistants: Best Practices for ShardingSphere Code Development
ShardingSphere is an ecosystem of distributed database solutions offering a JDBC driver, a database proxy, and a planned Sidecar mode.
```yaml
module_hierarchy:
  infrastructure_layer:
    - shardingsphere-infra: Common utilities, SPI definitions
    - shardingsphere-parser: SQL parsing (ANTLR4-based)
  engine_layer:
    - shardingsphere-mode: Configuration management
    - shardingsphere-kernel: Core execution engine
  access_layer:
    - shardingsphere-jdbc: Java JDBC driver
    - shardingsphere-proxy: Database proxy
  feature_layer:
    - shardingsphere-sharding: Data sharding
    - shardingsphere-encryption: Data encryption
    - shardingsphere-readwrite-splitting: Read/write splitting
```
- ANTLR4: SQL parsing and abstract syntax tree generation
- Netty: High-performance network communication (proxy mode)
- Apache Calcite: Query optimization and execution plans
- SPI: Plugin architecture for hot-pluggable extensions
- JDBC: Zero invasion, Java-only, highest performance
- Proxy: Language-agnostic, centralized management, advanced features
- Sharding: Horizontal data partitioning
- DistSQL: Distributed SQL for dynamic configuration
- SPI Extension: Algorithm, protocol, and execution extensions
- Data Pipeline: Migration and synchronization functionality
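The SPI extension points above all follow a type-keyed lookup pattern: each implementation declares a type string, and callers resolve it by type. A minimal, hypothetical Java sketch of that idea (`ShardingAlgorithm`, `ModShardingAlgorithm`, and the hand-rolled registry are illustrative names, not real ShardingSphere classes):

```java
import java.util.HashMap;
import java.util.Map;

public final class SpiRegistryDemo {
    
    // Hypothetical SPI: each implementation declares its own type string.
    interface ShardingAlgorithm {
        
        String getType();
        
        int doSharding(long value, int shardCount);
    }
    
    // One pluggable implementation, keyed by the type "MOD".
    static final class ModShardingAlgorithm implements ShardingAlgorithm {
        
        @Override
        public String getType() {
            return "MOD";
        }
        
        @Override
        public int doSharding(final long value, final int shardCount) {
            return (int) (value % shardCount);
        }
    }
    
    private static final Map<String, ShardingAlgorithm> REGISTRY = new HashMap<>();
    
    static {
        register(new ModShardingAlgorithm());
    }
    
    static void register(final ShardingAlgorithm algorithm) {
        REGISTRY.put(algorithm.getType(), algorithm);
    }
    
    static ShardingAlgorithm getAlgorithm(final String type) {
        return REGISTRY.get(type);
    }
    
    public static void main(final String[] args) {
        // Resolve an implementation by type, then use it.
        System.out.println(getAlgorithm("MOD").doSharding(10L, 4));
    }
}
```

In the real project this lookup is backed by Java's `ServiceLoader` via `TypedSPILoader`, so implementations are discovered from the classpath rather than registered by hand.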
```yaml
self_documenting_code:
  method_naming: "10-15 characters, verb-noun patterns, no comments needed"
  examples: ["isValidEmailAddress()", "calculateOrderTotal()"]
  anti_examples: ["proc()", "getData()", "handle()"]
complex_logic:
  definition: "3+ nested levels or 20+ lines per method"
  handling: "Extract to meaningful private methods"
mock_boundaries:
  no_mock: "Simple objects, DTOs, stateless utilities"
  must_mock: "Database connections, network services, third-party interfaces"
  judgment: "Mock only with external dependencies or high construction cost"
```
Please implement [feature description] for [class name], requirements:
1. Follow ShardingSphere project coding standards and constraints
2. Use self-documenting programming, no comments
3. Extract complex logic into private methods
4. 100% test coverage unit tests
5. Pass spotless code formatting checks
6. Use @RequiredArgsConstructor constructor injection
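Requirements 2 and 3 above can be sketched with a small, hypothetical example — the class, methods, and discount rule are illustrative, not ShardingSphere code:

```java
import java.util.List;

public final class OrderTotalCalculator {
    
    private static final double DISCOUNT_THRESHOLD = 100D;
    
    private static final double DISCOUNT_RATE = 0.9D;
    
    // Verb-noun name states the intent; no comment is needed to explain it.
    public static double calculateOrderTotal(final List<Double> itemPrices) {
        double subtotal = sumItemPrices(itemPrices);
        return isEligibleForDiscount(subtotal) ? subtotal * DISCOUNT_RATE : subtotal;
    }
    
    // Extracted private methods keep the public method flat and readable.
    private static double sumItemPrices(final List<Double> itemPrices) {
        return itemPrices.stream().mapToDouble(Double::doubleValue).sum();
    }
    
    private static boolean isEligibleForDiscount(final double subtotal) {
        return subtotal >= DISCOUNT_THRESHOLD;
    }
    
    public static void main(final String[] args) {
        System.out.println(calculateOrderTotal(List.of(60D, 60D)));
    }
}
```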
Basic Style-Consistent Testing:
Please write unit tests for [class name], requirements:
1. Use ShardingSphere project testing style
2. Test method naming with assert*() prefix
3. Use Hamcrest assertion style assertThat(actual, is(expected))
4. Use Mockito for Mocking, follow project boundary principles
5. Maintain clear Given-When-Then structure
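The Hamcrest pattern in requirement 3 reads as a sentence: `assertThat(actual, is(expected))`. A hedged, dependency-free sketch of that reading order — the `is`/`assertThat` stand-ins below are for illustration only; the project uses the real `org.hamcrest` matchers:

```java
import java.util.Objects;
import java.util.function.Predicate;

public final class HamcrestStyleDemo {
    
    // Minimal stand-ins for org.hamcrest's is()/assertThat(), illustration only.
    static <T> Predicate<T> is(final T expected) {
        return actual -> Objects.equals(actual, expected);
    }
    
    static <T> void assertThat(final T actual, final Predicate<T> matcher) {
        if (!matcher.test(actual)) {
            throw new AssertionError("Value did not match: " + actual);
        }
    }
    
    // assert*() naming plus a clear Given-When-Then structure.
    static void assertTrimWithPaddedInputExpectsTrimmedResult() {
        // Given
        String input = "  sharding  ";
        // When
        String actual = input.trim();
        // Then
        assertThat(actual, is("sharding"));
    }
    
    public static void main(final String[] args) {
        assertTrimWithPaddedInputExpectsTrimmedResult();
        System.out.println("passed");
    }
}
```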
Complex Tests for First-Pass Success:
Please write complete unit tests for [complex class name], requirements:
1. First analyze the dependency relationships and complexity of the class under test
2. Identify all external dependencies that need Mocking
3. Gradually build test fixtures, ensure complete Mock chains
4. Write corresponding test methods for each branch
5. Use @BeforeEach to set up common Mocks
6. Use try-with-resources to manage MockedConstruction
7. Ensure all tests can run independently and pass
If you encounter uncertain dependency relationships, please ask me for confirmation.
100% Coverage Testing:
Please implement 100% test coverage for [specific class name] in shardingsphere-[module] module:
1. First generate a coverage report to check current status:
   `./mvnw clean test jacoco:report -Djacoco.skip=false -pl [submodule]`
   `open [submodule]/target/site/jacoco/index.html`
2. Identify all branches that need testing (red diamond markers)
3. Write multiple sets of test data for complex conditions
4. Ensure all exception paths have tests
5. Verify final coverage reaches 100%:
   `./mvnw test jacoco:check@jacoco-check -Pcoverage-check -Djacoco.check.class.pattern=[ClassName] -pl [submodule]`
If dead code or uncovered branches are found, please explain in detail.
Special Case Handling Testing:
Please write unit tests for [class name], requirements:
1. Make every effort to achieve 100% coverage
2. If you encounter the following situations, please report to me:
- Truly unreachable dead code (e.g., never-thrown exceptions)
- Functions dependent on specific runtime environments (e.g., OS-specific functions)
- Features requiring special hardware or network conditions
- Protective programming code for extreme cases
Report format:
- Code location: [class name:line number]
- Reason not covered: [detailed explanation]
- Suggested solution: [if any]
Let me confirm before skipping coverage requirements for these codes.
SQL Generation Class Testing:
Please write comprehensive tests for [SQLGeneratorClass] in [module], requirements:
1. Use efficient test development workflow:
- Analyze existing test Mock patterns first (DatabaseTypedSPILoader + TypedSPILoader)
- Write most complex scenario test first to verify feasibility
- Run test to get actual SQL output, then correct expected values
- Batch copy verified patterns to other simple scenarios
2. SQL syntax verification:
- For Oracle: verify MERGE INTO, ROWNUM, NVL syntax formats
- For MySQL: verify LIMIT, IFNULL, REPLACE syntax formats
- Use database-specific official docs for syntax validation
3. Mock dependency reuse:
- 100% reuse existing test Mock configuration patterns
- Avoid redesigning Mock chains for SPI loaders
- Use DatabaseTypedSPILoader.getService(DialectPipelineSQLBuilder.class, databaseType) pattern
4. Branch coverage strategy:
- Merge simple methods into single tests where possible
- Focus independent tests on truly complex conditional branches
- Ensure each boolean/enum branch has at least one dedicated test
5. Quality assurance:
- Run complete test suite in one batch after all tests written
- Use Jacoco to verify 100% branch coverage
- Minimize iterative modifications during development
- Provide complete context: Tell me the specific class path, module information, and related dependencies
- Clarify complexity: If the class is particularly complex, specify which part to test first
- Progressive development: For complex features, request step-by-step implementation
- Quality verification: Ask me to run actual test commands to verify pass rates
- Pattern Analysis First: Always analyze existing test patterns before writing new tests
- Validation-Driven Development: Write tests to discover actual behavior before defining expectations
- Mock Reuse Principle: Never redesign Mock configurations when existing patterns work
- Branch Efficiency: Focus on conditional branches, not method count, for test coverage
- Batch Validation: Run complete test suites at once, avoid frequent incremental checks
- AI-First Principle: All content is oriented towards AI programming assistants, not human developers
- Actionability Priority: Every instruction must be directly executable by AI, avoid theoretical descriptions
- Unambiguous Expression: Use clear instructions and parameterized templates, avoid vague expressions
- Search Efficiency Priority: Organize information so AI can locate it quickly, reducing comprehension cost
- Accuracy Priority: All information must be correct and verifiable, no exaggeration or beautification
- Practicality Priority: Focus on actual effects, avoid exaggerated expressions
- Problem-Oriented: Directly address problem essence, provide actionable solutions
- Concise and Clear: Use most direct language to express core information
- In-depth Analysis: Conduct deep analysis based on code logic and project specifications, avoid surface-level answers
- Factual and Realistic: Present facts and data, do not exaggerate achievements, do not hide problems
- Reasoned Debate: When a programmer holds a misconception, push back with well-reasoned arguments grounded in technical standards and best practices
- Timely Correction: When an AI inference proves wrong, immediately acknowledge and correct the mistake to maintain technical accuracy
- "Create rule change processor" → Code Templates.Rule Change Processor Template
- "Write test methods" → Code Templates.Test Method Template
- "Mock external dependencies" → Code Templates.Mock Configuration Template
- "Coverage check" → Quick Commands Reference.Validation Commands
- "Format code" → Quick Commands Reference.Validation Commands
- "Test style requirements" → AI Programming Best Practices.Unit Test Request Templates
```yaml
task_type_detection:
  if contains(["src/main/java"], ["*.java"]): "source_code_task"
  if contains(["src/test/java"], ["*Test.java"]): "test_task"
  if contains(["docs/"], ["*.md"]): "documentation_task"
validation_rules:
  source_code_task: ["100% test coverage", "code formatting", "self-documenting"]
  test_task: ["branch coverage", "mock configuration", "assertion correctness"]
  doc_task: ["link validity", "format consistency"]
decision_logic:
  if task_type == "source_code": apply source code task workflow
  if task_type == "test": apply test task workflow
  if involves_external_dependencies: use MockedConstruction template
```
./mvnw install -T1C  # Full build with parallel execution
./mvnw install -T1C -DskipTests # Build without tests
./mvnw clean compile  # Compile only
./mvnw spotless:apply -Pcheck  # Format code
./mvnw checkstyle:check # Code style checking
./mvnw pmd:check # Static code analysis
./mvnw spotbugs:check # Bug detection
./mvnw dependency-check:check  # Security vulnerability scan
./mvnw archunit:test  # Architecture rule validation
./mvnw test  # Run all tests
./mvnw test -Dtest=${TestClassName} # Run specific test class
./mvnw test -pl ${submodule} # Run tests for specific module
./mvnw test jacoco:report -Djacoco.skip=false -pl ${submodule} # Generate coverage report
./mvnw test jacoco:check@jacoco-check -Pcoverage-check -Djacoco.skip=false \
-Djacoco.check.class.pattern=${ClassName} -pl ${submodule} # Coverage check
# Performance Testing
./mvnw jmh:benchmark  # Run performance benchmarks

```java
package org.apache.shardingsphere.${module}.rule.changed;

import org.apache.shardingsphere.infra.algorithm.core.config.AlgorithmConfiguration;
import org.apache.shardingsphere.infra.algorithm.core.processor.AlgorithmChangedProcessor;
import org.apache.shardingsphere.mode.spi.rule.RuleChangedItemType;
import org.apache.shardingsphere.${module}.api.config.${RuleType}RuleConfiguration;
import org.apache.shardingsphere.${module}.rule.${RuleType}Rule;

import java.util.Map;

/**
 * ${AlgorithmType} algorithm changed processor.
 */
public final class ${AlgorithmType}AlgorithmChangedProcessor extends AlgorithmChangedProcessor<${RuleType}RuleConfiguration> {
    
    public ${AlgorithmType}AlgorithmChangedProcessor() {
        super(${RuleType}Rule.class);
    }
    
    @Override
    protected ${RuleType}RuleConfiguration createEmptyRuleConfiguration() {
        return new ${RuleType}RuleConfiguration();
    }
    
    @Override
    protected Map<String, AlgorithmConfiguration> getAlgorithmConfigurations(final ${RuleType}RuleConfiguration currentRuleConfig) {
        return currentRuleConfig.get${AlgorithmType}Algorithms();
    }
    
    @Override
    public RuleChangedItemType getType() {
        return new RuleChangedItemType("${ruleType}", "${algorithmType}_algorithms");
    }
}
```

```java
@Test
void assert${MethodName}With${Condition}Expects${Result}() {
    // Given
    ${MockSetup}
    // When
    ${ActualCall}
    // Then
    assertThat(${actual}, is(${expected}));
}
```

Mock Usage Boundaries:
- No Mock: Simple objects, DTOs, stateless utilities, configuration objects
- Must Mock: Database connections, network services, third-party interfaces, SPI services
- Judgment: Mock only with external dependencies or high construction cost
Basic Mock Patterns:

```java
// Interface method Mock
when(dependency.method(any())).thenReturn(result);

// Constructor Mock with MockedConstruction
try (MockedConstruction<ClassName> mocked = mockConstruction(ClassName.class)) {
    // Test code involving new ClassName()
}
```

Advanced Mock Patterns:
```java
// doReturn().when() stubbing (avoids UnfinishedStubbingException)
@SneakyThrows(SQLException.class)
private static Array createMockArray(final Object data) {
    Array result = mock(Array.class);
    doReturn(data).when(result).getArray();
    return result;
}

// Deep stubs for complex dependencies
@Mock(answer = Answers.RETURNS_DEEP_STUBS)
private ComplexService complexService;

// MockedStatic for static method calls
try (MockedStatic<UtilityClass> mocked = mockStatic(UtilityClass.class)) {
    mocked.when(() -> UtilityClass.staticMethod(any())).thenReturn(value);
    // Test code
}
```

Example Comparison:
```java
// ❌ Over-mocking simple objects
String result = mock(String.class);  // Unnecessary

// ✅ Direct creation for simple objects
String result = "testValue";

// ✅ Mock external dependencies
when(dataSource.getConnection()).thenReturn(mockConnection);
```

```java
package org.apache.shardingsphere.${module}.spi;

@TypedSPI
public final class ${SPIName}Impl implements ${SPIName}SPI {
    
    @Override
    public ${ResultType} execute(final ${ContextType} context) {
        // Implementation logic
        return ${result};
    }
    
    @Override
    public String getType() {
        return "${type}";
    }
}
```

- Test Classes:
  `*Test.java` suffix
- Test Methods: `assert*()` prefix with descriptive naming
- Examples: `assertConnectWithInvalidURL()`, `assertDriverWorks()`, `assertLoadEmptyConfiguration()`
- Integration Tests: `*IT.java` suffix
- Framework: Mockito + Mockito Extension
- Annotations: `@Mock`, `@InjectMocks`, `@ExtendWith(MockitoExtension.class)`
- Deep Stubs: `@Mock(answer = Answers.RETURNS_DEEP_STUBS)`
- Constructor Mocks: `MockedConstruction` for complex objects
- Boundary Principle: Direct creation for simple objects, Mock only complex dependencies
- Primary: Hamcrest matchers
- Pattern: `assertThat(actual, is(expected))`
- Custom: `ShardingSphereAssertionMatchers.deepEqual()` for deep equality comparisons
- Single Responsibility: Each test method focuses on one scenario
- Given-When-Then: Clear three-part structure
- Independence: Complete isolation between tests
- Resource Management: `try-with-resources` for Mock resource management
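The resource-management rule above leans on `AutoCloseable`: Mockito's `MockedConstruction` and `MockedStatic` both implement it, so `try-with-resources` guarantees the mock scope is closed even when an assertion fails. A minimal sketch with a hypothetical stand-in resource (no Mockito dependency; `FakeMockScope` is illustrative only):

```java
public final class MockScopeDemo {
    
    // Stand-in for a Mockito MockedConstruction/MockedStatic scope.
    static final class FakeMockScope implements AutoCloseable {
        
        private static boolean open;
        
        FakeMockScope() {
            open = true;
        }
        
        @Override
        public void close() {
            open = false;
        }
        
        static boolean isOpen() {
            return open;
        }
    }
    
    static boolean runTestInsideScope() {
        // try-with-resources closes the scope even if the body throws.
        try (FakeMockScope scope = new FakeMockScope()) {
            return FakeMockScope.isOpen();
        }
    }
    
    public static void main(final String[] args) {
        // The scope is open inside the try block and closed afterwards.
        System.out.println(runTestInsideScope() + " " + FakeMockScope.isOpen());
    }
}
```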
- Target: 100% branch coverage
- Focus: Algorithm execution paths, boundary conditions, exception handling
- Method: Independent testing of each conditional branch
- Driver Testing: JDBC driver registration and functionality
- Connection Testing: Connection pooling and state management
- Adapter Testing: JDBC adapter implementations
- Configuration Testing: Proxy configuration loading
- Protocol Testing: Database protocol implementations
- Handler Testing: Request/response handlers
- Algorithm Testing: Core algorithms (sharding, encryption, etc.)
- Rule Testing: Business rule implementations
- Pipeline Testing: Data pipeline operations
- Multi-threaded Tests: Thread safety validation
- Async Testing: Asynchronous operation testing with Awaitility
- Race Condition Testing: Concurrent access scenarios
- YAML Integration: Configuration serialization/deserialization
- SPI Integration: Service provider interface testing
- Database Integration: Mocked database interactions for metadata testing
- Analyze Task → Identify as source code task
- Coverage Analysis → Use JaCoCo to find uncovered branches
- Design Implementation → Apply templates from Code Templates
- Verify Coverage → Run tests to ensure 100% coverage
- Format Code → Apply spotless formatting
- Complete Validation → Ensure all quality checks pass
- Analyze Test Scenarios → Identify branches that need testing
- Mock Configuration → Use Mock Configuration Template
- Write Tests → Apply Test Method Template
- Verify Coverage → Ensure complete branch coverage
- Assertion Validation → Use correct assertion patterns
Phase 1: Comprehensive Analysis (One-time)
Task agent analysis should include:
- Complete class structure and method listing
- Complexity and branch count for each method
- All existing test patterns and configurations
- Dependency relationships and Mock requirements
- Identification of any non-coverable code
Phase 2: Validation-First Development
- Select most complex scenario and write one test first
- Verify Mock configuration and syntax expectations by running the test
- Batch copy verified patterns to other simple scenarios
- Write dedicated tests only for truly complex conditional branches
Phase 3: Quality Assurance
- Run complete test suite and coverage checks in one batch
- Avoid frequent iterative modifications
- Use Jacoco to verify 100% branch coverage
### Documentation Task Steps
1. **Content Review** → Check accuracy and formatting
2. **Link Validation** → Ensure all links are valid
3. **Format Check** → Unify markdown format
4. **Complete Validation** → Ensure documentation quality standards
## 📋 Project Constraint Rules
### Core Design Principles
```yaml
class_design:
  - final classes with final fields
  - constructor injection only
  - @RequiredArgsConstructor for dependencies
  - self-documenting code (no comments)
package_structure:
  service: "org.apache.shardingsphere.{module}.service"
  spi: "org.apache.shardingsphere.{module}.spi"
  config: "org.apache.shardingsphere.{module}.config"
  util: "org.apache.shardingsphere.{module}.util"
```

```java
// Self-documenting pattern
if (isValidUserWithPermission()) {
    processPayment();
}

private boolean isValidUserWithPermission() {
    return user.isValid() && user.hasPermission();
}

// Test structure
@Test
void assertMethodWithConditionExpectsResult() {
    // Given
    mockDependencies();
    // When
    Result actual = target.method(input);
    // Then
    assertThat(actual, is(expected));
}
```

- Test Coverage: 100% branch coverage
- Code Formatting: Spotless applied
- Mock Strategy: Mock only external dependencies
- Naming: Test methods use assert*() prefix
```yaml
quick_search_index:
  "Create rule change processor":
    target: "Code Templates.Rule Change Processor Template"
    description: "Create rule change processor class"
  "Write test methods":
    target: "Code Templates.Test Method Template"
    description: "Write unit test methods"
  "Mock external dependencies":
    target: "Code Templates.Mock Configuration Template"
    description: "Configure external dependency Mock"
  "Coverage check":
    target: "Quick Commands Reference.Validation Commands"
    description: "Run test coverage check"
  "Format code":
    target: "Quick Commands Reference.Validation Commands"
    description: "Apply code formatting"
  "Test style requirements":
    target: "AI Programming Best Practices.Unit Test Request Templates"
    description: "View testing style requirements"
  "Naming rules":
    target: "Project Constraint Rules.class_design.naming_conventions"
    description: "View naming conventions"
  "Package structure":
    target: "Project Constraint Rules.package_naming"
    description: "View package naming rules"
  "Quality issues":
    target: "Troubleshooting Guide"
    description: "Solve common quality issues"
  "ShardingSphere test style":
    target: "ShardingSphere Testing Style Guide"
    description: "Complete project testing style guide"
error_recovery_index:
  "Coverage not met":
    solution: "Check Mock configuration, add branch tests"
    reference: "Code Templates.Mock Configuration Template"
  "Compilation errors":
    solution: "Check dependencies and syntax"
    reference: "Quick Commands Reference.Build Commands"
  "Format errors":
    solution: "Run spotless formatting"
    reference: "Quick Commands Reference.Validation Commands"
  "Test failures":
    solution: "Check Mock configuration and assertion logic"
    reference: "Code Templates.Test Method Template"
  "Complex mock setup":
    solution: "Use Mock boundary judgment and complex dependency handling"
    reference: "ShardingSphere Testing Style Guide.Mock Usage Patterns"
```
- Issue: Expected SQL assertion doesn't match actual generated SQL
- Solution: Run test first to get actual output, then correct expected values
- Prevention: Use database official documentation to verify syntax formats
- Oracle Common Issues: MERGE INTO ON clause format, ROWNUM positioning, NVL parameter order
- Issue: Over-engineering Mock configurations for SPI loaders
- Solution: 100% reuse existing test Mock patterns instead of redesigning
- Pattern: Use the `DatabaseTypedSPILoader.getService(DialectPipelineSQLBuilder.class, databaseType)` + `TypedSPILoader.getService(DatabaseType.class, "Oracle")` combination
- Prevention: Analyze existing tests completely before writing new ones
- Issue: Writing too many granular tests for simple methods
- Solution: Merge simple method tests, focus independent tests on complex branches only
- Guideline: One test per conditional branch, not one test per method
- Example: Test all simple SQL formatting methods in one focused test
- Issue: Mock configuration incomplete, branches not executed
- Solution: Use MockedConstruction, create dedicated test methods for each branch
- Command: `./mvnw clean test jacoco:report -pl ${submodule}`
- Issue: UnfinishedStubbingException in static methods
- Solution: Use `doReturn().when()` instead of `when().thenReturn()`
- Pattern: `@SneakyThrows(SQLException.class) private static Array createMockArray()`
- Issue: Mock dependency chain broken
- Solution: Verify complete dependency chain, use RETURNS_DEEP_STUBS
- Check: Verify Mock calls with `verify(mock).method(params)`
- Issue: Dependency conflicts, syntax errors
- Solution: Check versions, verify imports, run `./mvnw dependency:tree`

# Generate coverage report
./mvnw clean test jacoco:report -Djacoco.skip=false -pl ${submodule}
# View coverage details
open ${submodule}/target/site/jacoco/index.html
# Check dependencies
./mvnw dependency:tree

- Task type identified (source/test/docs)
- Quality requirements understood
- Relevant templates found
- Source: 100% coverage + formatting + self-documenting
- Test: Complete branch coverage
- Docs: Valid links + consistent format
- All: Project constraints satisfied
- Build: `./mvnw install -T1C`
- Coverage: `./mvnw test jacoco:check@jacoco-check -Pcoverage-check`
- Format: `./mvnw spotless:apply -Pcheck`