**Purpose:** Step-by-step guide for adopting centralized rules in your project
**Timeline:** 8 weeks (flexible based on project size and maturity)
**Audience:** Engineering teams, AI assistants, tech leads
This guide provides a practical, phased approach to implementing the centralized rules framework in your project. The rollout is designed to minimize disruption while maximizing value.
- Progressive Enhancement: Start with high-impact, low-effort practices
- Measure Success: Track metrics before and after each phase
- Team Buy-In: Include team in decision-making
- AI-Assisted: Leverage AI assistants for implementation
- Iterative: Review and adjust based on feedback
- Phase 1 (Weeks 1-2): Foundation - Essential workflow and quality standards
- Phase 2 (Weeks 3-4): Quality & Testing - Comprehensive testing and code quality
- Phase 3 (Weeks 5-6): Architecture & Security - Solid foundations for growth
- Phase 4 (Weeks 7-8): Advanced Practices - AI, observability, and optimization
## Phase 1: Foundation (Weeks 1-2)

**Goal:** Establish core development workflow and quality standards that provide immediate value.
**Day 1-2: Git Workflow Setup**

1. **Implement Conventional Commits**

   ```bash
   # Install commitlint
   npm install --save-dev @commitlint/{cli,config-conventional}

   # Configure
   echo "module.exports = { extends: ['@commitlint/config-conventional'] };" > commitlint.config.js
   ```

   Reference: `base/git-workflow.md`

2. **Set Up Branch Protection**
   - Require pull requests for main branch
   - Require 1 approval before merge
   - Require status checks to pass

3. **Create PR Template**

   ```markdown
   ## Description
   Brief description of changes

   ## Type of Change
   - [ ] Bug fix
   - [ ] New feature
   - [ ] Breaking change
   - [ ] Documentation update

   ## Testing
   - [ ] Tests pass locally
   - [ ] Added new tests
   - [ ] Updated documentation

   ## Checklist
   - [ ] Code follows style guidelines
   - [ ] Self-reviewed code
   - [ ] Commented complex areas
   - [ ] No console.log/debugger statements
   ```
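To picture what the commit hook enforces, here is a toy checker for the basic Conventional Commits shape (`type(scope): subject`). The regex is a simplification for illustration; commitlint performs the real, configurable validation, and the type list matches `@commitlint/config-conventional` defaults.

```typescript
// Toy checker for the Conventional Commits subject line shape.
// Simplified for illustration; commitlint does the real validation.
const CONVENTIONAL =
  /^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)(\([\w-]+\))?!?: .+/;

function isConventionalCommit(message: string): boolean {
  // Only the first line (the subject) is checked here
  return CONVENTIONAL.test(message.split("\n")[0]);
}

console.log(isConventionalCommit("feat(auth): add OAuth login")); // true
console.log(isConventionalCommit("fixed the login bug"));         // false
```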
**Day 3-5: Code Quality Tools**

1. **Set Up Linting**

   TypeScript/JavaScript:

   ```json
   // .eslintrc.json
   {
     "extends": [
       "eslint:recommended",
       "plugin:@typescript-eslint/recommended"
     ],
     "rules": {
       "no-console": "warn",
       "no-debugger": "error"
     }
   }
   ```

   Python:

   ```toml
   # pyproject.toml
   [tool.ruff]
   line-length = 100
   select = ["E", "F", "I", "N"]
   ```

   Reference: Language-specific files in `languages/`

2. **Set Up Formatting**
   - TypeScript: Prettier
   - Python: Black
   - Java: Spotless
   - C#: dotnet format

3. **Pre-commit Hooks**

   ```yaml
   # .pre-commit-config.yaml
   repos:
     - repo: https://github.com/pre-commit/pre-commit-hooks
       rev: v4.4.0  # pin a release tag (rev is required by pre-commit)
       hooks:
         - id: trailing-whitespace
         - id: end-of-file-fixer
         - id: check-yaml
         - id: check-json
   ```
Success Criteria:
- ✅ All commits follow conventional commits format
- ✅ PRs require approval
- ✅ Linting passes on all new code
- ✅ Pre-commit hooks prevent bad commits
**Day 1-3: Testing Setup**

1. **Choose Testing Framework**
   - TypeScript: Vitest or Jest
   - Python: pytest
   - Java: JUnit 5
   - C#: xUnit

2. **Write First Tests**

   ```typescript
   // Example: Start with critical business logic
   describe('UserService', () => {
     it('should create user with valid email', () => {
       const user = UserService.create({
         email: 'test@example.com',
         name: 'Test User'
       });

       expect(user).toBeDefined();
       expect(user.email).toBe('test@example.com');
     });
   });
   ```

   Reference: `base/testing-philosophy.md`

3. **Set Coverage Baseline**

   ```bash
   # Measure current coverage
   npm test -- --coverage

   # Goal: Start where you are, improve by 5% each sprint
   ```
**Day 4-5: Basic CI/CD**

1. **GitHub Actions Workflow**

   ```yaml
   # .github/workflows/ci.yml
   name: CI

   on:
     pull_request:
       branches: [main]
     push:
       branches: [main]

   jobs:
     test:
       runs-on: ubuntu-latest
       steps:
         - uses: actions/checkout@v3

         - name: Setup Node
           uses: actions/setup-node@v3
           with:
             node-version: '18'

         - name: Install dependencies
           run: npm ci

         - name: Run linters
           run: npm run lint

         - name: Run tests
           run: npm test

         - name: Check coverage
           run: npm test -- --coverage
   ```

   Reference: `base/cicd-comprehensive.md`
Success Criteria:
- ✅ Test framework configured
- ✅ At least 10 tests written
- ✅ CI pipeline running on every PR
- ✅ Tests pass before merge
**Phase 1 Checklist:**

- Conventional commits enforced
- Branch protection rules active
- PR template in use
- Linting configured and passing
- Formatting automated
- Pre-commit hooks working
- Testing framework set up
- CI/CD pipeline running
- Team trained on new workflow
Track these metrics before and after Phase 1:
- Commit quality: % of commits following convention (Target: 95%+)
- PR review time: Average time from creation to merge (Target: < 24 hours)
- Build failures: % of builds that fail (Target: < 10%)
- Test count: Number of tests (Target: Increase by 20%)
## Phase 2: Quality & Testing (Weeks 3-4)

**Goal:** Establish comprehensive testing practices and improve code quality metrics.
**Day 1-3: Increase Test Coverage**

1. **Test Critical Paths**
   - User authentication flows
   - Payment processing
   - Data persistence
   - API endpoints

2. **Add Integration Tests**

   ```typescript
   // Example integration test
   describe('User API Integration', () => {
     it('should create and retrieve user', async () => {
       const response = await request(app)
         .post('/api/users')
         .send({ email: 'test@example.com', name: 'Test' });

       expect(response.status).toBe(201);

       const userId = response.body.id;
       const getResponse = await request(app)
         .get(`/api/users/${userId}`);

       expect(getResponse.status).toBe(200);
       expect(getResponse.body.email).toBe('test@example.com');
     });
   });
   ```

3. **Coverage Gates**

   ```yaml
   # In CI/CD
   - name: Check coverage threshold
     run: |
       npm test -- --coverage
       COVERAGE=$(cat coverage/coverage-summary.json | jq '.total.lines.pct')
       if (( $(echo "$COVERAGE < 60" | bc -l) )); then
         echo "Coverage $COVERAGE% is below 60%"
         exit 1
       fi
   ```

   Target Coverage:
   - MVP/POC: 40%+
   - Pre-Production: 60%+
   - Production: 80%+

   Reference: `base/project-maturity-levels.md`
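The coverage targets above can be expressed as a small gate helper. This is a sketch; the level names and thresholds are taken from this guide's maturity tiers.

```typescript
// Coverage thresholds per maturity level, from the targets above.
type Maturity = "mvp" | "pre-production" | "production";

const THRESHOLDS: Record<Maturity, number> = {
  "mvp": 40,
  "pre-production": 60,
  "production": 80,
};

// True when measured line coverage meets the gate for the level.
function coverageGatePasses(level: Maturity, coveragePct: number): boolean {
  return coveragePct >= THRESHOLDS[level];
}

console.log(coverageGatePasses("pre-production", 65)); // true
console.log(coverageGatePasses("production", 65));     // false
```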
**Day 4-5: Code Quality Metrics**

1. **SonarQube or Similar**

   ```yaml
   # SonarCloud integration
   - name: SonarCloud Scan
     uses: SonarSource/sonarcloud-github-action@master
     env:
       GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
       SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
   ```

2. **Quality Gates**
   - Complexity: Cyclomatic complexity < 10 per function
   - Duplication: < 3% code duplication
   - Maintainability: Maintainability rating A or B

   Reference: `base/code-quality.md`
Success Criteria:
- ✅ Coverage increased by 20%
- ✅ All critical paths have tests
- ✅ Quality gates pass in CI
- ✅ No code smells in new code
**Day 1-3: Address Technical Debt**

1. **Identify High-Priority Debt**
   - Use SonarQube/CodeClimate to identify issues
   - Prioritize by business impact and effort

2. **Refactor Incrementally**

   ```typescript
   // Example: Extract method refactoring

   // ❌ Before: Long method
   function processOrder(order) {
     // 100 lines of code doing everything
   }

   // ✅ After: Extracted methods
   function processOrder(order) {
     validateOrder(order);
     calculateTotal(order);
     applyDiscounts(order);
     saveOrder(order);
     sendConfirmation(order);
   }
   ```

   Reference: `base/refactoring-patterns.md`

3. **Document Decisions**

   ```markdown
   # ADR 001: Extract Order Processing Logic

   ## Status
   Accepted

   ## Context
   Order processing code was in a single 300-line method, making it
   hard to test and maintain.

   ## Decision
   Extract order processing into separate functions with single
   responsibilities.

   ## Consequences
   - Improved testability (can test each step independently)
   - Better readability
   - Easier to add new processing steps
   ```

   Reference: `base/knowledge-management.md`
**Day 4-5: Anti-Pattern Detection**

1. **Run Anti-Pattern Checks**
   - God objects (> 20 methods)
   - Long methods (> 50 lines)
   - Deep nesting (> 4 levels)
   - Code duplication (> 6 lines)

2. **Create Remediation Plan**
   - Track in project backlog
   - Allocate 20% of sprint to tech debt

   Reference: `ANTI_PATTERNS.md`
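Two of the checks above are easy to approximate in a quick script. This brace-counting sketch uses the thresholds from the list above; it is illustrative only, since real detectors parse the AST rather than counting characters.

```typescript
// Rough anti-pattern probes; a real linter parses the AST instead.
const MAX_METHOD_LINES = 50;
const MAX_NESTING = 4;

// Approximate nesting depth by tracking unmatched braces.
function nestingDepth(source: string): number {
  let depth = 0, max = 0;
  for (const ch of source) {
    if (ch === "{") max = Math.max(max, ++depth);
    else if (ch === "}") depth--;
  }
  return max;
}

// Returns which of the two probes the source trips.
function flags(source: string): string[] {
  const problems: string[] = [];
  if (source.split("\n").length > MAX_METHOD_LINES) problems.push("long-method");
  if (nestingDepth(source) > MAX_NESTING) problems.push("deep-nesting");
  return problems;
}

console.log(nestingDepth("function f(){ if (a) { if (b) { g(); } } }")); // 3
```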
Success Criteria:
- ✅ Technical debt backlog created
- ✅ Top 5 code smells addressed
- ✅ Refactoring documented in ADRs
- ✅ Anti-pattern checks automated
**Phase 2 Checklist:**

- Test coverage > 60%
- Integration tests added
- Quality gates enforced in CI
- SonarQube or equivalent integrated
- Technical debt identified and prioritized
- Top code smells refactored
- ADRs documented
- Anti-pattern detection automated
Track these metrics before and after Phase 2:

- Test coverage: % (Target: 60%+ for pre-production)
- Code duplication: % (Target: < 3%)
- Cyclomatic complexity: Average (Target: < 10)
- Technical debt ratio: % (Target: < 5%)
- Refactoring velocity: Story points/sprint (Target: 20% of capacity)
## Phase 3: Architecture & Security (Weeks 5-6)

**Goal:** Establish solid architectural foundations and security practices.
**Day 1-2: Document Current Architecture**

1. **Create Architecture Diagrams**
   - System context diagram
   - Container diagram
   - Component diagram

   Reference: `base/architecture-principles.md`

2. **Identify Architectural Principles**

   ```markdown
   # Architectural Principles

   1. **Separation of Concerns:** Each module has a single responsibility
   2. **Dependency Inversion:** Depend on abstractions, not concretions
   3. **YAGNI:** Don't build what you don't need yet
   4. **12-Factor:** Follow 12-factor app principles
   ```
**Day 3-5: Apply Architecture Patterns**

1. **Layered Architecture**

   ```
   Presentation Layer (API/UI)
             ↓
   Business Logic Layer (Services)
             ↓
   Data Access Layer (Repositories)
             ↓
          Database
   ```

2. **Dependency Injection**

   ```typescript
   // Example: Constructor injection
   class UserService {
     constructor(
       private userRepository: IUserRepository,
       private emailService: IEmailService
     ) {}

     async createUser(data: CreateUserDto) {
       const user = await this.userRepository.create(data);
       await this.emailService.sendWelcome(user.email);
       return user;
     }
   }
   ```

   Reference: Framework-specific best practices in `frameworks/`
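The payoff of constructor injection is testability: the service can be wired with in-memory fakes instead of real infrastructure. The interface shapes and fake classes below are illustrative stand-ins for whatever `IUserRepository` and `IEmailService` look like in your codebase.

```typescript
// Illustrative fakes for the injected dependencies above.
interface User { id: number; email: string }
interface IUserRepository { create(data: { email: string }): Promise<User> }
interface IEmailService { sendWelcome(email: string): Promise<void> }

class FakeUserRepository implements IUserRepository {
  users: User[] = [];
  async create(data: { email: string }): Promise<User> {
    const user = { id: this.users.length + 1, email: data.email };
    this.users.push(user);
    return user;
  }
}

class FakeEmailService implements IEmailService {
  sent: string[] = [];
  async sendWelcome(email: string): Promise<void> { this.sent.push(email); }
}

class UserService {
  constructor(private repo: IUserRepository, private email: IEmailService) {}
  async createUser(data: { email: string }): Promise<User> {
    const user = await this.repo.create(data);
    await this.email.sendWelcome(user.email);
    return user;
  }
}

// A test can now exercise UserService with no database or SMTP server:
const repo = new FakeUserRepository();
const mail = new FakeEmailService();
new UserService(repo, mail)
  .createUser({ email: "a@b.com" })
  .then(u => console.log(u.id, mail.sent.length)); // 1 1
```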
Success Criteria:
- ✅ Architecture documented
- ✅ Clear separation of concerns
- ✅ Dependency injection implemented
- ✅ Team understands architecture
**Day 1-2: Security Scanning**

1. **Dependency Scanning**

   ```yaml
   # GitHub Actions
   - name: Run Snyk
     uses: snyk/actions/node@master
     env:
       SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
   ```

2. **Secret Scanning**

   ```bash
   # Install git-secrets
   git secrets --install
   git secrets --register-aws
   ```

3. **SAST (Static Application Security Testing)**
   - Use CodeQL, Semgrep, or Bandit

   Reference: `base/security-principles.md`
**Day 3-5: Security Hardening**

1. **Input Validation**
   - All API endpoints validate input
   - Use schema validation (Zod, Pydantic, etc.)
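As a concrete picture of what "validate input" means, here is a hand-rolled sketch for a hypothetical create-user endpoint; in real code a schema library such as Zod replaces this with declarative schemas, and the field names here are illustrative.

```typescript
// Hand-rolled request validation sketch; prefer a schema library
// (Zod, Pydantic, etc.) in real code. Field names are illustrative.
interface CreateUserDto { email: string; name: string }

function validateCreateUser(body: unknown): CreateUserDto {
  const b = (body ?? {}) as Record<string, unknown>;
  const email = b.email;
  const name = b.name;
  // Reject anything that is not a string matching a basic email shape
  if (typeof email !== "string" || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    throw new Error("email: must be a valid email address");
  }
  if (typeof name !== "string" || name.trim() === "") {
    throw new Error("name: must be a non-empty string");
  }
  return { email, name };
}
```

The handler calls the validator first and returns 400 on failure, so downstream code only ever sees well-typed data.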
2. **Authentication & Authorization**
   - Implement JWT or OAuth
   - Role-based access control

3. **Security Headers**

   ```typescript
   // Express example
   import helmet from 'helmet';

   app.use(helmet({
     contentSecurityPolicy: {
       directives: {
         defaultSrc: ["'self'"],
         styleSrc: ["'self'", "'unsafe-inline'"],
       },
     },
   }));
   ```

4. **Rate Limiting**

   ```typescript
   import rateLimit from 'express-rate-limit';

   const limiter = rateLimit({
     windowMs: 15 * 60 * 1000, // 15 minutes
     max: 100, // limit each IP to 100 requests per windowMs
   });

   app.use('/api/', limiter);
   ```
Success Criteria:
- ✅ No high-severity vulnerabilities
- ✅ All endpoints have input validation
- ✅ Security headers configured
- ✅ Rate limiting implemented
- ✅ Secrets not in source code
**Phase 3 Checklist:**

- Architecture documented
- Dependency injection implemented
- Security scanning automated
- No secrets in source code
- Input validation on all endpoints
- Authentication implemented
- Security headers configured
- Rate limiting active
- Security review completed
Track these metrics before and after Phase 3:

- Security vulnerabilities: Count (Target: 0 high/critical)
- API endpoints with validation: % (Target: 100%)
- Secrets in source code: Count (Target: 0)
- Architecture documentation: Up-to-date (Target: Yes)
## Phase 4: Advanced Practices (Weeks 7-8)

**Goal:** Implement advanced practices for AI development, observability, and optimization.
**Day 1-3: AI Development Workflow**

1. **Implement Five-Try Rule**

   ```markdown
   ## Five-Try Rule
   1. AI writes test (Red)
   2. AI implements feature to pass test (Green)
   3. If test fails, AI has 4 more attempts
   4. After 5 failures, human intervention required
   5. Always commit with passing tests
   ```

   Reference: `base/ai-assisted-development.md`
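The rule above can be pictured as a loop around the test suite. In this sketch, `attemptFix` and `runTests` are stand-ins for the AI edit step and your real test runner; it models the control flow, not a real harness.

```typescript
// Sketch of the Five-Try Rule as control flow; not a real harness.
function fiveTryLoop(
  attemptFix: () => void,   // stand-in: AI proposes a change
  runTests: () => boolean,  // stand-in: test suite, true = green
): "green" | "needs-human" {
  for (let attempt = 1; attempt <= 5; attempt++) {
    attemptFix();
    if (runTests()) return "green"; // commit only on passing tests
  }
  return "needs-human"; // five failures: escalate to a person
}

// Toy run: the suite starts passing on the third attempted fix.
let fixes = 0;
console.log(fiveTryLoop(() => { fixes++; }, () => fixes >= 3)); // "green"
```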
2. **Context Management**
   - Create `.context/` directory
   - Document session context
   - Maintain ADRs

   Reference: `base/knowledge-management.md`

3. **Parallel Development**
   - Use feature flags for concurrent work
   - Implement trunk-based development

   Reference: `base/parallel-development.md`
**Day 4-5: AI Ethics & Governance**

1. **Model Cards (if using ML)**

   ```markdown
   # Model Card: User Recommendation Model

   ## Model Details
   - Version: 1.0.0
   - Type: Collaborative Filtering
   - Training Data: User interaction logs (Jan-Mar 2024)

   ## Intended Use
   - Recommend products to users
   - Not for credit decisions or employment

   ## Metrics
   - Precision@10: 0.65
   - Recall@10: 0.42
   - Fairness (demographic parity): 0.92

   ## Limitations
   - Cold start problem for new users
   - Bias toward popular items
   ```

   Reference: `base/ai-ethics-governance.md`, `base/ai-model-lifecycle.md`
Success Criteria:
- ✅ Five-Try Rule documented and followed
- ✅ Context management in place
- ✅ AI ethics considered (if applicable)
- ✅ Team trained on AI workflows
**Day 1-3: Observability**

1. **Logging**

   ```typescript
   import winston from 'winston';

   const logger = winston.createLogger({
     level: 'info',
     format: winston.format.json(),
     transports: [
       new winston.transports.File({ filename: 'error.log', level: 'error' }),
       new winston.transports.File({ filename: 'combined.log' }),
     ],
   });

   logger.info('User created', { userId: user.id, email: user.email });
   ```

2. **Metrics**
   - Implement RED metrics (Rate, Errors, Duration)
   - Use Prometheus, DataDog, or similar

3. **Tracing**
   - Add request IDs
   - Distributed tracing if microservices

   Reference: `base/metrics-standards.md`
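As a concrete picture of RED metrics, here is a toy in-process recorder for a single endpoint. Real services export these through a Prometheus client library or an APM agent; this sketch only shows what Rate, Errors, and Duration mean.

```typescript
// Toy RED (Rate, Errors, Duration) recorder for one endpoint.
class RedMetrics {
  private requests = 0;
  private errors = 0;
  private durations: number[] = [];

  record(durationMs: number, isError: boolean): void {
    this.requests++;
    if (isError) this.errors++;
    this.durations.push(durationMs);
  }

  rate(windowSeconds: number): number {
    return this.requests / windowSeconds; // requests per second
  }

  errorRate(): number {
    return this.requests === 0 ? 0 : this.errors / this.requests;
  }

  p95(): number {
    // Nearest-rank 95th percentile over recorded durations
    const sorted = [...this.durations].sort((a, b) => a - b);
    if (sorted.length === 0) return 0;
    const idx = Math.min(sorted.length - 1, Math.ceil(sorted.length * 0.95) - 1);
    return sorted[idx];
  }
}

const metrics = new RedMetrics();
metrics.record(120, false);
metrics.record(480, true);
console.log(metrics.errorRate()); // 0.5
```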
**Day 4-5: Performance Optimization**

1. **Database Optimization**
   - Add indexes
   - Optimize queries (N+1 problem)
   - Connection pooling
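The N+1 problem above, sketched with an in-memory "database" that counts round trips; `fetchUser` and `fetchUsersByIds` are hypothetical stand-ins for real data-access calls.

```typescript
// N+1 illustration: one lookup per order vs one batched lookup.
const usersTable = new Map([[1, "Ada"], [2, "Grace"]]);
const orders = [{ userId: 1 }, { userId: 2 }, { userId: 1 }];
let queryCount = 0;

function fetchUser(id: number): string | undefined {
  queryCount++; // each call is one round trip
  return usersTable.get(id);
}

function fetchUsersByIds(ids: number[]): Map<number, string> {
  queryCount++; // a single IN (...) query
  return new Map(ids.map(id => [id, usersTable.get(id)!]));
}

// ❌ N+1: one query per order
queryCount = 0;
orders.map(o => fetchUser(o.userId));
console.log(queryCount); // 3

// ✅ Batched: one query for all distinct user ids
queryCount = 0;
const byId = fetchUsersByIds([...new Set(orders.map(o => o.userId))]);
console.log(queryCount); // 1
```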
2. **Caching**
   - Implement Redis or similar
   - Cache expensive operations
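A minimal TTL cache sketch shows the "cache expensive operations" idea; Redis or similar replaces this in production, and the injected clock exists only to make expiry testable.

```typescript
// Minimal in-memory TTL cache; Redis or similar in production.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= this.now()) { // expired: evict and miss
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
}

// Usage with a fake clock to show expiry:
let t = 0;
const cache = new TtlCache<string>(1000, () => t);
cache.set("user:1", "Ada");
console.log(cache.get("user:1")); // "Ada"
t = 1001;
console.log(cache.get("user:1")); // undefined (expired)
```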
3. **Load Testing**
   - Use k6, Artillery, or JMeter
   - Establish performance baselines
Success Criteria:
- ✅ Structured logging implemented
- ✅ Key metrics tracked
- ✅ Performance baselines established
- ✅ Caching strategy in place
- ✅ Database optimized
**Phase 4 Checklist:**

- Five-Try Rule implemented
- Context management active
- AI workflows documented
- Structured logging in place
- Metrics collection automated
- Performance baselines established
- Caching implemented
- Load testing completed
- Observability dashboard created
Track these metrics before and after Phase 4:

- AI development velocity: Story points/sprint
- Log coverage: % of critical paths logging
- P95 response time: ms (Target: < 500ms)
- Cache hit rate: % (Target: > 80%)
- Error rate: % (Target: < 1%)
After completing the 8-week rollout, maintain these practices:
- Code Reviews: All PRs reviewed within 24 hours
- Test Runs: Full test suite on every PR
- Security Scans: Automated on every commit
- Retrospectives: Review what's working, what's not
- Tech Debt Review: Prioritize top technical debt items
- Metrics Review: Check quality and performance metrics
- Architecture Review: Ensure architecture still meets needs
- Security Audit: Review security posture
- Dependency Updates: Update and test dependencies
- Anti-Pattern Review: Update `ANTI_PATTERNS.md`
- Maturity Assessment: Reassess project maturity level
- Practice Review: Add/remove practices as needed
Success metrics to aim for by the end of the rollout:

- ✅ 95%+ commits follow conventional commits
- ✅ < 24 hour PR review time
- ✅ < 10% build failures
- ✅ 20% increase in test count
- ✅ 60%+ test coverage
- ✅ < 3% code duplication
- ✅ < 10 average cyclomatic complexity
- ✅ < 5% technical debt ratio
- ✅ 0 high/critical security vulnerabilities
- ✅ 100% API endpoints validated
- ✅ 0 secrets in source code
- ✅ Architecture documentation complete
- ✅ Five-Try Rule documented
- ✅ < 500ms P95 response time
- ✅ > 80% cache hit rate
- ✅ < 1% error rate
Symptom: "This is too much process! It slows us down."
Solution:
- Start with high-value, low-effort practices
- Show metrics improvement
- Automate everything possible
- Celebrate wins
Symptom: Too many tools, too much configuration.
Solution:
- Use integrated platforms (GitHub Actions, GitLab CI)
- Start with basics, add tools incrementally
- Document tool purposes clearly
- Provide training
Symptom: Hard to write tests for legacy code.
Solution:
- Start with new code (100% coverage requirement)
- Add tests when touching legacy code
- Use characterization tests for legacy
- Incremental improvement (5% per sprint)
Reference: base/refactoring-patterns.md
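A characterization test pins down what the legacy code currently does, quirks included, before you refactor it. The function here is a made-up stand-in for real legacy code.

```typescript
// Hypothetical legacy function with quirky but load-bearing behavior.
function legacyFormatName(first: string, last: string): string {
  return (last + ", " + first).toUpperCase();
}

// Characterization test: assert current behavior, not ideal behavior.
// If a refactor changes this output, the test catches it immediately.
console.log(legacyFormatName("Ada", "Lovelace")); // "LOVELACE, ADA"
```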
Symptom: Team accepts AI suggestions without understanding.
Solution:
- Require human review of all AI code
- Enforce Five-Try Rule with tests
- Code review checklist includes "I understand this code"
- Training on AI limitations
Reference: base/ai-assisted-development.md
**MVP/POC Projects**

Focus on:
- Phase 1 (Foundation)
- Basic testing (40% coverage)
- Minimal security (secrets, basic validation)
Skip/Defer:
- Advanced architecture patterns
- Comprehensive observability
- Performance optimization
Reference: base/project-maturity-levels.md
**Pre-Production Projects**

Focus on:
- Phases 1-3 (Foundation, Quality, Architecture)
- 60%+ test coverage
- Security hardening
- Basic observability
Skip/Defer:
- Advanced AI practices
- Complex optimization
**Production Projects**

Implement All Phases:
- 80%+ test coverage
- Full security compliance
- Comprehensive observability
- All advanced practices
- `PRACTICE_CROSSREFERENCE.md` - Practice-to-file mapping
- `ANTI_PATTERNS.md` - Common anti-patterns
- `SUCCESS_METRICS.md` - Detailed metrics definitions
- `base/project-maturity-levels.md` - Maturity framework
- All `base/*.md` files - Detailed practice guidelines
This 8-week rollout plan provides a structured approach to adopting best practices. Remember:
- Be Pragmatic: Adapt the timeline and practices to your context
- Measure Success: Track metrics to show improvement
- Iterate: Review and adjust based on team feedback
- Automate: Use tools and AI assistants to reduce manual effort
- Celebrate: Recognize team achievements along the way
Questions or feedback? Update this guide based on your experience!