Automated monitoring, scraping, analysis, and notification system for trading reports delivered via email.
- Automated Email Monitoring: Webhook-based email monitoring with security validation (IP whitelisting, rate limiting, signature verification)
- Browser Automation: Playwright-based scraping with login handling and dynamic content extraction
- AI Analysis: Powered by Claude (Anthropic) or Groq for intelligent report summarization
- Unified Multi-Channel Notifications: Extensible notification service with email (SMTP) and Telegram (Bot API)
- Reliable Delivery: Exponential backoff retry, per-channel rate limiting, and complete audit trails
- Persistent Storage: PostgreSQL database with full audit trail and append-only delivery logs
- Scalable Architecture: Microservices design with Docker containers
- Visual Workflow: N8N for easy workflow management and modification
- Health Monitoring & Alerting: Continuous service monitoring with automatic CRITICAL error detection and batched email alerts
- Monitoring: Built-in health checks and optional Prometheus/Grafana stack
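The "exponential backoff retry" mentioned above can be sketched as a simple delay schedule. This is an illustrative sketch only: the base delay, cap, and full-jitter strategy are assumptions, not the notification service's actual configuration.

```python
import random

def backoff_delays(attempts: int, base: float = 1.0, cap: float = 60.0) -> list[float]:
    """Exponential backoff schedule with full jitter.

    base, cap, and the jitter strategy are illustrative assumptions,
    not the service's real settings.
    """
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))  # 1, 2, 4, 8, ... capped at `cap`
        delays.append(random.uniform(0, ceiling))  # full jitter spreads out retries
    return delays
```

Full jitter (random delay up to the exponential ceiling) avoids many failed deliveries retrying in lockstep against the same SMTP server or Bot API endpoint.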
Email → Webhook Service → N8N → [Playwright Scraper] → PostgreSQL
        (Security checks)              ↓
        (IP validation)         [AI Analyzer] → Analysis Results → PostgreSQL
        (Rate limiting)                ↓
        (Signature valid)  [Unified Notification Service]
                                       ↓
                           ┌───────────┴───────────┐
                     Email (SMTP)          Telegram (Bot API)
                     (retry+rate)          (retry+rate)
                           └───────────┬───────────┘
                              Delivery Audit Log

Health Monitoring (continuous, every 60 seconds)
  ├─> Checks all 5 services
  ├─> Error detection & batched alerts
  └─> Sends alerts via Notification Service
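The rate-limiting check in the pipeline above can be approximated with a token bucket. This is a minimal sketch; the capacity and refill values are illustrative assumptions, not the webhook service's real settings (which may also be Redis-backed and keyed per sender IP).

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, one plausible way to implement
    the webhook's rate limiting. Capacity/refill values are illustrative."""

    def __init__(self, capacity: int = 10, refill_per_sec: float = 1.0):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens proportionally to elapsed time, capped at capacity
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A bucket per sender (e.g. per source IP) lets bursts through up to `capacity` while holding sustained traffic to `refill_per_sec` requests per second.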
- N8N - Workflow orchestration and automation
- Email Webhook Service - Email event processing with security validation
- IP whitelisting and ban management
- Rate limiting and signature validation
- Email parsing and extraction
- Playwright Scraper - Browser automation and web scraping
- AI Analyzer - Report analysis using Claude/Groq
- Unified Notification Service - Multi-channel delivery with retry logic and rate limiting
- Email channel (SMTP with HTML templates)
- Telegram channel (Bot API with message splitting)
- Extensible for future channels (SMS, Slack, Discord)
- Health Monitoring Service - Continuous service health checks and alerting
- Background health check loop (every 60 seconds)
- Error log collection and analysis
- Alert generation with batching
- Automatic email alerts on critical issues
- PostgreSQL - Data persistence with audit trails
- Redis - Caching and rate limiting (optional)
- Traefik - Reverse proxy with SSL (optional)
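The Email Webhook Service's signature validation can be sketched for a Mailgun-style webhook, which signs each delivery with an HMAC-SHA256 over `timestamp + token`. The function below is an assumption-labelled sketch, not the service's actual code; adapt the signed fields for other providers.

```python
import hashlib
import hmac

def verify_signature(signing_key: str, timestamp: str, token: str, signature: str) -> bool:
    """Verify a Mailgun-style webhook signature (HMAC-SHA256 over
    timestamp + token). Field names follow Mailgun's documented scheme."""
    expected = hmac.new(
        key=signing_key.encode(),
        msg=(timestamp + token).encode(),
        digestmod=hashlib.sha256,
    ).hexdigest()
    # Constant-time comparison prevents timing attacks
    return hmac.compare_digest(expected, signature)
```

Rejecting stale timestamps (e.g. older than a few minutes) on top of this check also blocks replay attacks.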
- Docker 27.3+ and Docker Compose 2.29+
- 8GB RAM minimum (16GB recommended)
- 50GB disk space
- Domain name with DNS configured (for webhooks)
- API Keys:
- Anthropic API key OR Groq API key
- Telegram Bot Token
- SMTP credentials
git clone <your-repo>
cd trading-report-automation
make setup

Edit .env file with your credentials:
# Essential configuration
POSTGRES_PASSWORD=your_secure_password
ANTHROPIC_API_KEY=sk-ant-...
TELEGRAM_BOT_TOKEN=123456:ABC-DEF...
TELEGRAM_CHAT_ID=your_chat_id
# Trading portal credentials
TRADING_PORTAL_USERNAME=your_username
TRADING_PORTAL_PASSWORD=your_password
# Email webhook
WEBHOOK_URL=https://trading-hook.yourdomain.com
# SMTP for sending emails
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=your-email@gmail.com
SMTP_PASSWORD=your_app_password

# Development mode (with logs)
make dev
# Production mode (detached)
make prod

Check that all services are healthy:

make health

Expected output:
✓ Playwright Scraper: UP
✓ AI Analyzer: UP
✓ Notification Service: UP
✓ N8N: UP
✓ PostgreSQL: UP
- Sign up at Mailgun
- Add your domain and verify DNS records
- Create a route:
  - Filter: match_recipient("reports@yourdomain.com")
  - Action: forward("https://trading-hook.yourdomain.com/webhook")
- Forward your trading report emails to reports@yourdomain.com
- Create SendGrid account
- Configure Inbound Parse
- Set webhook URL: https://trading-hook.yourdomain.com/webhook
If webhooks aren't available, N8N can poll your email inbox:
- Open N8N: http://localhost:5678
- Import workflow: services/n8n/workflows/email-monitor.json
- Configure IMAP trigger with your email credentials
# 1. Create bot
# Talk to @BotFather on Telegram
# Send: /newbot
# Follow instructions to get token
# 2. Get your Chat ID
# Talk to @userinfobot on Telegram
# It will show your chat ID
# 3. Add to .env
TELEGRAM_BOT_TOKEN=your_bot_token
TELEGRAM_CHAT_ID=your_chat_id

- Point your domain to your server IP
- Update .env:
WEBHOOK_URL=https://trading-hook.yourdomain.com
ACME_EMAIL=your@email.com
- Traefik will automatically obtain a Let's Encrypt SSL certificate
See infrastructure/nginx/README.md for Nginx setup instructions.
- Access N8N: http://localhost:5678
- Login with credentials from .env
- Go to Workflows → Import from File
- Import all workflows from services/n8n/workflows/
- Activate workflows
| Service | URL | Purpose |
|---|---|---|
| N8N | http://localhost:5678 | Workflow orchestration |
| AI Analyzer API | http://localhost:8000 | Report analysis service |
| Email Webhook | http://localhost:8001 | Email processing & validation |
| Playwright Scraper API | http://localhost:8002 | Browser automation & scraping |
| Health Monitoring API | http://localhost:8003 | Health checks & error logging |
| Notification API | http://localhost:8004 | Multi-channel notification delivery |
| PostgreSQL | localhost:5432 | Data persistence |
See .env.example for complete list of configuration options.
Key variables:
- ANTHROPIC_API_KEY or GROQ_API_KEY - Choose your AI provider
- TRADING_PORTAL_USERNAME / TRADING_PORTAL_PASSWORD - Your trading portal credentials
- TELEGRAM_BOT_TOKEN - From @BotFather
- SMTP_* - Email sending configuration
You can manually trigger report processing:
# Test webhook
curl -X POST http://localhost:5678/webhook/test \
-H "Content-Type: application/json" \
-d '{
"from": "test@example.com",
"subject": "Trading Report",
"body": "Report available at: https://portal.example.com/report/123"
}'

Each service has interactive API docs:
- AI Analyzer: http://localhost:8000/docs
- Email Webhook: http://localhost:8001/docs
- Playwright Scraper: http://localhost:8002/docs
- Health Monitoring: http://localhost:8003/docs
- Notifications: http://localhost:8004/docs
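The manual webhook test above embeds the report link in the email body; the webhook's extraction step might look like the hypothetical helper below. The regex and function name are illustrative assumptions; the real service may use stricter per-portal patterns.

```python
import re

# Broad URL pattern for plain-text bodies; illustrative, not the service's actual regex
REPORT_URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def extract_report_urls(body: str) -> list[str]:
    """Pull candidate report URLs out of a plain-text email body."""
    return REPORT_URL_RE.findall(body)
```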
Unified Notification Service Endpoints:
- POST /api/v1/notify - Send notification to one or more channels
- POST /api/v1/notify/batch - Send batch notifications
- GET /api/v1/delivery-status/{id} - Query delivery status
- GET /api/v1/delivery-stats - Aggregate delivery statistics
- GET /api/v1/channels - List enabled channels and status
- GET /health - Service health check
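The Telegram channel's "message splitting" (long report summaries exceed the Bot API's 4096-character limit per message) can be sketched as below. This is an assumption-labelled sketch of the behavior, not the channel's exact implementation.

```python
TELEGRAM_MAX_LEN = 4096  # Telegram Bot API limit per sendMessage call

def split_message(text: str, limit: int = TELEGRAM_MAX_LEN) -> list[str]:
    """Split long text into Telegram-sized chunks, preferring
    newline boundaries so reports break between lines, not mid-word."""
    chunks = []
    while len(text) > limit:
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit  # no newline found: hard split at the limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```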
# Run all tests
make test
# Run specific service tests
cd services/playwright-scraper && pytest
# Run with coverage
make test-cov
# Integration tests
make test-integration

# Check all services
make health
# View logs
make logs
# Service-specific logs
make logs-service SERVICE=playwright-scraper

Start the monitoring stack:

make monitor

Access dashboards:
- Prometheus: http://localhost:9090
- Grafana: http://localhost:3000 (admin/admin)
The system includes a dedicated health monitoring service that continuously checks all services and generates alerts:
# View aggregate health status of all services
curl http://localhost:8003/api/v1/health/status
# Get recent CRITICAL errors
curl 'http://localhost:8003/api/v1/health/errors?severity=CRITICAL&hours=24'
# Trigger manual health check for a specific service
curl -X POST http://localhost:8003/api/v1/health/check/ai-analyzer
# View health monitoring logs
make logs-service SERVICE=health-monitoring

Features:
- Continuous service health monitoring (every 60 seconds)
- Automatic alert generation with batching to prevent alert storms
- Persistent health check history and error logs
- Configurable data retention policies
- CRITICAL error detection with automatic email alerts
See HEALTH_MONITORING.md in /docs for detailed operations manual.
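The alert batching mentioned above can be sketched as grouping raw error records into one alert per service and time window. The record fields (`service`, `ts`, `message`) and the 5-minute window are assumptions for illustration, not the monitoring service's actual schema.

```python
from collections import defaultdict

def batch_alerts(errors: list[dict], window_sec: int = 300) -> list[dict]:
    """Collapse raw error records into one alert per (service, window)
    to prevent alert storms. Fields and window size are illustrative."""
    buckets: dict[tuple, list[dict]] = defaultdict(list)
    for err in errors:
        window = int(err["ts"]) // window_sec  # bucket errors into fixed windows
        buckets[(err["service"], window)].append(err)
    return [
        {"service": service, "count": len(errs), "first_message": errs[0]["message"]}
        for (service, _window), errs in sorted(buckets.items())
    ]
```

Sending one email per batch rather than one per error keeps a flapping service from flooding the operator's inbox.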
# Format code
make format
# Run linting
make lint
# Type checking
make typecheck
# Fix auto-fixable issues
make fix

# Start with hot reload
make dev
# Open shell in a service
make shell SERVICE=playwright-scraper
# Database shell
make db-shell

- Modify service code in services/<service>/src/
- Code changes auto-reload in development mode
- Add tests in services/<service>/tests/
- Run make test to verify
- Update N8N workflows if needed
- Never commit .env file - Use .env.example as template
- Rotate API keys regularly - Update in .env and restart services
- Use strong passwords - For database and N8N
- Enable firewall - Only expose necessary ports
- Keep services updated - Run docker-compose pull regularly
- Monitor logs - Check for suspicious activity
- Backup database - Run make db-backup daily
# Backup database
make db-backup
# Backup N8N workflows
docker cp trading-n8n:/home/node/.n8n/workflows ./backups/n8n-workflows-$(date +%Y%m%d)

# Restore database
make db-restore
# Choose backup file when prompted

# Check logs
make logs
# Verify .env is configured
cat .env
# Clean and restart
make clean
make dev-build

# Check service logs
make logs-service SERVICE=playwright-scraper
# Test manually
curl -X POST http://localhost:8002/api/v1/scrape \
-H "X-API-Key: your_api_key" \
-H "Content-Type: application/json" \
-d '{
"url": "https://portal.example.com/report/123",
"credentials": {
"username": "your_username",
"password": "your_password"
}
}'

# Verify API key
echo $ANTHROPIC_API_KEY
# Check service logs
make logs-service SERVICE=ai-analyzer
# Test API connectivity
curl http://localhost:8000/health

# Check notification service
make logs-service SERVICE=notification-service
# Test notification service health
curl http://localhost:8004/health

# Check channel status
curl http://localhost:8004/api/v1/channels

# Query delivery status of a notification
curl http://localhost:8004/api/v1/delivery-status/{notification_id}

# View delivery statistics
curl http://localhost:8004/api/v1/delivery-stats
# Verify Telegram bot
curl "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/getMe"
# Test SMTP connectivity
telnet ${SMTP_HOST} ${SMTP_PORT}
# Check database for failed notifications
make db-shell
# Then run: SELECT * FROM notifications WHERE status = 'failed' ORDER BY created_at DESC LIMIT 10;

trading-report/
├── agent-os/                 # Standards & product documentation
│   ├── product/              # Product mission, roadmap, tech stack
│   ├── specs/                # Implementation specs & tasks
│   └── standards/            # Code standards (backend, testing)
├── database/                 # Database migrations & schemas
│   ├── migrations/           # 13 sequential SQL migrations
│   ├── maintenance/          # Database maintenance scripts
│   └── schemas/
├── docs/                     # Project documentation
│   ├── CLAUDE.md             # AI assistant instructions
│   ├── DOCUMENTATION.md      # Master documentation index
│   ├── HEALTH_MONITORING.md  # Operations manual
│   └── USERS_MANUAL.md       # User guide
├── services/                 # Microservices
│   ├── email-webhook/        # Email processing with security validation
│   ├── playwright-scraper/   # Browser automation & scraping
│   ├── ai-analyzer/          # Report analysis
│   ├── notification-service/ # Multi-channel notifications
│   ├── health-monitoring/    # Health checks & alerting
│   └── credential-service/   # Credential storage (optional)
├── scripts/                  # Utility scripts
├── docker-compose.yml        # Docker orchestration
├── Makefile                  # Development commands (40+ targets)
├── README.md                 # This file
└── .env.example              # Configuration template
- Fork the repository
- Create feature branch: git checkout -b feature/amazing-feature
- Make changes and test: make test
- Commit: git commit -m 'Add amazing feature'
- Push: git push origin feature/amazing-feature
- Open Pull Request
This project is licensed under the MIT License.
- Issues: GitHub Issues
- Documentation: /docs folder
- Email: your@email.com
- Email webhook service & email parsing
- Secure credential storage & management
- Playwright browser automation
- Report content extraction & scraping
- PostgreSQL schema & data persistence
- AI report analysis with Groq/Anthropic API
- Unified notification service (email + Telegram with retry & rate limiting)
- Email notification delivery with SMTP integration
- Telegram notification delivery with Bot API
- Webhook endpoint security (IP whitelisting, signature validation, rate limiting, IP bans)
- Health monitoring & error logging (continuous health checks, alert batching, automatic alerts)
- N8N workflow integration & orchestration
- Comprehensive documentation & operator manuals
- Support for multiple trading portals
- Additional notification channels (SMS, Slack, Discord)
- Advanced pattern recognition in reports
- Historical trend analysis
- Mobile app for notifications
- Multi-user support
- Custom AI model training
- Real-time dashboard
- Use Redis for caching - Already configured
- Limit concurrent scrapes - Set MAX_CONCURRENT_SCRAPES=3 in .env
- Optimize images - Set lower resolution in scraper config
- Prune old data - Set up cron job to delete old reports
- Scale services - Increase replicas in docker-compose.yml
# Example: Scale playwright service
playwright-scraper:
deploy:
replicas: 3

Your Name - @yourtwitter

Project Link: https://github.com/yourusername/trading-report-automation