A complete AI-powered job matching system featuring semantic search, knowledge graphs, and intelligent job recommendations.
- Semantic Job Search: Natural language queries with AI understanding
- Resume Matching: Upload resumes and get personalized job recommendations
- Knowledge Graph: Visualize skill relationships and job connections
- AI-Powered Reranking: Intelligent job scoring with detailed explanations
- Multi-Database Architecture: Elasticsearch for search, Neo4j for relationships
Before you begin, ensure you have:
- Docker Desktop installed and running
- 8GB+ RAM available for Docker
- 10GB+ free disk space
- Internet connection for pulling Docker images
```bash
# 1. Make the installer executable
chmod +x install.sh

# 2. Run the installer
./install.sh
```

Windows:

```powershell
# Run in PowerShell
.\install.ps1
```

When you run the installer for the first time:
- It will create a `.env` file from `.env.example`
- Optional: Edit the `.env` file to add:
  - `ANTHROPIC_API_KEY`: For AI-powered reranking explanations (optional)

Example `.env`:

```bash
ANTHROPIC_API_KEY=sk-ant-xxx  # Optional - for AI features
```

Note: Docker images are pre-configured:

- Backend: `mvyas7/job-hunt-ai-backend:latest`
- Frontend: `mvyas7/job-hunt-ai-frontend:latest`
No additional configuration needed!
The installation script will:
- Check that Docker is installed and running
- Pull pre-built images from Docker Hub
- Start all services (Elasticsearch, Neo4j, Backend, Frontend)
- Initialize demo data
- Display access URLs
Installation takes 2-3 minutes depending on your internet speed.
After installation, access the application at:
| Service | URL | Description |
|---|---|---|
| Frontend | http://localhost:3001 | Main web interface |
| Backend API | http://localhost:8000 | REST API endpoints |
| API Documentation | http://localhost:8000/docs | Interactive API docs (Swagger) |
| Elasticsearch | http://localhost:9200 | Search engine |
| Neo4j Browser | http://localhost:7474 | Graph database UI |
Username: neo4j
Password: password
Watch a complete demonstration of the JobMatch AI platform in action:
The video showcases all key features including semantic search, resume upload, AI-powered matching, and query-based reranking.
The landing page welcomes users with a clean, modern interface showcasing the platform's capabilities. Users can choose to start searching immediately or upload their resume for personalized recommendations. The dashboard displays key metrics including 10,000+ jobs indexed, 5,000+ skills mapped, 500+ companies, and a 95% success rate.
The search interface provides multiple dimensions for job discovery. Users can enter natural language queries with intelligent search suggestions based on different search criteria:
- Role & Location: Target specific positions and geographic preferences
- Skills & Technologies: Search by technical skills, frameworks, and tools
- Remote Work & Benefits: Filter by work arrangements and company perks
- Visa Sponsorship: Find opportunities with H1B support and relocation assistance
- Industry & Domain: Focus on specific sectors like fintech or healthcare
- Salary & Compensation: Set salary expectations and equity preferences
When searching without a resume, the AI provides detailed explanations for why each job appears in the results. The system analyzes multiple factors:
- Skills Match: Direct and indirect skill alignments with the query
- Experience Level: How the job's requirements match the search criteria
- Location: Geographic fit based on preferences
- Salary: Compensation alignment with market expectations
- Job Description Fit: Semantic analysis of role requirements
- Company & Benefits: Company culture and benefits matching
Each factor is weighted and explained, helping users understand the relevance score.
After uploading a resume, the platform provides personalized job recommendations with comprehensive match analysis:
- Overall Match Score: Calculated based on resume content and job requirements
- Skills Match (70% weight): Detailed breakdown of matching technical and soft skills
- Experience Level (60% weight): Alignment between candidate experience and job expectations
- Location (20% weight): Geographic preferences and remote work options
- Salary (50% weight): Compensation expectations vs. job offerings
- Company & Benefits (40% weight): Cultural fit and benefits alignment
The AI explains not just what matches, but why, providing transparency into the recommendation engine.
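As a rough sketch of how these weighted factors could combine into an overall score (the function, its normalization, and the sample scores are illustrative assumptions; the document only lists the weights):

```python
# Illustrative sketch of a weighted match score; NOT the exact backend formula.
# Each factor score is assumed to be in [0, 1]; the weights mirror the
# percentages listed above.
FACTOR_WEIGHTS = {
    "skills": 0.70,
    "experience": 0.60,
    "location": 0.20,
    "salary": 0.50,
    "company_benefits": 0.40,
}

def overall_match(factor_scores: dict[str, float]) -> float:
    """Weighted average of factor scores, normalized to stay in [0, 1]."""
    total_weight = sum(FACTOR_WEIGHTS.values())
    weighted = sum(
        FACTOR_WEIGHTS[factor] * factor_scores.get(factor, 0.0)
        for factor in FACTOR_WEIGHTS
    )
    return weighted / total_weight

scores = {"skills": 0.9, "experience": 0.8, "location": 1.0,
          "salary": 0.6, "company_benefits": 0.7}
score = overall_match(scores)  # roughly 0.79 for these sample factor scores
```

Because the weights are normalized, a candidate who scores 1.0 on every factor gets an overall match of exactly 1.0.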
The smart search feature allows users to refine results using keyword-based reranking. After an initial search, users can use the dropdown to select specific keywords from their query (like "backend developer", "microservices architect", "PostgreSQL", "Django", etc.) to dynamically reorder results. This gives users fine-grained control over prioritization while maintaining the AI-powered relevance scoring. The system shows 9490+ jobs with match percentages, and users can further filter by work location, experience level, employment type, and working schedule.
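One way to picture the keyword reranking step is a score boost for results whose text mentions the selected keyword, followed by a re-sort. This is a hypothetical sketch (function name, boost value, and sample data are invented for illustration); the real service keeps the AI relevance scoring in the mix as described above.

```python
# Hypothetical sketch of keyword-based reranking: jobs whose description
# mentions the selected keyword get a fixed score boost, then re-sort.
def rerank_by_keyword(results: list[dict], keyword: str, boost: float = 0.2) -> list[dict]:
    def boosted(job: dict) -> float:
        score = job["score"]
        if keyword.lower() in job["description"].lower():
            score += boost
        return score
    return sorted(results, key=boosted, reverse=True)

jobs = [
    {"title": "Backend Developer", "description": "Django and PostgreSQL", "score": 0.70},
    {"title": "Frontend Developer", "description": "React and TypeScript", "score": 0.75},
]
reranked = rerank_by_keyword(jobs, "Django")
# The Django job (0.70 + 0.20 = 0.90) now outranks the React job (0.75).
```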
Try these example searches:
- "software engineer python machine learning"
- "data scientist with NLP experience"
- "frontend developer react typescript"
- "devops kubernetes cloud"
The system understands natural language and semantic meaning, not just keywords.
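Semantic matching of this kind typically compares embedding vectors with cosine similarity. A minimal sketch, using toy 3-dimensional vectors standing in for real SBERT embeddings (which have hundreds of dimensions; the comparison logic is the same):

```python
import math

# Cosine similarity between two embedding vectors: 1.0 means identical
# direction, 0.0 means unrelated.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.9, 0.1, 0.3]      # toy embedding for "python machine learning"
job_ml = [0.8, 0.2, 0.4]     # toy embedding for an ML engineer posting
job_sales = [0.1, 0.9, 0.2]  # toy embedding for a sales posting
# The ML posting scores higher even if it never repeats the query's exact words.
assert cosine_similarity(query, job_ml) > cosine_similarity(query, job_sales)
```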
- Click "Upload Resume" in the frontend
- Select a PDF or DOCX file
- Get personalized job recommendations
- See detailed match explanations
- Go to http://localhost:7474
- Login with Neo4j credentials
- Run queries to explore:
```cypher
// View all jobs
MATCH (j:Job) RETURN j LIMIT 25

// View job-skill relationships
MATCH (j:Job)-[:REQUIRES]->(s:Skill) RETURN j, s LIMIT 50

// Find similar jobs
MATCH (j1:Job)-[:SIMILAR_TO]->(j2:Job) RETURN j1, j2 LIMIT 25
```
Visit http://localhost:8000/docs to:
- Explore all API endpoints
- Try API calls interactively
- See request/response schemas
- Test with sample data
Edit `.env` to customize:

```bash
# Optional: Enable AI features (Anthropic Claude API)
ANTHROPIC_API_KEY=sk-ant-xxx

# Database credentials (default for demo)
NEO4J_USER=neo4j
NEO4J_PASSWORD=password
```

Note: Docker images are pre-configured in docker-compose.yml:

- `mvyas7/job-hunt-ai-backend:v1.2.0`
- `mvyas7/job-hunt-ai-frontend:v1.3.1`
If you have port conflicts, edit docker-compose.yml:

```yaml
ports:
  - "3001:8080"  # Change 3001 to another port
  - "8000:8000"  # Change 8000 to another port
```

View logs:

```bash
# All services
docker compose logs -f

# Specific service
docker compose logs -f backend
docker compose logs -f frontend
```

Stop all services:

```bash
docker compose down
```

Restart services:

```bash
docker compose restart

# Or restart specific service
docker compose restart backend
```

Uninstall:

```bash
# Linux/Mac
./uninstall.sh

# Windows
.\uninstall.ps1

# Or manually
docker compose down -v
```

Update to the latest images:

```bash
# Pull latest images
docker compose pull

# Restart with new images
docker compose up -d
```

Check Docker resources:
- Open Docker Desktop
- Go to Settings → Resources
- Increase Memory to at least 8GB
- Apply & Restart
Check logs:

```bash
docker compose logs
```

Edit docker-compose.yml and change the port numbers:

```yaml
# Change this:
ports:
  - "3001:8080"

# To this (example):
ports:
  - "3002:8080"
```

Verify images exist:

- Check images on Docker Hub:
- Try manual pull:

```bash
docker pull mvyas7/job-hunt-ai-backend:latest
docker pull mvyas7/job-hunt-ai-frontend:latest
```
Wait longer - the backend needs time to:
- Start the Python application
- Connect to Elasticsearch
- Connect to Neo4j
- Download NLP models (first run)
This can take 2-3 minutes on first startup.
Check the backend is running:

```bash
curl http://localhost:8000/health
```

Check the Docker network:

```bash
docker network inspect jobmatch_network
```

Clean up Docker:

```bash
# Remove unused containers and images
docker system prune -a

# Remove volumes (WARNING: deletes data)
docker volume prune
```

Minimum:

- Docker Desktop
- 8GB RAM
- 10GB disk space
- 2 CPU cores

Recommended:

- 16GB RAM
- 20GB disk space
- 4 CPU cores
- SSD storage
The system architecture consists of three main components:
1. Data Ingestion Pipeline (Offline)
   - Ingests jobs from external sources (Rise, LinkedIn) and user queries/resumes
   - Processes data through the NLP Service using SBERT embeddings and NER
   - Stores entities and relations in the Neo4j Knowledge Graph
   - Indexes job text in Elasticsearch for fast retrieval

2. Online Search & Ranking
   - Hybrid Search Service: combines three search strategies
     - Graph Traversal: finds jobs via skill relationships in Neo4j
     - Vector Search: semantic similarity matching using embeddings
     - BM25 Retrieval: traditional keyword-based search from Elasticsearch
   - Candidate Fusion: intelligently merges results from all three approaches

3. Explainable Reranking
   - Reranking Service: uses a deterministic utility function to score jobs
   - Computes factor scores based on Skills, Experience, Location, and Salary
   - Generative Layer: Anthropic Claude LLM provides natural language explanations
   - Delivers ranked results with transparent explanations to the frontend
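The candidate fusion step could, for example, use reciprocal rank fusion (RRF) to merge the three ranked lists. This is an illustrative assumption: the document only says results are "intelligently merged", and the actual fusion strategy may differ.

```python
# Sketch of candidate fusion via reciprocal rank fusion (RRF): each list
# contributes 1 / (k + rank) per job, and jobs are sorted by total score.
# The constant k=60 is the conventional RRF default.
def rrf_fuse(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in ranked_lists:
        for rank, job_id in enumerate(ranking, start=1):
            scores[job_id] = scores.get(job_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

graph_hits = ["job3", "job1", "job7"]   # Neo4j graph traversal
vector_hits = ["job1", "job3", "job9"]  # embedding similarity
bm25_hits = ["job1", "job5", "job3"]    # Elasticsearch BM25
fused = rrf_fuse([graph_hits, vector_hits, bm25_hits])
# job1 and job3 appear in all three lists, so they rise to the top.
```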
```
┌─────────────┐
│  Frontend   │  React + TypeScript
│   (Nginx)   │  Port 3001
└──────┬──────┘
       │
       ▼
┌─────────────┐
│   Backend   │  FastAPI + Python
│  (uvicorn)  │  Port 8000
└──────┬──────┘
       │
       ├────────┐
       ▼        ▼
┌──────────┐  ┌─────┐
│ Elastic  │  │Neo4j│
│  search  │  │Graph│
│  :9200   │  │:7474│
└──────────┘  └─────┘
```
Frontend:
- React 18
- TypeScript
- Material-UI
- Axios
- Nginx
Backend:
- Python 3.11
- FastAPI
- Elasticsearch 8.11
- Neo4j 5.14
- spaCy
- Sentence Transformers
- PyTorch
The backend provides a REST API endpoint to ingest jobs from CSV files. This endpoint processes the CSV and imports jobs into both Elasticsearch and Neo4j databases.
Endpoint: `POST /api/v1/csv/ingest-csv`

Parameters:

- `file`: CSV file to upload (required)
- `index_to_elasticsearch`: Index jobs to Elasticsearch (default: `true`)
- `create_neo4j_nodes`: Create Neo4j nodes (default: `true`)
- `process_with_nlp`: Process jobs with NLP (default: `true`)
- `batch_size`: Number of jobs to process per batch (default: `100`)
Example using curl:

```bash
# Basic ingestion with default parameters
curl -X POST "http://localhost:8000/api/v1/csv/ingest-csv" \
  -H "accept: application/json" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@jobs/SDE-Nov21.csv"

# Advanced ingestion with custom parameters
curl -X POST "http://localhost:8000/api/v1/csv/ingest-csv" \
  -H "accept: application/json" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@jobs/SDE-Nov21.csv" \
  -F "index_to_elasticsearch=true" \
  -F "create_neo4j_nodes=true" \
  -F "process_with_nlp=true" \
  -F "batch_size=50"
```

Response: The API returns a JSON response with ingestion statistics:

```json
{
  "status": "success",
  "jobs_processed": 1234,
  "elasticsearch_indexed": 1234,
  "neo4j_nodes_created": 1234,
  "processing_time": "45.2s"
}
```

Note: The ingestion process may take several minutes depending on the CSV size and whether NLP processing is enabled. For large CSV files (10,000+ jobs), consider using a smaller batch size (e.g., 50) to avoid memory issues.
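For very large files you could also split the CSV client-side before uploading, so each request stays small. This is a hypothetical helper, not part of the shipped tooling:

```python
import csv
import io

# Hypothetical client-side helper: split a jobs CSV into smaller chunks,
# each repeating the header row, so every upload stays a valid CSV.
def split_csv(text: str, rows_per_chunk: int) -> list[str]:
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    rows = list(reader)
    chunks = []
    for start in range(0, len(rows), rows_per_chunk):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(header)
        writer.writerows(rows[start:start + rows_per_chunk])
        chunks.append(buf.getvalue())
    return chunks

sample = "title,company\nDev,Acme\nSRE,Globex\nPM,Initech\n"
chunks = split_csv(sample, rows_per_chunk=2)
# 3 data rows at 2 rows per chunk -> 2 chunks, each starting with the header.
```

Each chunk can then be sent as a separate `ingest-csv` request.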
To load your own job data:
- Prepare a JSON file with job listings
- Copy it to the backend container:

  ```bash
  docker cp jobs.json jobmatch_backend:/app/
  ```

- Load the data:

  ```bash
  docker exec jobmatch_backend python -m app.scripts.load_jobs jobs.json
  ```
Search jobs:

```bash
curl "http://localhost:8000/api/v1/search?query=python+developer"
```

Get job by ID:

```bash
curl "http://localhost:8000/api/v1/jobs/1"
```

Upload resume:

```bash
curl -X POST "http://localhost:8000/api/v1/resume/analyze" \
  -F "file=@resume.pdf"
```

- Quick Start Guide - Detailed usage examples
- API Documentation - When running
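The search call can also be built from Python. A minimal sketch that constructs the same request URL as the curl example (the `search_url` helper and `BASE_URL` constant are assumptions for a default local install; only URL construction is shown, not the HTTP call):

```python
from urllib.parse import urlencode

# Assumed base URL for a default local install (see the service table above).
BASE_URL = "http://localhost:8000/api/v1"

def search_url(query: str, **filters: str) -> str:
    """Build the search endpoint URL with URL-encoded query parameters."""
    params = {"query": query, **filters}
    return f"{BASE_URL}/search?{urlencode(params)}"

url = search_url("python developer")
# -> http://localhost:8000/api/v1/search?query=python+developer
```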
- Docker not installed: Install Docker Desktop
- Port conflicts: Change ports in docker-compose.yml
- Out of memory: Increase Docker memory limit
- Slow startup: First run downloads models (wait 5 minutes)
```bash
# Check all containers
docker ps -a

# Check specific service health
docker exec jobmatch_backend curl http://localhost:8000/health

# View Elasticsearch indices
curl http://localhost:9200/_cat/indices

# Neo4j connection test
docker exec jobmatch_neo4j cypher-shell -u neo4j -p password "RETURN 1"
```

This is a demo package for evaluation purposes:
- Uses default passwords (change in production)
- No HTTPS (add reverse proxy in production)
- No authentication (add in production)
- No rate limiting (add in production)
- Runs on localhost only
Do NOT expose to the internet without security hardening.
This demo package is provided for evaluation purposes.
For issues with this demo package, please check:
- This README
- QUICK_START.md guide
- Docker logs: `docker compose logs`
Enjoy exploring JobMatch AI! 🚀





