A powerful, microservices-based application that leverages local Large Language Models (LLMs) to analyze resumes against job descriptions. It uses Ollama for generative AI analysis and OCR for robust text extraction from PDFs and images.
- Generative AI Analysis: Uses locally hosted LLMs (via Ollama, default: `qwen2.5:7b`) to provide detailed feedback, matching scores, and recommendations.
- Robust OCR: Extracts text from both image-based and text-based PDFs using Tesseract and PyMuPDF.
- Microservices Architecture: Decoupled services communicating efficiently via Apache Kafka.
- Modern UI: React-based dashboard for easy uploading and result visualization.
- Secure: User authentication and secure file storage with MinIO.
The system consists of the following Dockerized services:
- Frontend (`app-frontend`): React application served via Nginx.
- Backend (`app-backend`): Spring Boot application handling API requests, file management, and coordination.
- NLP Service (`nlp-service`): Python service consuming Kafka messages, performing OCR, and interacting with Ollama.
- Ollama (`ollama`): Runs the local LLM inference server.
- Kafka & Zookeeper: Message broker for asynchronous communication between the Backend and the NLP Service.
- MinIO: Object storage for resumes.
- PostgreSQL: Relational database for user data and analysis history.
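The asynchronous hop in this architecture is the Kafka leg between the Backend and the NLP Service. If you want to watch it in action, you can list the topics the broker knows about. This is a sketch: the broker container name, the listener port, and the `kafka-topics.sh` location are assumptions — check your `docker-compose.yml` and Kafka image for the actual values.

```shell
KAFKA_CONTAINER="kafka"    # assumption: adjust to the service name in docker-compose.yml
BOOTSTRAP="localhost:9092" # default Kafka listener; may differ in your setup

if command -v docker >/dev/null 2>&1; then
  # kafka-topics.sh ships with the Kafka distribution; some images keep it
  # under /opt/kafka/bin instead of on PATH.
  docker exec "$KAFKA_CONTAINER" kafka-topics.sh --bootstrap-server "$BOOTSTRAP" --list \
    || echo "could not reach the broker; is the stack running?"
else
  echo "docker not found on PATH"
fi
```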
- Docker and Docker Compose installed.
- RAM: At least 8GB (16GB recommended) available for running the LLM and containers.
- Disk Space: ~10GB free space (for Docker images and the ~4.7GB LLM model).
```shell
git clone <repository-url>
cd AI_Analyser
```

Run the following command to build and start all services:

```shell
docker-compose up -d --build
```

On the very first startup, the `ollama` service needs to download the AI model (`qwen2.5:7b`, approx. 4.7 GB).
IMPORTANT: The analysis features will NOT work until this download completes.
You can check the progress by viewing the Ollama logs:

```shell
docker logs -f resume_ollama
```

Wait until you see a message indicating the download is complete or the server is listening.
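If you prefer to kick off the download explicitly rather than waiting for the first request, a small helper along these lines pulls the model and confirms it is available. A sketch, assuming the container is named `resume_ollama` (as in the log command above) and that the standard `ollama pull` / `ollama list` CLI is available inside it:

```shell
MODEL="qwen2.5:7b"
CONTAINER="resume_ollama"

if command -v docker >/dev/null 2>&1; then
  # Trigger the download (no-op if the model is already present).
  docker exec "$CONTAINER" ollama pull "$MODEL"
  # Confirm the model now shows up in the local model list.
  if docker exec "$CONTAINER" ollama list | grep -q "$MODEL"; then
    echo "$MODEL is ready"
  else
    echo "$MODEL not listed yet; keep watching: docker logs -f $CONTAINER"
  fi
else
  echo "docker not found on PATH; install Docker first"
fi
```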
- Frontend: http://localhost:3000 (or port 80, depending on configuration; check `docker ps`)
- Backend API: http://localhost:8080
- MinIO Console: http://localhost:9001 (user/pass: `minioadmin` / `minioadmin`)
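To sanity-check that the endpoints above are reachable, a loop like the following works. The ports mirror the list above; `curl` is assumed to be installed on the host (swap in `wget -q --spider` if it isn't):

```shell
SERVICES="frontend=http://localhost:3000 backend=http://localhost:8080 minio=http://localhost:9001"

for entry in $SERVICES; do
  name=${entry%%=*}   # text before the first "="
  url=${entry#*=}     # text after the first "="
  if command -v curl >/dev/null 2>&1 && curl -sf -o /dev/null "$url"; then
    echo "$name is up at $url"
  else
    echo "$name not reachable at $url"
  fi
done
```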
- Register/Login: Create an account on the frontend.
- Analyze:
- Upload a Resume (PDF, JPG, PNG).
- Paste the Job Description.
- Click Analyze.
- View Results:
- Compatibility Score: A 0-100 score indicating fit.
- Key Strengths: Skills found in your resume that match the job.
- Missing Skills: Critical gaps identified.
- Recommendation: Personalized advice generated by the AI.
- Analysis Timeout: If the first analysis hangs, it's likely because the model is still downloading. Check `docker logs resume_ollama` and wait.
- Kafka Connection Errors: Services might restart if Kafka isn't ready. Docker Compose handles retries, but you can manually restart a service:

```shell
docker-compose restart app-backend
```
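When something misbehaves, tailing logs in dependency order (broker first, then backend, then NLP) usually pinpoints the failing link. A sketch — `app-backend` and `nlp-service` are the service names listed above, while `kafka` is an assumption to check against your `docker-compose.yml`:

```shell
for svc in kafka app-backend nlp-service; do
  echo "=== $svc ==="
  if command -v docker-compose >/dev/null 2>&1; then
    # Show only the most recent lines; run from the project directory.
    docker-compose logs --tail=20 "$svc" || echo "(no logs; is $svc defined and running?)"
  else
    echo "(docker-compose not found; run this on the host from the project directory)"
  fi
done
```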
- Java 21 (Spring Boot 3)
- Python 3.9 (spaCy, Tesseract, PyMuPDF)
- React (Vite, Tailwind CSS)
- Apache Kafka
- PostgreSQL
- MinIO
- Ollama