A Full-Stack Local AI Code Intelligence Workspace
Chat with your repositories locally. No data leaks, zero telemetry.
Explore the Docs · View Demo · Report Bug · Request Feature
📌 Table of Contents
CodeLens AI is an advanced, privacy-first code intelligence platform. By ingesting GitHub repositories or local ZIP archives, the system builds an intelligent, fully searchable vector index using PostgreSQL and pgvector.
Leveraging local AI models via Ollama, it provides a highly accurate Retrieval-Augmented Generation (RAG) pipeline. Ask targeted, complex technical questions and receive grounded code snippets and direct file references, all while ensuring your intellectual property never leaves your local machine.
- 🧠 Robust RAG Pipeline: Intelligent code chunking paired with vector embeddings generated purely locally (`nomic-embed-text`).
- ⚡ High-Performance Search: Sub-second similarity retrieval using `pgvector`, stored directly alongside metadata in PostgreSQL.
- 🔒 Zero Telemetry & 100% Privacy: Runs entirely on local LLMs (like `llama3`), so no code is ever sent to external third-party APIs such as OpenAI or Anthropic.
- 🛡️ Multi-Tenant Security Architecture: Built-in JWT authentication ensures complete data isolation and user-scoped workspaces.
- 🎨 Dev-Optimized UI: A sleek, reactive three-panel interface featuring syntax highlighting and real-time navigation.
Click below to view the application in action!
(Alternatively, you can 🎞 Watch the HD Demo Video on Google Drive here)
I have compiled a comprehensive gallery of the entire application flow (including the secure Auth login, repository uploading, main dashboard workspace, and RAG answer snippets) into a single optimized presentation document.
📄 View Full Screenshot Gallery (Screenshots.pdf)
This project was built using modern, industry-standard technologies to ensure scalability, security, and developer ergonomics:
| Area | Technologies |
|---|---|
| Frontend | React (Vite) |
| Backend | Spring Boot (Java) |
| Database | PostgreSQL + `pgvector` |
| AI Runtime | Ollama (`llama3`, `nomic-embed-text`) |
| Infrastructure | Docker Compose |
The system follows a decoupled, RESTful MVC architecture with an integrated vector/LLM pipeline component.
```mermaid
flowchart LR
    %% Client Tier
    subgraph Client [Client Tier]
        A[React SPA Web UI]
    end
    %% Application Tier
    subgraph Server [Spring Boot Application Tier]
        C[REST API Controllers]
        G[RAG Orchestrator]
        H[Auth / Security Filter]
    end
    %% Storage & AI Tier
    subgraph Data [Infrastructure Tier]
        D[(PostgreSQL + pgvector)]
        E[Ollama Embedded AI]
    end
    A -- "JWT Secured HTTP" --> H
    H --> C
    C --> G
    G -- "JDBC / SQL" --> D
    G -- "Local API (Gen & Embed)" --> E
```
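The retrieval hop from the RAG Orchestrator to PostgreSQL boils down to a nearest-neighbour search over embedding vectors. In production that search runs inside `pgvector`; purely as an illustration, here is a minimal in-memory Java sketch of cosine-similarity top-K ranking (class and method names are hypothetical, not the project's actual code):

```java
import java.util.ArrayList;
import java.util.List;

public class TopKRetrieval {
    // Cosine similarity between two equal-length vectors.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the indices of the k stored vectors most similar to the query.
    static List<Integer> topK(double[] query, double[][] stored, int k) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < stored.length; i++) ids.add(i);
        ids.sort((x, y) -> Double.compare(cosine(query, stored[y]), cosine(query, stored[x])));
        return ids.subList(0, Math.min(k, ids.size()));
    }

    public static void main(String[] args) {
        double[][] chunks = {
            {1, 0, 0},     // chunk 0
            {0.9, 0.1, 0}, // chunk 1: close to chunk 0
            {0, 0, 1}      // chunk 2: orthogonal to the query
        };
        double[] query = {1, 0.05, 0};
        System.out.println(topK(query, chunks, 2)); // prints [0, 1]
    }
}
```

With `pgvector`, the same ranking is a single `ORDER BY embedding <=> query LIMIT k` query, which is what makes the sub-second retrieval claim practical at scale.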
🔍 Technical Deep Dive: The Ingestion & RAG Pipeline Flow
- Ingestion & Fetching: Source code is retrieved via a Git URL clone or internal ZIP extraction.
- Filtering & Normalization: The application collects target source files, stripping out binaries and other non-text assets.
- Smart Chunking: Text is split into overlapping chunks to preserve the surrounding code context at chunk boundaries.
- Vector Embedding: Chunks are sent to the local Ollama daemon to produce high-dimensional vector representations.
- Persistent Storage: Metadata (file path, user repo IDs) and embeddings are stored securely in PostgreSQL using the `pgvector` extension.
- Query Phase: User queries are vectorized the same way, then a vector similarity search runs against the stored embeddings.
- Generative Phase: The top-K semantic chunks plus the query are combined into a system prompt for `llama3` to generate a precise response.
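The chunking step above is essentially a sliding window with overlap, so a definition that straddles one chunk boundary still appears whole in the neighbouring chunk. A minimal sketch (the window and overlap sizes here are illustrative, not the project's actual configuration):

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    // Split text into windows of `size` characters that overlap by `overlap`
    // characters, preserving context that falls near a chunk boundary.
    static List<String> chunk(String text, int size, int overlap) {
        List<String> chunks = new ArrayList<>();
        int step = size - overlap;
        for (int start = 0; start < text.length(); start += step) {
            chunks.add(text.substring(start, Math.min(start + size, text.length())));
            if (start + size >= text.length()) break; // last window reached
        }
        return chunks;
    }

    public static void main(String[] args) {
        // 10 characters, window 4, overlap 2:
        System.out.println(chunk("abcdefghij", 4, 2)); // prints [abcd, cdef, efgh, ghij]
    }
}
```

Real implementations often split on line or token boundaries rather than raw characters, but the overlap idea is the same.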
Ready to get this up and running? Follow these steps to spin up the local AI environment.
Ensure you have the following installed on your machine:

- Docker & Docker Compose
- Ollama
- A JDK (for the Spring Boot backend)
- Node.js (for the React frontend)
1️⃣ Start the Data Infrastructure (PostgreSQL + pgvector)

```sh
docker compose up -d
```

The database maps to host port `5433` to prevent conflicts with any local PostgreSQL instance.
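A minimal compose file for this step might look like the following. The image tag, database name, and volume name are illustrative assumptions; the repository's own `docker-compose.yml` is the source of truth.

```yaml
services:
  db:
    image: pgvector/pgvector:pg16     # PostgreSQL with the pgvector extension preinstalled
    environment:
      POSTGRES_USER: codelens_user
      POSTGRES_PASSWORD: codelens_pass
      POSTGRES_DB: codelens           # assumed database name
    ports:
      - "5433:5432"                   # host 5433 -> container 5432, avoiding a local Postgres clash
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```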
2️⃣ Set Up the Local AI Daemon (Ollama)

Keep this process running in a separate terminal window.

```sh
ollama pull llama3
ollama pull nomic-embed-text
ollama serve
```

3️⃣ Launch the Backend API

Spin up the Spring Boot server.

```sh
./run-backend.sh
```

4️⃣ Launch the Web Interface

Spin up the React Vite frontend.

```sh
./run-frontend.sh
```

🚀 Success! Navigate to `http://localhost:5173` in your browser.
The entire backend environment is configurable via standard environment variables or a `.env` file.
| Environment Variable | Default | Purpose |
|---|---|---|
| `DB_HOST` / `DB_PORT` | `localhost` / `5433` | Database connection address |
| `DB_USER` / `DB_PASSWORD` | `codelens_user` / `codelens_pass` | PostgreSQL credentials |
| `OLLAMA_BASE_URL` | `http://localhost:11434` | Address of the local Ollama daemon |
| `OLLAMA_CHAT_MODEL` | `llama3` | Primary generation LLM |
| `OLLAMA_EMBED_MODEL` | `nomic-embed-text` | Primary embedding model |
| `CODELENS_RAG_TOP_K` | `8` | Number of chunks retrieved per query |
| `CODELENS_JWT_SECRET` | (standard dev minimum key) | JWT signing key |
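Resolving these overrides follows the usual env-with-fallback pattern (Spring Boot also supports this directly via `${VAR:default}` placeholders in `application.properties`). A minimal sketch, with a hypothetical helper name:

```java
public class Env {
    // Return the environment variable's value, or a fallback when unset or blank.
    static String getOrDefault(String name, String fallback) {
        String v = System.getenv(name);
        return (v == null || v.isBlank()) ? fallback : v;
    }

    public static void main(String[] args) {
        // Defaults taken from the configuration table above.
        String ollamaUrl = getOrDefault("OLLAMA_BASE_URL", "http://localhost:11434");
        int topK = Integer.parseInt(getOrDefault("CODELENS_RAG_TOP_K", "8"));
        System.out.println(ollamaUrl + " topK=" + topK);
    }
}
```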
We built CodeLens with a structured, intuitive REST API. All non-auth routes require an `Authorization: Bearer <token>` header.
Click to View Available Endpoints
| Security | Verb | Endpoint | Description |
|---|---|---|---|
| 🔓 | POST | `/api/auth/signup` | Registers a new tenant account |
| 🔓 | POST | `/api/auth/signin` | Returns a short-lived JWT |
| 🔒 | GET | `/api/auth/me` | Validates the current JWT |
| 🔒 | POST | `/api/repo/upload` | Ingests a `.zip` or GitHub URL payload |
| 🔒 | GET | `/api/repo/{id}` | Fetches the full repository metadata tree |
| 🔒 | POST | `/api/ai/ask` | Submits a text prompt to the RAG pipeline |
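As a usage illustration, a client call to the RAG endpoint looks roughly like this. The JSON field name `question` and port `8080` are assumptions for the sketch; check the actual controller contract before relying on them.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class AskExample {
    // Build a JWT-authenticated POST to the RAG endpoint (request is built, not sent).
    static HttpRequest buildAsk(String baseUrl, String jwt, String question) {
        String body = "{\"question\": \"" + question + "\"}"; // assumed request shape
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/ai/ask"))
                .header("Authorization", "Bearer " + jwt)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildAsk("http://localhost:8080", "<token>",
                "Where is the JWT filter configured?");
        System.out.println(req.method() + " " + req.uri()); // POST http://localhost:8080/api/ai/ask
        // To actually send it:
        // HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString());
    }
}
```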
If you run into issues, try these common fixes:
- Database Connection Refused: Ensure `docker ps` shows the container as healthy, and verify nothing else is listening on port `5433`.
- Ollama Timeout / Connection Error: Make sure `ollama serve` is still running in its terminal. Test it with `curl http://localhost:11434`.
- 401 Unauthorized during upload: Ensure the frontend logged in successfully and the token is being propagated to `localStorage`.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License. See LICENSE for more information.
If this project helped you learn about RAG or AI integrations, please consider giving it a ⭐!