AI-assisted, evidence-backed first-pass admissions review system.
Human reviewers decide. AI surfaces the evidence.
inVision U Admissions Copilot is a reviewer-facing AI tool that helps human admissions reviewers make faster, more consistent first-pass screening decisions — without replacing their judgment.
AI supports. Humans decide.
Every score is tied to a verifiable evidence quote. Missing evidence is surfaced explicitly. Hidden potential is flagged as a signal for human follow-up, not as an auto-decision.
```
invision-u/
├── frontend/        → Next.js 15 reviewer UI
├── ai_engine/       → FastAPI AI analysis backend
└── video_pipeline/  → Video interview transcript pipeline
```
| Feature | Description |
|---|---|
| Evidence Map | Every rubric score links to exact quotes from submitted text |
| Missing Evidence Detector | Surfaces unresolved gaps with reviewer follow-up prompts |
| Trajectory Lens | Timeline: past context → turning point → recent initiative → future direction |
| Hidden Potential Alert | Flags high-substance / low-polish candidates that polish-sensitive review might miss |
| Blind Mode | Hides candidate name and region during initial scoring |
| Compare View | Side-by-side calibration of 2–3 candidates |
| Human Override | Reviewers can annotate, override, and add notes on every recommendation |
| Confidence Breakdown | Decomposed confidence across evidence strength, input sufficiency, and fit stability (see the example after this table) |
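To make the Evidence Map and Confidence Breakdown concrete, here is a rough sketch of what a single evidence-linked criterion result could look like from a reviewer's perspective. The field names and values are illustrative assumptions, not the engine's actual response schema (the Swagger docs below are the source of truth).

```python
# Hypothetical shape of one rubric-criterion result: the score, the verbatim
# quote backing it, and a decomposed confidence breakdown. Field names are
# illustrative only, not the engine's real schema.
example_criterion_result = {
    "criterion": "initiative",
    "score_0_10": 7,
    "evidence_quotes": [
        "I started a weekend coding club after our school cut its IT budget."
    ],
    "missing_evidence": [],          # gaps surfaced for reviewer follow-up
    "confidence": {
        "evidence_strength": 0.8,    # how directly the quotes support the score
        "input_sufficiency": 0.7,    # how much usable text the candidate provided
        "fit_stability": 0.6,        # how stable the score is across re-analysis
    },
}
```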
`frontend/`: reviewer-facing web application with the full admissions workflow.
Stack: Next.js 15 · React 19 · TypeScript · Tailwind CSS · Framer Motion
Routes:
| Route | Purpose |
|---|---|
| `/` | Landing page — product framing and trust cues |
| `/dashboard` | Reviewer queue with filtering, evidence coverage, flags |
| `/candidate` | Candidate intake form with file upload and demo loading |
| `/review/[id]` | Flagship review report — full rubric, evidence map, trajectory |
| `/compare` | Side-by-side candidate calibration |
| `/trust` | Methodology, validation posture, limitation disclosure |
```bash
cd frontend
npm install
npm run dev     # http://localhost:3000
npm run build   # production build
npm run lint
```

`ai_engine/`: structured AI analysis engine for candidate answers and transcripts.
Stack: Python 3.11+ · FastAPI · Pydantic v2 · OpenAI-compatible LLM client
What it does:
- Scores six rubric criteria on a 0–10 scale
- Extracts and validates evidence quotes against submitted text
- Computes `raw_total_score_0_100` and `decision_adjusted_score_0_100`
- Deterministic routing: `strong_shortlist_signal` / `standard_review` / `needs_manual_review` / `insufficient_evidence` (see the sketch after this list)
- Surfaces hidden potential, trajectory lens, and reviewer packet
- Supports compare mode for 2–3 candidates
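For illustration, here is a minimal sketch of how deterministic routing could combine the raw total score with evidence coverage. The actual rules live in the engine's `core/rules` module (see the architecture diagram further down); the thresholds, parameter names, and helper function below are assumptions made for this example, not the engine's real logic.

```python
# Illustrative-only routing sketch. Thresholds and inputs are assumptions;
# the engine's actual deterministic rules live in ai_engine's core/rules.
def route_candidate(raw_total_score_0_100: float,
                    answered_question_ratio: float,
                    validated_quote_ratio: float) -> str:
    if answered_question_ratio < 0.5:
        return "insufficient_evidence"     # too little input to score fairly
    if validated_quote_ratio < 0.6:
        return "needs_manual_review"       # scores not well backed by quotes
    if raw_total_score_0_100 >= 80:
        return "strong_shortlist_signal"
    return "standard_review"


print(route_candidate(84.0, 1.0, 0.9))     # -> strong_shortlist_signal
```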
Supported programmes:
- Sociology: Leadership and Innovation
- Innovative IT Product Design and Development
- Public Policy and Development
- Digital Media and Marketing
- Creative Engineering
```bash
cd ai_engine
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env   # set LLM_BACKEND=mock for local demo
uvicorn app.main:app --reload --port 8000
```

API endpoints:
| Method | Path | Description |
|---|---|---|
| `GET` | `/health` | Health check |
| `POST` | `/analyse-candidate` | Full candidate analysis |
| `POST` | `/compare-candidates` | Compare 2–3 candidates |
| `POST` | `/validate-demo` | Run demo validation set |
Swagger docs: http://localhost:8000/docs
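A minimal sketch of calling the analysis endpoint from Python once the engine is running locally. The path and port are as listed above, but the payload fields are assumptions made for this example; consult the Swagger docs for the real request and response models.

```python
# Sketch of a local call to the analysis endpoint (pip install requests).
# The payload fields below are illustrative assumptions; see /docs for the schema.
import requests

payload = {
    "candidate_name": "Demo Candidate",                  # assumed field name
    "selected_program": "Creative Engineering",          # one of the supported programmes
    "answers": {"motivation": "I want to study here because..."},  # assumed field name
}

resp = requests.post("http://localhost:8000/analyse-candidate", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())
```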
`video_pipeline/`: transcript-first reviewer module for video interviews.
Stack: Python 3.11+ · faster-whisper · transformers · Pydantic v2
Pipeline:
```
media file → ASR (Whisper) → cleanup → segment bucketing
           → LLM analysis → evidence validation → routing → reviewer JSON
```
Also supports transcript-only mode (no video required):
```python
from invisionu_team_handoff_pipeline import run_transcript_first_demo_pipeline

result = run_transcript_first_demo_pipeline(
    transcript_text="I want to study here because...",
    model=None,
    tokenizer=None,
    selected_program="Creative Engineering",
)
print(result["analysis"]["parsed_json"]["routing"])
```

```bash
cd video_pipeline
pip install -r requirements.txt
python -m unittest discover -s tests -v
```

To run the full stack locally:

```bash
# Terminal 1 — AI Engine backend
cd ai_engine
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env   # LLM_BACKEND=mock works without a real model
uvicorn app.main:app --port 8000 --reload
```

```bash
# Terminal 2 — Next.js frontend
cd frontend
npm install
echo "AI_ENGINE_URL=http://localhost:8000" > .env.local
npm run dev
```

Open http://localhost:3000.
The frontend proxies analysis requests to the Python backend at localhost:8000.
If the backend is unreachable, it falls back to the built-in mock analyser automatically.
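To confirm which path is being used, a quick probe of the engine's `/health` endpoint (listed in the API table above) before opening the frontend is enough; the snippet below is a convenience sketch, not part of the repository.

```python
# Quick check that the AI engine is reachable; if it is not, the frontend
# silently uses its built-in mock analyser instead.
import requests

try:
    r = requests.get("http://localhost:8000/health", timeout=5)
    r.raise_for_status()
    print("AI engine is up:", r.json())
except requests.RequestException as exc:
    print("AI engine unreachable; the frontend will use the mock analyser:", exc)
```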
Architecture overview:

```
┌──────────────────────────────────────────────────────┐
│                  Browser / Reviewer                  │
└───────────────────────────┬──────────────────────────┘
                            │ HTTP
┌───────────────────────────▼──────────────────────────┐
│                Next.js 15 (frontend/)                │
│                                                      │
│  /candidate  → POST /api/v1/candidates/with-analysis │
│                (bridge route)                        │
│  /review/[id]                                        │
│  /dashboard                                          │
│  /compare                                            │
└───────────────────────────┬──────────────────────────┘
                            │ JSON (server-side)
┌───────────────────────────▼──────────────────────────┐
│              FastAPI (ai_engine/) :8000              │
│                                                      │
│  POST /analyse-candidate                             │
│  POST /compare-candidates                            │
│                                                      │
│  core/analysis_service → LLMClient                   │
│  core/evidence_validator                             │
│  core/program_fit                                    │
│  core/rules (deterministic routing)                  │
│  core/trajectory                                     │
└───────────────────────────┬──────────────────────────┘
                            │ optional
┌───────────────────────────▼──────────────────────────┐
│  Local LLM (Qwen3-8B or any OpenAI-compatible)       │
│  OR mock backend (no model required)                 │
└──────────────────────────────────────────────────────┘
```
To deploy with Railway and Vercel:

- Fork or push this repo to GitHub
- Deploy `ai_engine/` on Railway — connect the repo and set the root directory to `ai_engine`
- Deploy `frontend/` on Vercel — connect the repo and set the root directory to `frontend`
- In Vercel → Settings → Environment Variables, set `AI_ENGINE_URL=https://your-ai-engine.up.railway.app`
Alternatively, run both services with Docker Compose:

```bash
docker compose up --build
```

Services:

- `frontend` → port 3000
- `ai_engine` → port 8000 (internal)
Mock-only option: deploy only `frontend/` to Vercel and leave `AI_ENGINE_URL` unset.
All analysis will use the local mock engine — no Python server needed.
Frontend (`frontend/.env.local`):

| Variable | Default | Description |
|---|---|---|
| `AI_ENGINE_URL` | (empty) | Python backend URL. Leave empty to use the mock fallback |
AI engine (`ai_engine/.env`):

| Variable | Default | Description |
|---|---|---|
| `LLM_BACKEND` | `mock` | `mock` · `openai` · `local_openai_compatible` |
| `LOCAL_LLM_BASE_URL` | `http://127.0.0.1:8001/v1` | Local LLM server URL |
| `LOCAL_LLM_MODEL` | `Qwen/Qwen3-8B` | Model name |
| `ENABLE_PROGRAM_FIT` | `true` | Programme-fit scoring |
| `ENABLE_REVIEWER_VOICE` | `false` | LLM-rewritten reviewer summaries |
Design principles:

- Evidence-backed — every score requires supporting quotes from submitted text
- Human-in-the-loop — no auto-admit, no auto-reject; all outputs are reviewer-facing
- Conservative under uncertainty — missing or weak evidence routes to manual review
- Fail loudly — backend fails explicitly rather than silently returning mock data in production
- No appearance inference — face, emotion, accent, background, and socioeconomic proxies are strictly excluded
This system will never:

- Auto-admit or auto-reject candidates
- Use face, emotion, appearance, accent, or demographic inference
- Claim to detect AI-written responses
- Make success predictions
- Replace human judgment
The seeded dataset includes eight pre-analysed candidates across the five programmes:
| Candidate | Tag | Programme |
|---|---|---|
| Maya Thomas | Polished but generic | Digital Media and Marketing |
| Daniel Okeke | Underpolished, high potential | Creative Engineering |
| Leila Rahman | Strong fit, programme mismatch | Innovative IT Product Design |
| + 5 more | Various archetypes | All five programmes |
PRs welcome. Please keep the scope narrow — this tool is intentionally conservative and human-centred. Avoid adding features that automate decisions or reduce reviewer visibility.
MIT — see LICENSE.