
inVision U — Admissions Copilot

AI-assisted, evidence-backed first-pass admissions review system.
Human reviewers decide. AI surfaces the evidence.

Next.js · FastAPI · Python · TypeScript · License: MIT


What is this?

inVision U Admissions Copilot is a reviewer-facing AI tool that helps human admissions reviewers make faster, more consistent first-pass screening decisions — without replacing their judgment.

AI supports. Humans decide.

Every score is tied to a verifiable evidence quote. Missing evidence is surfaced explicitly. Hidden potential is flagged as a signal for human follow-up, not as an auto-decision.


Repository structure

invision-u/
├── frontend/          → Next.js 15 reviewer UI
├── ai_engine/         → FastAPI AI analysis backend
└── video_pipeline/    → Video interview transcript pipeline

Core features

| Feature | Description |
| --- | --- |
| Evidence Map | Every rubric score links to exact quotes from submitted text |
| Missing Evidence Detector | Surfaces unresolved gaps with reviewer follow-up prompts |
| Trajectory Lens | Timeline: past context → turning point → recent initiative → future direction |
| Hidden Potential Alert | Flags high-substance / low-polish candidates that polish-sensitive review might miss |
| Blind Mode | Hides candidate name and region during initial scoring |
| Compare View | Side-by-side calibration of 2–3 candidates |
| Human Override | Reviewers can annotate, override, and add notes on every recommendation |
| Confidence Breakdown | Decomposed confidence across evidence strength, input sufficiency, and fit stability |
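
To make the evidence-first contract concrete, a single rubric entry in the reviewer packet could be shaped roughly like the sketch below. The field names are illustrative assumptions, not the engine's actual schema:

{
  "criterion": "initiative",
  "score_0_10": 7,
  "evidence_quotes": [
    "I started a weekend coding club and ran it for two school years."
  ],
  "missing_evidence": ["No detail on outcomes or scale"],
  "reviewer_override": null
}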

Packages

frontend/ — Next.js reviewer UI

Reviewer-facing web application with full admissions workflow.

Stack: Next.js 15 · React 19 · TypeScript · Tailwind CSS · Framer Motion

Routes:

| Route | Purpose |
| --- | --- |
| / | Landing page — product framing and trust cues |
| /dashboard | Reviewer queue with filtering, evidence coverage, flags |
| /candidate | Candidate intake form with file upload and demo loading |
| /review/[id] | Flagship review report — full rubric, evidence map, trajectory |
| /compare | Side-by-side candidate calibration |
| /trust | Methodology, validation posture, limitation disclosure |

Run locally:
cd frontend
npm install
npm run dev          # http://localhost:3000
npm run build        # production build
npm run lint

ai_engine/ — FastAPI AI analysis backend

Structured AI analysis engine for candidate answers and transcripts.

Stack: Python 3.11+ · FastAPI · Pydantic v2 · OpenAI-compatible LLM client

What it does:

  • Scores 6 rubric criteria on 0–10 scale
  • Extracts and validates evidence quotes against submitted text
  • Computes raw_total_score_0_100 and decision_adjusted_score_0_100
  • Deterministic routing: strong_shortlist_signal / standard_review / needs_manual_review / insufficient_evidence (a minimal sketch follows this list)
  • Surfaces hidden potential, trajectory lens, and reviewer packet
  • Supports compare mode for 2–3 candidates
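
The routing step is plain deterministic rules rather than an LLM call. A minimal Python sketch of what such rules can look like; the thresholds and function signature below are assumptions for illustration, not the actual core/rules code:

def route_candidate(adjusted_score: int,
                    evidence_coverage: float,
                    inputs_sufficient: bool) -> str:
    """Map validated analysis signals to one of the four routing buckets."""
    # Conservative under uncertainty: missing inputs short-circuit first.
    if not inputs_sufficient:
        return "insufficient_evidence"
    # Scores without enough backing quotes go to a human, not to a verdict.
    if evidence_coverage < 0.5:
        return "needs_manual_review"
    if adjusted_score >= 80 and evidence_coverage >= 0.8:
        return "strong_shortlist_signal"
    return "standard_review"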

Supported programmes:

  1. Sociology: Leadership and Innovation
  2. Innovative IT Product Design and Development
  3. Public Policy and Development
  4. Digital Media and Marketing
  5. Creative Engineering
Setup:

cd ai_engine
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env   # set LLM_BACKEND=mock for local demo
uvicorn app.main:app --reload --port 8000

API endpoints:

| Method | Path | Description |
| --- | --- | --- |
| GET | /health | Health check |
| POST | /analyse-candidate | Full candidate analysis |
| POST | /compare-candidates | Compare 2–3 candidates |
| POST | /validate-demo | Run the demo validation set |

Swagger docs: http://localhost:8000/docs
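
With the server up, the main endpoint can be exercised from the command line. The request body here is a guessed minimal shape, not the documented schema; the Swagger docs above show the real one:

curl -X POST http://localhost:8000/analyse-candidate \
  -H "Content-Type: application/json" \
  -d '{"selected_program": "Creative Engineering", "answers": "I want to study here because..."}'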


video_pipeline/ — Video interview transcript pipeline

Transcript-first reviewer module for video interviews.

Stack: Python 3.11+ · faster-whisper · transformers · Pydantic v2

Pipeline:

media file  →  ASR (Whisper)  →  cleanup  →  segment bucketing
            →  LLM analysis  →  evidence validation  →  routing  →  reviewer JSON
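
The ASR stage is built on faster-whisper. As a standalone sketch of just that step (model size, device, and file name are placeholders, and this is not the pipeline's own entry point):

from faster_whisper import WhisperModel

# Small CPU-friendly model; the pipeline may use a different size/device.
model = WhisperModel("small", device="cpu", compute_type="int8")

segments, info = model.transcribe("interview.mp4")
for segment in segments:
    print(f"[{segment.start:6.1f}s → {segment.end:6.1f}s] {segment.text}")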

Also supports transcript-only mode (no video required):

from invisionu_team_handoff_pipeline import run_transcript_first_demo_pipeline

# Transcript-only mode: no media file, and no preloaded model/tokenizer.
result = run_transcript_first_demo_pipeline(
    transcript_text="I want to study here because...",
    model=None,
    tokenizer=None,
    selected_program="Creative Engineering",
)

# The routing decision lives in the parsed analysis JSON.
print(result["analysis"]["parsed_json"]["routing"])
Setup and tests:

cd video_pipeline
pip install -r requirements.txt
python -m unittest discover -s tests -v

Quick start — run everything together

# Terminal 1 — AI Engine backend
cd ai_engine
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env   # LLM_BACKEND=mock works without a real model
uvicorn app.main:app --port 8000 --reload

# Terminal 2 — Next.js frontend
cd frontend
npm install
echo "AI_ENGINE_URL=http://localhost:8000" > .env.local
npm run dev

Open http://localhost:3000.
The frontend proxies analysis requests to the Python backend at localhost:8000.
If the backend is unreachable, it falls back to the built-in mock analyser automatically.
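
To check which path you are on, probe the backend's health endpoint before loading the UI:

curl http://localhost:8000/health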


Architecture

┌─────────────────────────────────────────────────────┐
│                   Browser / Reviewer                 │
└────────────────────┬────────────────────────────────┘
                     │  HTTP
┌────────────────────▼────────────────────────────────┐
│         Next.js 15  (frontend/)                     │
│                                                     │
│  /candidate  →  POST /api/v1/candidates/            │
│                 with-analysis  (bridge route)       │
│  /review/[id]                                       │
│  /dashboard                                         │
│  /compare                                           │
└────────────────────┬────────────────────────────────┘
                     │  JSON  (server-side)
┌────────────────────▼────────────────────────────────┐
│         FastAPI  (ai_engine/)   :8000                │
│                                                     │
│  POST /analyse-candidate                            │
│  POST /compare-candidates                           │
│                                                     │
│  core/analysis_service  →  LLMClient                │
│  core/evidence_validator                            │
│  core/program_fit                                   │
│  core/rules  (deterministic routing)                │
│  core/trajectory                                    │
└────────────────────┬────────────────────────────────┘
                     │  optional
┌────────────────────▼────────────────────────────────┐
│  Local LLM  (Qwen3-8B or any OpenAI-compatible)     │
│  OR  mock backend  (no model required)              │
└─────────────────────────────────────────────────────┘

Deploy

Option A — Vercel + Railway (recommended)

  1. Fork or push this repo to GitHub
  2. Deploy ai_engine/ on Railway — connect repo, set root directory to ai_engine
  3. Deploy frontend/ on Vercel — connect repo, set root directory to frontend
  4. In Vercel → Settings → Environment Variables, set:
     AI_ENGINE_URL=https://your-ai-engine.up.railway.app

Option B — Docker Compose (single server)

docker compose up --build

Services:

  • frontend → port 3000
  • ai_engine → port 8000 (internal)
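
As a rough picture of what the compose file wires together (service names, build contexts, and the internal URL below are assumptions, not the repo's actual file):

services:
  ai_engine:
    build: ./ai_engine
    environment:
      - LLM_BACKEND=mock          # no model required for a demo deploy
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    environment:
      # reach the backend over the compose network, not localhost
      - AI_ENGINE_URL=http://ai_engine:8000
    depends_on:
      - ai_engine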

Option C — Frontend only (mock mode)

Deploy only frontend/ to Vercel. Leave AI_ENGINE_URL unset.
All analysis will use the local mock engine — no Python server needed.


Environment variables

frontend/.env.local

| Variable | Default | Description |
| --- | --- | --- |
| AI_ENGINE_URL | (empty) | Python backend URL. Leave empty to use the mock fallback |

ai_engine/.env

| Variable | Default | Description |
| --- | --- | --- |
| LLM_BACKEND | mock | mock · openai · local_openai_compatible |
| LOCAL_LLM_BASE_URL | http://127.0.0.1:8001/v1 | Local LLM server URL |
| LOCAL_LLM_MODEL | Qwen/Qwen3-8B | Model name |
| ENABLE_PROGRAM_FIT | true | Programme-fit scoring |
| ENABLE_REVIEWER_VOICE | false | LLM-rewritten reviewer summaries |
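
Putting the table together, an ai_engine/.env for a local OpenAI-compatible server would read roughly as follows (values are the defaults listed above; adjust to your setup):

LLM_BACKEND=local_openai_compatible
LOCAL_LLM_BASE_URL=http://127.0.0.1:8001/v1
LOCAL_LLM_MODEL=Qwen/Qwen3-8B
ENABLE_PROGRAM_FIT=true
ENABLE_REVIEWER_VOICE=false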

Design principles

  • Evidence-backed — every score requires supporting quotes from submitted text
  • Human-in-the-loop — no auto-admit, no auto-reject; all outputs are reviewer-facing
  • Conservative under uncertainty — missing or weak evidence routes to manual review
  • Fail loudly — backend fails explicitly rather than silently returning mock data in production
  • No appearance inference — face, emotion, accent, background, and socioeconomic proxies are strictly excluded

What this system does NOT do

  • Auto-admit or auto-reject candidates
  • Use face, emotion, appearance, accent, or demographic inference
  • Claim to detect AI-written responses
  • Make success predictions
  • Replace human judgment

Demo candidates

The seeded dataset includes eight pre-analysed candidates across the five programmes:

| Candidate | Tag | Programme |
| --- | --- | --- |
| Maya Thomas | Polished but generic | Digital Media and Marketing |
| Daniel Okeke | Underpolished, high potential | Creative Engineering |
| Leila Rahman | Strong fit, programme mismatch | Innovative IT Product Design |
| + 5 more | Various archetypes | All five programmes |
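
The seeded candidates double as a regression set: with the backend running, they can be re-scored via the validation endpoint. Assuming it needs no request body (an assumption; confirm in the Swagger docs):

curl -X POST http://localhost:8000/validate-demo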

Contributing

PRs welcome. Please keep the scope narrow — this tool is intentionally conservative and human-centred. Avoid adding features that automate decisions or reduce reviewer visibility.


License

MIT — see LICENSE.


Built for inVision U · Human reviewers in the loop, always.
