A practical toolkit to run AI vendor due diligence in a repeatable way:
- A question bank (YAML) + human-friendly questionnaire (Markdown)
- A scoring rubric across key risk domains
- A report generator producing an executive summary + findings + guardrails
This is designed as a portfolio project for AI Governance / GRC work.
The rubric scores each vendor across five risk domains:
- Use case fit
- Business integration
- Use of confidential data
- Business resiliency
- Potential for exposure
```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python scripts/score_vendor.py --responses examples/vendor_responses_risky.json
```
This produces `out/vendor_report.md`.
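A minimal sketch of what the scorer does under the hood. The real logic lives in `scripts/score_vendor.py` and `rubric/scoring_rubric.yaml`; the domain keys, weights, and decision thresholds below are illustrative assumptions, not the shipped values:

```python
# Illustrative domain weights -- the actual weights live in rubric/scoring_rubric.yaml.
WEIGHTS = {
    "use_case_fit": 0.2,
    "business_integration": 0.2,
    "confidential_data": 0.25,
    "business_resiliency": 0.15,
    "exposure": 0.2,
}

def weighted_score(domain_scores: dict) -> float:
    """Combine per-domain scores (0-5) into one weighted score."""
    return sum(WEIGHTS[d] * s for d, s in domain_scores.items())

def decision(score: float) -> str:
    """Map a weighted score to an executive decision (thresholds are assumptions)."""
    if score >= 4.0:
        return "Accept"
    if score >= 2.5:
        return "Accept with guardrails"
    return "Reject"

# Hypothetical per-domain scores for a middling vendor.
scores = {
    "use_case_fit": 4,
    "business_integration": 3,
    "confidential_data": 2,
    "business_resiliency": 3,
    "exposure": 2,
}
total = weighted_score(scores)
print(round(total, 2), decision(total))
```

The weighted-average-plus-thresholds shape is a common rubric design: it keeps each domain's contribution auditable while still yielding a single go/no-go decision.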
```
questionnaires/
  vendor_ai_security_privacy.md
  question_bank.yaml
rubric/
  scoring_rubric.yaml
scripts/
  score_vendor.py
examples/
  vendor_responses_good.json
  vendor_responses_risky.json
out/
  (generated reports go here)
```
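The response files in `examples/` pair each question with an answer and supporting evidence. The field names below are an assumed shape for illustration; the authoritative schema is whatever `questionnaires/question_bank.yaml` and the shipped example files define:

```python
import json

# Hypothetical shape of an examples/vendor_responses_*.json file.
# Keys ("vendor", "answers", "evidence") are assumptions for this sketch.
responses = {
    "vendor": "Acme AI",
    "answers": {
        "Q1": {"answer": "yes", "evidence": "SOC 2 Type II report"},
        "Q2": {"answer": "no", "evidence": None},
    },
}

# Round-trip through JSON, as the scoring script would when loading a file.
serialized = json.dumps(responses, indent=2)
parsed = json.loads(serialized)
print(parsed["vendor"], len(parsed["answers"]))
```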
Each generated report includes:
- Executive decision: Accept / Accept with guardrails / Reject
- Domain score breakdown
- Findings and recommended controls
- Evidence checklist (what to request from the vendor)
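A sketch of how the findings section of `out/vendor_report.md` could be rendered. The field names and Markdown layout are assumptions for illustration, not the report generator's actual output format:

```python
def render_findings(findings: list) -> str:
    """Render findings as a Markdown bullet list with recommended controls."""
    lines = ["## Findings"]
    for f in findings:
        lines.append(f"- **{f['domain']}**: {f['issue']}")
        lines.append(f"  - Recommended control: {f['control']}")
    return "\n".join(lines)

# Hypothetical finding for demonstration.
findings = [
    {
        "domain": "Use of confidential data",
        "issue": "Vendor retains prompts for model training",
        "control": "Contractual opt-out plus data retention limits",
    },
]
print(render_findings(findings))
```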
Licensed under the MIT License.