This guide provides essential technical and regulatory updates for developers and AI practitioners building in the Generative & Agentic Era (2024-2026).
```
learning-ethical-ai/
│
├── 01-tools/                      # AI safety and ethics tools
│   ├── README.md                  # Tool comparison matrix, quick start
│   ├── 01-giskard/                # LLM testing & vulnerability scanning
│   │   ├── README.md
│   │   ├── config_vertexai.py     # GCP Vertex AI configuration
│   │   └── healthcare_scan.py     # Working healthcare LLM audit
│   ├── 02-nemo-guardrails/        # Runtime safety controls
│   │   ├── README.md
│   │   └── healthcare_rails/      # Production-ready clinical guardrails
│   ├── 03-model-cards/            # Model documentation & transparency
│   │   └── README.md
│   └── 04-llama-guard/            # Content safety classification
│       └── README.md
│
├── 02-examples/                   # Jupyter notebooks (6 complete examples)
│   ├── README.md
│   ├── requirements.txt
│   ├── 01-giskard-quickstart.ipynb
│   ├── 02-llm-hallucination-detection.ipynb
│   ├── 03-healthcare-llm-safety.ipynb
│   ├── 04-clinical-guardrails.ipynb
│   ├── 05-mcp-security-audit.ipynb
│   └── 06-agent-ethics-patterns.ipynb
│
├── 04-healthcare/                 # Healthcare-specific AI ethics
│   ├── clinical-llm-risks.md      # EHR integration risks, hallucinations
│   ├── hipaa-ai-checklist.md      # HIPAA compliance for AI
│   ├── genomics-ethics.md         # Ethical AI in genetic analysis
│   ├── who-lmm-guidelines.md      # WHO 2025 LMM guidance summary
│   └── synthetic-patient-data.md  # Safe synthetic data generation
│
├── 05-agentic-safety/             # MCP and agentic AI security
│   ├── mcp-security-threats.md    # OWASP-style MCP threat taxonomy
│   ├── safe-mcp-patterns.md       # OpenSSF Safe-MCP security patterns
│   ├── human-in-loop-agents.md    # HITL design for high-risk actions
│   ├── tool-poisoning-defense.md  # Defense strategies
│   └── audit-logging-agents.md    # Agent decision chain tracing
│
├── 06-governance/                 # Regulatory compliance resources
│   ├── eu-ai-act-checklist.md     # High-risk system requirements
│   ├── nist-ai-600-1-summary.md   # GenAI risk profile summary
│   └── risk-tiering-template.md   # AI system risk classification
│
└── README.md                      # This file
```
```bash
# Clone repository
git clone https://github.com/lynnlangit/learning-ethical-ai.git
cd learning-ethical-ai

# Install tools
pip install giskard nemoguardrails model-card-toolkit

# Configure GCP (required for examples)
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-key.json"
export GCP_PROJECT_ID="your-project-id"
export GCP_REGION="us-central1"
```

```bash
# Run a healthcare LLM safety audit
cd 01-tools/01-giskard
python healthcare_scan.py
# Opens HTML report with safety analysis
```

```bash
# Explore the example notebooks
cd 02-examples
pip install -r requirements.txt
jupyter notebook
# Start with 01-giskard-quickstart.ipynb
```

**Goal:** Understand GenAI safety risks and run basic tests

- **Read:** `04-healthcare/clinical-llm-risks.md` - understand healthcare AI risks
- **Practice:** `02-examples/01-giskard-quickstart.ipynb` - run your first safety scan
- **Deploy:** `01-tools/01-giskard/healthcare_scan.py` - audit a clinical LLM (see the sketch after this path)
- **Learn:** `01-tools/02-nemo-guardrails/README.md` - understand runtime safety

**Time:** 4-6 hours | **Prerequisites:** Basic Python, cloud familiarity
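To make the Deploy step concrete, below is a minimal sketch of a Giskard scan. The wrapper function, placeholder LLM call, and dataframe columns are illustrative, not the repo's code (the working version is `01-tools/01-giskard/healthcare_scan.py`), and Giskard's LLM-assisted detectors also expect an LLM API key to be configured.

```python
# Minimal sketch of a Giskard safety scan; the wrapper function and
# dataframe columns are illustrative placeholders, not the repo's code.
import giskard
import pandas as pd

def my_llm(question: str) -> str:
    # Placeholder: swap in a real call to your deployed clinical LLM
    # (e.g., a Vertex AI endpoint configured via config_vertexai.py)
    return "Consult your care team before changing any insulin dose."

def predict(df: pd.DataFrame) -> list[str]:
    return [my_llm(q) for q in df["question"]]

model = giskard.Model(
    model=predict,
    model_type="text_generation",
    name="Clinical Q&A Assistant",
    description="Answers patient questions about medications and symptoms",
    feature_names=["question"],
)

dataset = giskard.Dataset(
    pd.DataFrame({"question": ["Can I double my insulin dose if I missed one?"]})
)

# Runs Giskard's detectors (hallucination, harmfulness, prompt injection, ...)
results = giskard.scan(model, dataset)
results.to_html("healthcare_scan_report.html")
```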
**Goal:** Build HIPAA-compliant, safe healthcare AI systems

- **Compliance:** `04-healthcare/hipaa-ai-checklist.md` - HIPAA requirements
- **Testing:** `02-examples/03-healthcare-llm-safety.ipynb` - healthcare-specific tests
- **Guardrails:** `02-examples/04-clinical-guardrails.ipynb` - deploy clinical safety rails (see the sketch after this path)
- **Genomics:** `04-healthcare/genomics-ethics.md` - genetic AI ethics (if applicable)
- **Governance:** `04-healthcare/who-lmm-guidelines.md` - WHO standards
- **Documentation:** `01-tools/03-model-cards/README.md` - create compliant model cards

**Time:** 12-16 hours | **Prerequisites:** Healthcare domain knowledge, Python
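Runtime guardrails are a few lines once the rails are written. A minimal sketch of loading and querying NeMo Guardrails, assuming a `./config` directory with Colang rails like those in `01-tools/02-nemo-guardrails/healthcare_rails/`:

```python
# Minimal sketch of runtime guardrails with NeMo Guardrails.
# Assumes a ./config directory containing config.yml and Colang rail files.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# Input/output rails run automatically around the LLM call
response = rails.generate(messages=[{
    "role": "user",
    "content": "What dose of warfarin should I take?",
}])
print(response["content"])  # rails can deflect dosing questions to a clinician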
**Goal:** Secure autonomous AI agents and MCP servers

- **Threats:** `05-agentic-safety/mcp-security-threats.md` - OWASP-style threat taxonomy
- **Patterns:** `05-agentic-safety/safe-mcp-patterns.md` - secure MCP development
- **Practice:** `02-examples/05-mcp-security-audit.ipynb` - audit an MCP server
- **Reference:** spatial-mcp - secure MCP implementation
- **HITL:** `05-agentic-safety/human-in-loop-agents.md` - human oversight patterns (see the sketch after this path)
- **Logging:** `05-agentic-safety/audit-logging-agents.md` - decision chain tracing

**Time:** 10-14 hours | **Prerequisites:** Security fundamentals, agentic AI familiarity
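Human-in-the-loop oversight can be as simple as forcing an explicit approval step before any irreversible action. A minimal sketch, with hypothetical names (`Action`, `requires_human`) rather than a pattern taken verbatim from the repo docs:

```python
# Hypothetical HITL gate: high-risk agent actions block until a human approves.
from dataclasses import dataclass

@dataclass
class Action:
    tool: str
    summary: str
    reversible: bool

def requires_human(action: Action) -> bool:
    # Policy: anything irreversible or touching patient data needs sign-off
    return (not action.reversible) or action.tool.startswith("ehr_")

def execute(action: Action) -> None:
    if requires_human(action):
        answer = input(f"Approve '{action.summary}'? [y/N] ")
        if answer.strip().lower() != "y":
            print("Action rejected by human reviewer; agent must replan.")
            return
    print(f"Executing {action.tool}: {action.summary}")

execute(Action(tool="ehr_update_record", summary="Add allergy note", reversible=False))
```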
**Goal:** Navigate EU AI Act, NIST, and FDA regulations for AI systems

- **Risk Assessment:** `06-governance/risk-tiering-template.md` - classify your AI system
- **EU AI Act:** `06-governance/eu-ai-act-checklist.md` - high-risk system requirements
- **NIST Framework:** `06-governance/nist-ai-600-1-summary.md` - GenAI risk management
- **Healthcare:** `04-healthcare/who-lmm-guidelines.md` - WHO LMM standards
- **Documentation:** `01-tools/03-model-cards/README.md` - required transparency docs
- **Testing:** `01-tools/01-giskard/README.md` - pre-deployment validation

**Time:** 8-12 hours | **Prerequisites:** Regulatory/compliance background
**Goal:** Design ethical multi-agent systems with complex tool interactions

- **Foundation:** Complete Path 3 (Agentic AI Security)
- **Multi-Agent:** `02-examples/06-agent-ethics-patterns.ipynb` - multi-agent patterns
- **Tool Poisoning:** `05-agentic-safety/tool-poisoning-defense.md` - supply chain security
- **Testing:** `04-healthcare/synthetic-patient-data.md` - safe test data generation (see the sketch after this path)
- **Project:** Build a multi-agent healthcare system with full compliance

**Time:** 20+ hours | **Prerequisites:** All previous paths
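For the Testing step, synthetic records let you exercise a clinical pipeline without touching PHI. A minimal sketch using the Faker library; the field set is illustrative, and `04-healthcare/synthetic-patient-data.md` covers the safety caveats:

```python
# Illustrative synthetic patient records for testing; never mix with real PHI.
import random
from faker import Faker

fake = Faker()
Faker.seed(42)   # reproducible test fixtures
random.seed(42)

def synthetic_patient() -> dict:
    return {
        "patient_id": fake.uuid4(),
        "name": fake.name(),
        "date_of_birth": fake.date_of_birth(minimum_age=18, maximum_age=90).isoformat(),
        "diagnosis_code": random.choice(["E11.9", "I10", "J45.909"]),  # sample ICD-10
        "note": "SYNTHETIC RECORD - not a real patient",
    }

test_cohort = [synthetic_patient() for _ in range(100)]
print(test_cohort[0])
```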
| Tool | Primary Use Case | Best For | Setup | Healthcare Support | Getting Started |
|---|---|---|---|---|---|
| Giskard | LLM testing & vulnerability scanning | Quick safety audits, RAG evaluation, hallucination detection | Low | Excellent | Guide |
| NeMo Guardrails | Runtime safety controls | Production guardrails, input/output filtering, topic control | Medium | Strong | Guide |
| Model Cards Toolkit | Model documentation & transparency | Compliance documentation, model governance | Very Low | Good | Guide |
| Llama Guard | Content moderation | Toxicity filtering, safety classification | Low | | Guide |
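With Llama Guard, classification is a single generate call whose completion starts with `safe` or `unsafe`. A sketch assuming access to the gated `meta-llama/LlamaGuard-7b` checkpoint and a GPU:

```python
# Sketch of content-safety classification with Llama Guard.
# Assumes access to the gated meta-llama/LlamaGuard-7b checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

chat = [{"role": "user", "content": "Describe how to forge a prescription."}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
output = model.generate(input_ids=input_ids, max_new_tokens=64, pad_token_id=0)

# The completion is "safe", or "unsafe" plus the violated category code (e.g., O3)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```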
By 2026, AI ethics has transitioned from voluntary principles to enforceable law.
The EU AI Act is the definitive global benchmark for risk-based AI regulation. It categorizes systems into Unacceptable, High, Limited, and Minimal risk tiers; a code sketch of this tiering follows the links below.
- Official: EU AI Act Compliance Tracker
- Implementation Guide: 06-governance/eu-ai-act-checklist.md
- Actionable for Devs: Check the GPAI Code of Practice
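Risk tiering can be encoded as a simple first-pass decision rule. A hypothetical sketch (the categories are simplified; a real assessment requires legal review against Annex III, not a lookup table):

```python
# Hypothetical sketch of EU AI Act risk tiering; the real assessment
# requires legal review against Annex III, not a lookup table.

# Simplified examples of each tier (illustrative, not exhaustive)
PROHIBITED = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"medical_device", "credit_scoring", "hiring", "critical_infrastructure"}
LIMITED_RISK = {"chatbot", "content_generation"}  # transparency duties apply

def classify_risk_tier(use_case: str) -> str:
    """Map a use case to an indicative EU AI Act risk tier."""
    if use_case in PROHIBITED:
        return "unacceptable"   # banned outright
    if use_case in HIGH_RISK:
        return "high"           # conformity assessment, logging, human oversight
    if use_case in LIMITED_RISK:
        return "limited"        # disclosure/transparency obligations
    return "minimal"            # no specific obligations

print(classify_risk_tier("medical_device"))  # -> "high"
```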
NIST AI 600-1 is a specialized Generative AI profile of the NIST AI Risk Management Framework (RMF). It enumerates 12 high-level risks, including "Confabulation" (hallucination) and access to CBRN information.
- Official: NIST AI RMF Resource Center
- Summary: 06-governance/nist-ai-600-1-summary.md
With the rise of the Model Context Protocol (MCP) and multi-agent systems, "ethics" now includes preventing autonomous loop failures and unauthorized tool use.
- Threat Taxonomy: 05-agentic-safety/mcp-security-threats.md - OWASP-style threat model
- Secure Patterns: 05-agentic-safety/safe-mcp-patterns.md - OpenSSF guidelines
- Reference Implementation: spatial-mcp - Secure geospatial MCP server
- Security Audit: 02-examples/05-mcp-security-audit.ipynb
- HITL Patterns: 05-agentic-safety/human-in-loop-agents.md
- OECD AI Principles: 2024/2025 Update
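A recurring pattern from the Safe-MCP guidelines above is gating every tool invocation against an explicit allow-list before execution. A minimal sketch, with hypothetical names (`ToolCall`, `ALLOWED_TOOLS`) that are not part of the MCP spec:

```python
# Hypothetical sketch of a tool-call gate for an MCP-style agent.
# Names (ToolCall, ALLOWED_TOOLS) are illustrative, not part of the MCP spec.
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    arguments: dict

# Explicit allow-list with per-tool argument validators
ALLOWED_TOOLS = {
    "search_pubmed": lambda a: isinstance(a.get("query"), str) and len(a["query"]) < 500,
    "read_ehr_summary": lambda a: str(a.get("patient_id", "")).isalnum(),
}

HIGH_RISK_TOOLS = {"read_ehr_summary"}  # require human approval

def gate_tool_call(call: ToolCall, human_approved: bool = False) -> None:
    """Reject unknown tools, invalid arguments, and unapproved high-risk calls."""
    validator = ALLOWED_TOOLS.get(call.name)
    if validator is None:
        raise PermissionError(f"Tool not on allow-list: {call.name}")
    if not validator(call.arguments):
        raise ValueError(f"Arguments failed validation for {call.name}")
    if call.name in HIGH_RISK_TOOLS and not human_approved:
        raise PermissionError(f"{call.name} requires human-in-the-loop approval")
```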
For computational bioinformaticians and healthcare AI developers, ethical AI includes the safe handling of synthetic patient data and genomic information.
The WHO's guidance on large multi-modal models (LMMs) sets new standards for transparency and accountability when generative AI is used for disease detection and treatment.
- Official: WHO Health AI Ethics Portal
- Summary: 04-healthcare/who-lmm-guidelines.md
- Clinical Risks: 04-healthcare/clinical-llm-risks.md - EHR integration, hallucinations
- HIPAA Compliance: 04-healthcare/hipaa-ai-checklist.md
- Genomics Ethics: 04-healthcare/genomics-ethics.md - AI in genetic analysis
- Synthetic Data: 04-healthcare/synthetic-patient-data.md
- NIH Guidelines: NIH AI Guidelines
Before deploying your AI system:
- Risk Tiering: Classify your system using 06-governance/risk-tiering-template.md
- Safety Testing: Run Giskard comprehensive scan (see 01-tools/01-giskard/)
- Guardrails: Implement NeMo Guardrails for runtime safety (see 01-tools/02-nemo-guardrails/)
- Compliance: Review EU AI Act requirements if deploying in EU (see 06-governance/eu-ai-act-checklist.md)
- Healthcare: If clinical use, check HIPAA compliance (see 04-healthcare/hipaa-ai-checklist.md)
- Agentic: If using MCP, audit security (see 05-agentic-safety/mcp-security-threats.md)
- Human Oversight: Implement HITL for high-risk actions (see 05-agentic-safety/human-in-loop-agents.md)
- Documentation: Create a model card (see 01-tools/03-model-cards/ and the sketch after this checklist)
- Audit Logging: Enable comprehensive logging (see 05-agentic-safety/audit-logging-agents.md)
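Scaffolding a model card for the Documentation step takes only a few lines. A minimal sketch, assuming model-card-toolkit 2.x; the field values are illustrative:

```python
# Minimal sketch with Google's Model Card Toolkit (assumes model-card-toolkit 2.x);
# field values are illustrative placeholders.
import model_card_toolkit as mctlib

mct = mctlib.ModelCardToolkit("model_card_output")
model_card = mct.scaffold_assets()

model_card.model_details.name = "Clinical Q&A Assistant"
model_card.model_details.overview = (
    "LLM assistant for patient medication questions; not for diagnosis."
)

mct.update_model_card(model_card)
html = mct.export_format()  # renders the card as HTML for publication
```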
- Giskard - LLM testing
- NeMo Guardrails - Runtime safety
- OpenSSF Safe-MCP - MCP security
- Model Cards Toolkit
- spatial-mcp - Secure geospatial MCP server reference
MIT License - See LICENSE file for details
Lynn Langit
- Background: Mayo Clinic / Genomics
- Focus: Healthcare AI ethics, cloud architecture, precision medicine
- GitHub: @lynnlangit
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Submit a pull request
For major changes, please open an issue first to discuss proposed changes.
**Last Updated:** January 2026 | **Status:** Active development - repository reflects current 2026 standards for ethical AI
