Learning Ethical AI

πŸ›‘οΈ Ethical AI: The 2026 Resource Guide

This guide provides essential technical and regulatory updates for developers and AI practitioners building in the Generative & Agentic Era (2024–2026).

📂 Repository Structure

learning-ethical-ai/
β”‚
β”œβ”€β”€ 01-tools/                    # AI safety and ethics tools
β”‚   β”œβ”€β”€ README.md                  # Tool comparison matrix, quick start
β”‚   β”œβ”€β”€ 01-giskard/                   # LLM testing & vulnerability scanning
β”‚   β”‚   β”œβ”€β”€ README.md
β”‚   β”‚   β”œβ”€β”€ config_vertexai.py     # GCP Vertex AI configuration
β”‚   β”‚   └── healthcare_scan.py     # Working healthcare LLM audit
β”‚   β”œβ”€β”€ 02-nemo-guardrails/          # Runtime safety controls
β”‚   β”‚   β”œβ”€β”€ README.md
β”‚   β”‚   └── healthcare_rails/      # Production-ready clinical guardrails
β”‚   β”œβ”€β”€ 03-model-cards/              # Model documentation & transparency
β”‚   β”‚   └── README.md
β”‚   └── 04-llama-guard/              # Content safety classification
β”‚       └── README.md
β”‚
β”œβ”€β”€ 02-examples/                 # Jupyter notebooks (6 complete examples)
β”‚   β”œβ”€β”€ README.md
β”‚   β”œβ”€β”€ requirements.txt
β”‚   β”œβ”€β”€ 01-giskard-quickstart.ipynb
β”‚   β”œβ”€β”€ 02-llm-hallucination-detection.ipynb
β”‚   β”œβ”€β”€ 03-healthcare-llm-safety.ipynb
β”‚   β”œβ”€β”€ 04-clinical-guardrails.ipynb
β”‚   β”œβ”€β”€ 05-mcp-security-audit.ipynb
β”‚   └── 06-agent-ethics-patterns.ipynb
β”‚
β”œβ”€β”€ 04-healthcare/               # Healthcare-specific AI ethics
β”‚   β”œβ”€β”€ clinical-llm-risks.md      # EHR integration risks, hallucinations
β”‚   β”œβ”€β”€ hipaa-ai-checklist.md      # HIPAA compliance for AI
β”‚   β”œβ”€β”€ genomics-ethics.md         # Ethical AI in genetic analysis
β”‚   β”œβ”€β”€ who-lmm-guidelines.md      # WHO 2025 LMM guidance summary
β”‚   └── synthetic-patient-data.md  # Safe synthetic data generation
β”‚
β”œβ”€β”€ 05-agentic-safety/           # MCP and agentic AI security
β”‚   β”œβ”€β”€ mcp-security-threats.md    # OWASP-style MCP threat taxonomy
β”‚   β”œβ”€β”€ safe-mcp-patterns.md       # OpenSSF Safe-MCP security patterns
β”‚   β”œβ”€β”€ human-in-loop-agents.md    # HITL design for high-risk actions
β”‚   β”œβ”€β”€ tool-poisoning-defense.md  # Defense strategies
β”‚   └── audit-logging-agents.md    # Agent decision chain tracing
β”‚
β”œβ”€β”€ 06-governance/               # Regulatory compliance resources
β”‚   β”œβ”€β”€ eu-ai-act-checklist.md     # High-risk system requirements
β”‚   β”œβ”€β”€ nist-ai-600-1-summary.md   # GenAI risk profile summary
β”‚   └── risk-tiering-template.md   # AI system risk classification
β”‚
└── README.md                    # This file

🚀 Quick Start

Install Dependencies

# Clone repository
git clone https://github.com/lynnlangit/learning-ethical-ai.git
cd learning-ethical-ai

# Install tools
pip install giskard nemoguardrails model-card-toolkit

# Configure GCP (required for examples)
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-key.json"
export GCP_PROJECT_ID="your-project-id"
export GCP_REGION="us-central1"

Run Your First Safety Scan

cd 01-tools/01-giskard
python healthcare_scan.py
# Opens HTML report with safety analysis

Explore Jupyter Notebooks

cd 02-examples
pip install -r requirements.txt
jupyter notebook
# Start with 01-giskard-quickstart.ipynb

🎓 Learning Paths

Path 1: AI Safety Beginner (Start Here!)

Goal: Understand GenAI safety risks and run basic tests

  1. Read: 04-healthcare/clinical-llm-risks.md - Understand healthcare AI risks
  2. Practice: 02-examples/01-giskard-quickstart.ipynb - Run your first safety scan
  3. Deploy: 01-tools/01-giskard/healthcare_scan.py - Audit a clinical LLM
  4. Learn: 01-tools/02-nemo-guardrails/README.md - Understand runtime safety

Time: 4-6 hours | Prerequisites: Basic Python, cloud familiarity


Path 2: Healthcare AI Developer (Clinical Focus)

Goal: Build HIPAA-compliant, safe healthcare AI systems

  1. Compliance: 04-healthcare/hipaa-ai-checklist.md - HIPAA requirements
  2. Testing: 02-examples/03-healthcare-llm-safety.ipynb - Healthcare-specific tests
  3. Guardrails: 02-examples/04-clinical-guardrails.ipynb - Deploy clinical safety rails
  4. Genomics: 04-healthcare/genomics-ethics.md - Genetic AI ethics (if applicable)
  5. Governance: 04-healthcare/who-lmm-guidelines.md - WHO standards
  6. Documentation: 01-tools/03-model-cards/README.md - Create compliant model cards

Time: 12-16 hours | Prerequisites: Healthcare domain knowledge, Python


Path 3: Agentic AI Security Engineer (MCP Focus)

Goal: Secure autonomous AI agents and MCP servers

  1. Threats: 05-agentic-safety/mcp-security-threats.md - OWASP-style threat taxonomy
  2. Patterns: 05-agentic-safety/safe-mcp-patterns.md - Secure MCP development
  3. Practice: 02-examples/05-mcp-security-audit.ipynb - Audit an MCP server
  4. Reference: spatial-mcp - Secure MCP implementation
  5. HITL: 05-agentic-safety/human-in-loop-agents.md - Human oversight patterns
  6. Logging: 05-agentic-safety/audit-logging-agents.md - Decision chain tracing

Time: 10-14 hours | Prerequisites: Security fundamentals, agentic AI familiarity


Path 4: AI Compliance Officer (Regulatory Focus)

Goal: Navigate EU AI Act, NIST, FDA regulations for AI systems

  1. Risk Assessment: 06-governance/risk-tiering-template.md - Classify your AI system
  2. EU AI Act: 06-governance/eu-ai-act-checklist.md - High-risk system requirements
  3. NIST Framework: 06-governance/nist-ai-600-1-summary.md - GenAI risk management
  4. Healthcare: 04-healthcare/who-lmm-guidelines.md - WHO LMM standards
  5. Documentation: 01-tools/03-model-cards/README.md - Required transparency docs
  6. Testing: 01-tools/01-giskard/README.md - Pre-deployment validation

Time: 8-12 hours | Prerequisites: Regulatory/compliance background


Path 5: Advanced - Multi-Agent Ethics Patterns

Goal: Design ethical multi-agent systems with complex tool interactions

  1. Foundation: Complete Path 3 (Agentic AI Security)
  2. Multi-Agent: 02-examples/06-agent-ethics-patterns.ipynb - Multi-agent patterns
  3. Tool Poisoning: 05-agentic-safety/tool-poisoning-defense.md - Supply chain security
  4. Testing: 04-healthcare/synthetic-patient-data.md - Safe testing data generation
  5. Project: Build a multi-agent healthcare system with full compliance

Time: 20+ hours | Prerequisites: All previous paths


πŸ› οΈ Tool Comparison Matrix

| Tool | Primary Use Case | Best For | Setup | Healthcare Support | Getting Started |
| --- | --- | --- | --- | --- | --- |
| Giskard | LLM testing & vulnerability scanning | Quick safety audits, RAG evaluation, hallucination detection | ⭐⭐ Low | ✅ Excellent | Guide |
| NeMo Guardrails | Runtime safety controls | Production guardrails, input/output filtering, topic control | ⭐⭐⭐ Medium | ✅ Strong | Guide |
| Model Cards Toolkit | Model documentation & transparency | Compliance documentation, model governance | ⭐ Very Low | ✅ Good | Guide |
| Llama Guard | Content moderation | Toxicity filtering, safety classification | ⭐⭐ Low | ⚠️ Limited | Guide |
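To make the Model Cards Toolkit row concrete: a model card is, at its core, structured documentation that travels with the model. Below is a minimal sketch of one as a plain Python dict; the model name, field names, and values are hypothetical illustrations, not taken from this repository or any specific toolkit schema.

```python
import json

# Hypothetical model card for an imagined "clinical-summarizer" model.
# Field names loosely mirror common model-card sections; all values are made up.
model_card = {
    "model_details": {"name": "clinical-summarizer", "version": "0.1.0"},
    "intended_use": {
        "primary_uses": ["summarize clinician notes for internal review"],
        "out_of_scope": ["autonomous diagnosis", "direct-to-patient advice"],
    },
    "risks_and_limitations": [
        "may confabulate medication names",
        "not evaluated on pediatric records",
    ],
    "evaluation": {"dataset": "synthetic EHR notes", "hallucination_rate": 0.04},
}

# Serialize for review alongside the model artifact.
print(json.dumps(model_card, indent=2))
```

Real tooling adds schema validation and HTML rendering on top of this kind of structure; the point is that out-of-scope uses and known limitations are recorded explicitly, not left implicit.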

πŸ›οΈ Global Governance & Compliance (2026 Update)

By 2026, AI ethics has transitioned from voluntary principles to enforceable law.

EU AI Act (Full Enforcement August 2026)

The definitive global benchmark for risk-based AI regulation. It categorizes systems into Unacceptable, High, Limited, and Minimal risk.
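The tiering logic can be sketched in a few lines. This is a rough illustration only (not legal advice); the actual classification rules in the Act are far more detailed, and the trigger lists below are hypothetical, non-exhaustive examples.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g. social scoring)
    HIGH = "high"                  # sensitive domains subject to strict obligations
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # everything else

# Illustrative triggers only; a real assessment requires legal review.
BANNED_PRACTICES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"healthcare", "employment", "education", "law_enforcement"}

def classify(practice: str, domain: str, interacts_with_humans: bool) -> RiskTier:
    """Map a coarse system description to a risk tier (rough heuristic)."""
    if practice in BANNED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("triage_support", "healthcare", True).value)  # high
```

The 06-governance/risk-tiering-template.md file covers the same exercise as a documentation template rather than code.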

NIST AI 600-1: Generative AI Profile (2025)

A specialized extension of the NIST AI Risk Management Framework (RMF). It defines 12 high-level risks specific to generative AI, including "Confabulation" (hallucination) and improper access to "CBRN" (chemical, biological, radiological, and nuclear) information.
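One cheap signal for confabulation is whether an answer's content words are actually grounded in the retrieved source documents. The sketch below is a naive lexical-overlap heuristic for illustration only (it is not part of NIST guidance or this repo's tooling); production systems use NLI models or LLM-based fact checkers instead.

```python
import re

def grounding_score(answer: str, sources: list[str]) -> float:
    """Fraction of the answer's content words that appear in the sources.

    A low score suggests possible confabulation (unsupported content).
    Crude: matches exact lowercase words of 4+ letters, no stemming.
    """
    tokenize = lambda text: set(re.findall(r"[a-z]{4,}", text.lower()))
    answer_terms = tokenize(answer)
    if not answer_terms:
        return 1.0  # nothing to check
    source_terms = set().union(*(tokenize(s) for s in sources)) if sources else set()
    return len(answer_terms & source_terms) / len(answer_terms)

sources = ["Metformin is a first-line treatment for type 2 diabetes."]
print(grounding_score("Metformin treats type 2 diabetes.", sources))   # 0.75
print(grounding_score("Aspirin cures hepatitis instantly.", sources))  # 0.0
```

Notebook 02-llm-hallucination-detection.ipynb walks through stronger detection approaches.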


🤖 Agentic Safety & Security

With the rise of the Model Context Protocol (MCP) and multi-agent systems, "ethics" now includes preventing autonomous loop failures and unauthorized tool use.
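The two failure modes named above map to two basic controls: a tool allowlist (only pre-approved callables can execute) and a hard iteration budget (runaway loops terminate). A minimal sketch, with hypothetical planner and tool interfaces that are not this repo's API:

```python
class UnauthorizedToolError(Exception):
    """Raised when the agent requests a tool outside the allowlist."""

class LoopBudgetExceeded(Exception):
    """Raised when the agent fails to terminate within the step budget."""

def run_agent(plan_next_step, allowed_tools: dict, max_steps: int = 10):
    """Run agent steps under a tool allowlist and an iteration cap.

    plan_next_step(history) returns {"tool": name, "args": {...}} or None when done.
    """
    history = []
    for _ in range(max_steps):
        step = plan_next_step(history)
        if step is None:                       # agent signals completion
            return history
        tool = step["tool"]
        if tool not in allowed_tools:          # block unauthorized tool use
            raise UnauthorizedToolError(tool)
        result = allowed_tools[tool](**step.get("args", {}))
        history.append((tool, result))
    raise LoopBudgetExceeded(f"no termination within {max_steps} steps")

# Example: a planner that performs one search, then stops.
def demo_planner(history):
    return {"tool": "search", "args": {"query": "mcp threats"}} if not history else None

print(run_agent(demo_planner, {"search": lambda query: f"3 hits for {query!r}"}))
```

The same two controls recur throughout 05-agentic-safety/ as design patterns rather than code.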

MCP Security Resources

Human-in-the-Loop Best Practices
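A common human-in-the-loop pattern is an approval gate: designated high-risk actions block until a human reviewer signs off, and every decision is logged for later audit. A minimal sketch, with hypothetical action names; the approver callback stands in for a CLI prompt or review-queue ticket.

```python
from dataclasses import dataclass, field

@dataclass
class HITLGate:
    """Block designated high-risk agent actions until a human approves them."""
    high_risk_actions: set
    audit_log: list = field(default_factory=list)

    def execute(self, action, run, approver):
        """run: zero-arg callable performing the action; approver: human callback."""
        needs_review = action in self.high_risk_actions
        approved = approver(action) if needs_review else True
        self.audit_log.append((action, needs_review, approved))  # decision chain trace
        return run() if approved else None

gate = HITLGate(high_risk_actions={"write_ehr", "send_prescription"})
print(gate.execute("read_note", lambda: "note text", approver=lambda a: False))  # note text
print(gate.execute("write_ehr", lambda: "written", approver=lambda a: False))    # None
```

Note that low-risk actions still land in the audit log: tracing the full decision chain, not just blocked actions, is what makes post-incident review possible.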


🧬 Bio-Ethics & Precision Medicine

For computational bioinformaticians and healthcare AI developers, ethical AI involves the safe orchestration of synthetic patient data and genomics.
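In the spirit of 04-healthcare/synthetic-patient-data.md, here is a minimal sketch of fully synthetic patient records built with only the standard library. Every field is sampled from made-up distributions, so no real PHI is involved; the field names, value ranges, and the `SYN-` ID prefix are illustrative choices, not a repo convention.

```python
import random

def synthetic_patients(n: int, seed: int = 42) -> list[dict]:
    """Generate fully synthetic patient records for testing AI pipelines."""
    rng = random.Random(seed)  # fixed seed makes test data reproducible
    conditions = ["type_2_diabetes", "hypertension", "asthma", "none"]
    return [
        {
            "patient_id": f"SYN-{i:05d}",   # 'SYN-' prefix marks data as synthetic
            "age": rng.randint(18, 90),
            "sex": rng.choice(["F", "M"]),
            "condition": rng.choice(conditions),
            "systolic_bp": round(rng.gauss(125, 15), 1),
        }
        for i in range(n)
    ]

print(synthetic_patients(3)[0]["patient_id"])  # SYN-00000
```

Sampling independent fields like this preserves no cross-field correlations; when those matter, generation should be driven by published population statistics rather than guesses.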

WHO Guidance on Large Multi-Modal Models (LMMs) for Health (2025)

New standards for transparency and accountability when using Generative AI for disease detection and treatment.

Healthcare AI Resources


✅ Developer "Ethics-by-Design" Checklist

Before deploying your AI system:


🔗 Key Resources

Official Guidelines

Tools & Frameworks

Related Projects


πŸ“ License

MIT License - See LICENSE file for details


👤 Author

Lynn Langit

  • Background: Mayo Clinic / Genomics
  • Focus: Healthcare AI ethics, cloud architecture, precision medicine
  • GitHub: @lynnlangit

🤝 Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Submit a pull request

For major changes, please open an issue first to discuss proposed changes.


Last Updated: January 2026 | Status: Active development. Repository reflects current 2026 standards for ethical AI.
