Skip to content
#

ai-policy

Here are 51 public repositories matching this topic...

Agent orchestration & security template featuring MCP tool building, agent2agent workflows, mechanistic interpretability on sleeper agents, and agent integration via CLI wrappers

  • Updated Apr 1, 2026
  • Rust
AION-BRAIN

The left hemisphere. Frameworks, logic, and certainty architecture. Home of FSVE, AION, LAV, ASL, GENESIS, TOPOS, and 60+ epistemically validated frameworks built to make AI systems reliable, not just capable.

  • Updated Mar 22, 2026
  • Python

Official Technical Stack & Economic Engine for the NUPA Framework. Authored by Brandon Anthony Bedard (Nov 2025). Featuring the 40/40/20 Recursive Reinvestment Model and FASL Protocol

  • Updated Apr 5, 2026
  • Python
AI-acceptable-use-policy

The security layer. Every output clears here before it exits. Threat detection, adversarial pattern recognition, red-team archive, and the Go/No-Go authority that can halt the entire system. Nothing bypasses it.

  • Updated Mar 17, 2026

The presentation layer. Structure, format, and register conversion. The last layer before output — deciding how the brain speaks, not just what it says. Prose or list. Dense or clear. Report or reply.

  • Updated Mar 17, 2026

SpecGuard is a command-line tool that turns AI safety policies and behavioral guidelines into executable tests. Think of it as unit testing for your AI's output. Instead of trusting that your AI will follow the rules defined in a document, SpecGuard enforces them.

  • Updated Jan 21, 2026
  • Python

Improve this page

Add a description, image, and links to the ai-policy topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the ai-policy topic, visit your repo's landing page and select "manage topics."

Learn more