This repository aims to map the ecosystem of artificial intelligence guidelines, principles, codes of ethics, standards, regulation and beyond.
Updated Mar 4, 2026
Agent orchestration & security template featuring MCP tool building, agent2agent workflows, mechanistic interpretability on sleeper agents, and agent integration via CLI wrappers
The comprehensive, community-maintained index of Australian AI Security standards, policies, and frameworks across all 11 jurisdictions.
The left hemisphere. Frameworks, logic, and certainty architecture. Home of FSVE, AION, LAV, ASL, GENESIS, TOPOS, and 60+ epistemically validated frameworks built to make AI systems reliable, not just capable.
Centralized AI IDE rules management for Cursor and GitHub Copilot.
Artificial Intelligence Regulation Interface & Agreements
Official Technical Stack & Economic Engine for the NUPA Framework. Authored by Brandon Anthony Bedard (Nov 2025). Featuring the 40/40/20 Recursive Reinvestment Model and FASL Protocol
This repository provides comprehensive guidelines, frameworks, and sample policies for the ethical and effective integration of AI in progressive organizations. It serves as a platform for discussion and collaboration on AI governance and ethics.
Non-Human Identity Disclosure Standard for Healthcare Voice Workflows
The standard protocol for defining runtime guardrails for your enterprise agents with a mission of trustworthy and reliable agentic systems 🛡️
The Uncomfortable Coexistence of Job Destruction and Labor Shortages
Curated dataset and tools for tracking global AI legislation — US federal, state, and international frameworks.
Customizable AI Acceptable Use Policy and governance framework for US enterprises. MIT licensed. Covers compliance, HR, infosec, and legal.
The security layer. Every output clears here before it exits. Threat detection, adversarial pattern recognition, red-team archive, and the Go/No-Go authority that can halt the entire system. Nothing bypasses it.
The presentation layer. Structure, format, and register conversion. The last layer before output — deciding how the brain speaks, not just what it says. Prose or list. Dense or clear. Report or reply.
SpecGuard is a command-line tool that turns AI safety policies and behavioral guidelines into executable tests. Think of it as unit testing for your AI's output. Instead of trusting that your AI will follow the rules defined in a document, SpecGuard enforces them.
The memory layer. Stores the FCL validation archive, routing event history, and the permanent record of everything the brain has learned. What survives here shapes how the whole system routes next time.
Independent research on human-centered AI and LLMs | Policy frameworks for responsible AI | A collaborative space for researchers, innovators, and policymakers advancing ethical, inclusive AI