🐢 Open-Source Evaluation & Testing library for LLM Agents
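Assuming this entry is the Giskard library (which also scans classic tabular and NLP models), here is a minimal sketch of its automated scan on a small classifier; the toy data, column names, and exact keyword arguments are illustrative assumptions that may differ across versions:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
import giskard

# Toy approval data; column names and labels are illustrative assumptions.
df = pd.DataFrame({
    "age":      [22, 35, 58, 41, 30, 49],
    "income":   [20, 60, 90, 70, 40, 80],
    "approved": [0, 1, 1, 1, 0, 1],
})
clf = LogisticRegression().fit(df[["age", "income"]], df["approved"])

# Wrap the prediction function and data, then run the automated scan,
# which probes for performance, robustness, and bias issues.
gmodel = giskard.Model(
    model=lambda d: clf.predict_proba(d[["age", "income"]]),
    model_type="classification",
    classification_labels=[0, 1],
    feature_names=["age", "income"],
)
gdata = giskard.Dataset(df, target="approved")
report = giskard.scan(gmodel, gdata)  # renders its findings in a notebook
```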
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
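A minimal sketch of that metrics-plus-mitigation workflow, assuming this entry refers to AIF360; the toy DataFrame and the group definitions are illustrative assumptions:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data; the 'sex' and 'label' columns are assumptions for illustration.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1],
    "feat":  [0.2, 0.5, 0.4, 0.1, 0.9, 0.7],
    "label": [0, 1, 0, 1, 1, 1],
})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])
priv, unpriv = [{"sex": 1}], [{"sex": 0}]

# Statistical parity difference: P(y=1 | unprivileged) - P(y=1 | privileged).
metric = BinaryLabelDatasetMetric(dataset, privileged_groups=priv,
                                  unprivileged_groups=unpriv)
print("SPD:", metric.statistical_parity_difference())

# Reweighing adjusts instance weights so outcomes balance across groups.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
dataset_transf = rw.fit_transform(dataset)
```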
A Python package to assess and improve fairness of machine learning models.
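Assuming this entry is Fairlearn, a minimal sketch of a group-wise audit with its MetricFrame; the toy arrays are assumptions:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Toy labels, predictions, and a sensitive feature (assumptions).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
sex    = np.array(["F", "F", "F", "M", "M", "M", "F", "M"])

# Accuracy broken down per group, plus the selection-rate gap between groups.
frame = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                    sensitive_features=sex)
print(frame.by_group)
print("Demographic parity difference:",
      demographic_parity_difference(y_true, y_pred, sensitive_features=sex))
```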
Responsible AI Toolbox is a suite of user interfaces and libraries for model and data exploration and assessment, enabling a better understanding of AI systems. These tools empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.
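A minimal sketch of assembling such an assessment dashboard with the responsibleai and raiwidgets packages; the toy data and model are assumptions, and API details can vary across versions:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Toy data and model; columns and the train/test split are assumptions.
train = pd.DataFrame({"age":    [25, 40, 33, 58, 47, 29],
                      "income": [30, 80, 55, 90, 60, 35],
                      "label":  [0, 1, 0, 1, 1, 0]})
test = train.copy()
model = RandomForestClassifier(random_state=0).fit(
    train[["age", "income"]], train["label"])

# Register the analyses to run, compute them, then open the dashboard UI.
insights = RAIInsights(model, train, test, target_column="label",
                       task_type="classification")
insights.explainer.add()       # model explanations
insights.error_analysis.add()  # error analysis tree and heatmap
insights.compute()
ResponsibleAIDashboard(insights)
```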
LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments.
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE standardizes bias measurement and mitigation for word embedding models; questions and contributions are welcome via issues and pull requests.
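A minimal WEAT query sketch with WEFE; the pretrained embedding and the tiny word sets are assumptions chosen for brevity:

```python
import gensim.downloader as api
from wefe.query import Query
from wefe.word_embedding_model import WordEmbeddingModel
from wefe.metrics import WEAT

# Small pretrained GloVe vectors (an assumption; any KeyedVectors works).
model = WordEmbeddingModel(api.load("glove-wiki-gigaword-50"), "glove-50")

# Target sets (gendered terms) vs. attribute sets (career/family words).
query = Query(
    target_sets=[["he", "man", "boy"], ["she", "woman", "girl"]],
    attribute_sets=[["career", "salary", "office"],
                    ["home", "family", "children"]],
    target_sets_names=["Male terms", "Female terms"],
    attribute_sets_names=["Career", "Family"],
)
print(WEAT().run_query(query, model))  # WEAT effect size and score
```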
The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning workflows.
A curated list of awesome academic research, books, code of ethics, courses, databases, data sets, frameworks, institutes, maturity models, newsletters, principles, podcasts, regulations, reports, responsible scale policies, tools and standards related to Responsible, Trustworthy, and Human-Centered AI.
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
Papers and online resources related to machine learning fairness
Fairness-aware machine learning: bias detection and mitigation for datasets and models, as the sketch below illustrates.
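To make the bias-detection half concrete, a library-agnostic sketch of two common group metrics in plain NumPy; the arrays are illustrative assumptions:

```python
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # model decisions (toy)
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected-group membership (toy)

rate_a = y_pred[group == 0].mean()  # selection rate, group 0
rate_b = y_pred[group == 1].mean()  # selection rate, group 1

# Statistical parity difference: 0 means equal selection rates.
print("SPD:", rate_b - rate_a)
# Disparate impact ratio: the "80% rule" flags values below 0.8.
print("DI :", rate_b / rate_a)
```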
FairPut: a machine learning fairness framework built around LightGBM, covering explainability, robustness, and fairness (by @firmai).
PyTorch package to train and audit ML models for Individual Fairness
👋 Influenciae is a TensorFlow toolbox for influence functions.
[ACL 2020] Towards Debiasing Sentence Representations
[ICML 2021] Towards Understanding and Mitigating Social Biases in Language Models
[ACM 2024] Jurity: Fairness & Evaluation Library
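A minimal sketch following Jurity's statistical parity metric; the toy prediction and membership lists are assumptions:

```python
from jurity.fairness import BinaryFairnessMetrics

predictions = [1, 1, 0, 1, 0, 0]  # binary model decisions (toy)
memberships = [0, 0, 0, 1, 1, 1]  # protected-group membership (toy)

# Difference in positive-prediction rates between the two groups.
metric = BinaryFairnessMetrics.StatisticalParity()
print(metric.get_score(predictions, memberships))
```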
Talks & Workshops by the CODAIT team
A tool for gender bias identification in text. Part of Microsoft's Responsible AI toolbox.