Contrakit

Extending information theory to quantify perspectival contradiction with K(P).

When multiple experts give conflicting advice about the same problem, most systems try to force artificial consensus or pick a single "winner."

Contrakit takes a different approach: it measures exactly how much those perspectives actually contradict—in bits.

What is Contrakit?

Most tools treat disagreement as error—something to iron out until every model or expert agrees. But not all clashes are noise. Some are structural: valid perspectives that simply refuse to collapse into one account. Contrakit is the first Python toolkit to measure that irreducible tension, and to treat it as information—just as Shannon treated randomness.

Our work has shown that this tension is not only measurable but also useful.

But What Does It Do, Practically?

It's a general-purpose yardstick for measuring disagreement. We've used contrakit to quantify structural tension across several domains:

  • In quantum systems, $K(P)$ measures "how quantum" a system is—whether you're looking at Bell inequalities, KCBS polytopes, or magic squares, the measure stays consistent and comparable (quantum examples).
  • In neural networks, $K(P)$ computed from task structure alone predicts minimum hallucination rates before any training happens (hallucination experiments).
  • In statistical paradoxes like Simpson's, $K(P)$ reveals exactly how much the aggregated view contradicts the stratified view (statistical examples), even in cases where mutual information returns 0 (a toy sketch follows this list).
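
A toy sketch of that Simpson's case, using the same lens API shown in the Quickstart below (the distributions are illustrative, not drawn from a real dataset):

from contrakit import Observatory

# Illustrative numbers only: the aggregated view and the stratified view
# assign conflicting distributions to the same treatment outcome.
obs = Observatory.create(symbols=["Recovered", "Not"])
outcome = obs.concept("TreatmentOutcome")

with obs.lens("Aggregated") as agg:
    agg.perspectives[outcome] = {"Recovered": 0.4, "Not": 0.6}
with obs.lens("Stratified") as strat:
    strat.perspectives[outcome] = {"Recovered": 0.7, "Not": 0.3}

behavior = (agg | strat).to_behavior()
print(behavior.contradiction_bits)  # > 0: irreducible tension between the views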

Quickstart

⚠️ Under Construction: This project is under active development. I'm currently translating my Coq formalizations, notebooks, and personal scripts into API functionality and documentation. The core functionality is ready to use, but APIs, documentation, and features will change.

Install:

pip install contrakit

Quickstart:

from contrakit import Observatory

# 1) Model perspectives
obs = Observatory.create(symbols=["Yes", "No"])
Y = obs.concept("Outcome")

with obs.lens("ExpertA") as A:
    A.perspectives[Y] = {"Yes": 0.8, "No": 0.2}
with obs.lens("ExpertB") as B:
    B.perspectives[Y] = {"Yes": 0.3, "No": 0.7}

# 2) Export behavior and quantify reconcilability
behavior = (A | B).to_behavior()  # compose lenses → behavior
print("alpha*:", round(behavior.alpha_star, 3))  # 0.965 (high agreement)
print("K(P):  ", round(behavior.contradiction_bits, 3), "bits")  # 0.051 bits (low cost)

# 3) Where to look next (witness design)
witness = behavior.least_favorable_lambda()
print("lambda*:", witness)  # ~0.5 each expert (balanced conflict)

Why This Matters

When perspectives clash, three quantities emerge. $α^\star$ measures how close they can get to a single account—the best-case agreement coefficient. $K(P)$ measures the cost of forcing consensus—the bits you pay to pretend they agree. $λ^\star$ identifies which contexts drive the conflict—where the tension concentrates.
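
On one consistent reading of the paper's definitions (it reproduces the Quickstart numbers, since $-\log_2 0.965 \approx 0.051$), the two headline quantities are linked by a single logarithm:

$$\alpha^\star(P) = \max_{Q} \min_{c} \sum_{o} \sqrt{P_c(o)\,Q(o)}, \qquad K(P) = -\log_2 \alpha^\star(P)$$

Here each context $c$ (an expert, a measurement setting) contributes one distribution $P_c$, and $Q$ ranges over candidate single accounts.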

Just as entropy priced randomness, $K(P)$ prices contradiction. In quantum contextuality, it measures which measurement scenarios create irreducible tension. In neural network hallucination, it predicts minimum error rates from task structure before training. In statistical paradoxes, it quantifies how much aggregated and stratified views contradict.

Computational systems have long handled multiple perspectives by forcing consensus or averaging them away. Contrakit measures epistemic tension itself, treating contradiction as structured information rather than noise. When experts or models disagree, each contradiction points toward boundaries of current understanding.

When perspectives clash, contrakit quantifies the clash, $λ^\star$ reveals where to investigate, and the structure of the disagreement guides the next reasoning step.

Quantifying epistemic tension reveals not only how well multiple viewpoints can be reconciled, but what each viewpoint is capable of—how far it can stretch, where it breaks, and what it leaves out.

The K(P) Tax

The measure follows from six axioms about how perspectives should combine. From these, a unique formula emerges: contradiction bits $K(P)$, built from the Bhattacharyya overlap between distributions. The measure behaves consistently across domains—from distributed consensus to ensemble learning to quantum contextuality.
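
As a standalone sanity check (plain NumPy, not the contrakit API), a brute-force search under the minimax Bhattacharyya form sketched above recovers the Quickstart values:

import numpy as np

# Brute-force alpha* and K(P) for the two Quickstart experts.
p_a = np.array([0.8, 0.2])  # ExpertA over {Yes, No}
p_b = np.array([0.3, 0.7])  # ExpertB over {Yes, No}

def bhattacharyya(p, q):
    # Overlap between two distributions: sum_o sqrt(p(o) * q(o))
    return float(np.sum(np.sqrt(p * q)))

# Search candidate single accounts q = (x, 1 - x) for the best worst-case overlap.
alpha_star = max(
    min(bhattacharyya(p_a, q), bhattacharyya(p_b, q))
    for x in np.linspace(0.0, 1.0, 10_001)
    for q in [np.array([x, 1.0 - x])]
)

print(round(alpha_star, 3))            # 0.965, matching behavior.alpha_star
print(round(-np.log2(alpha_star), 3))  # 0.051, matching behavior.contradiction_bits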

Contradiction imposes an exact cost. Across compression, communication, and simulation, disagreement costs $K(P)$ bits per symbol. Engineering tasks that must reconcile contextual data face real performance deficits—compression needs extra bits, communication loses capacity, simulation incurs variance penalties.

| Task | Impact |
| --- | --- |
| Compression / shared representation | $+K(P)$ extra bits needed |
| Communication with disagreement | $-K(P)$ bits of capacity lost |
| Simulation with conflicting models | $\times\,(2^{2K(P)} - 1)$ variance penalty |
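
At the Quickstart's $K(P) \approx 0.051$ bits, for example, the simulation penalty works out to $2^{2 \times 0.051} - 1 \approx 0.073$, i.e. roughly 7% extra variance.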

$λ^\star$ targets measurements where contradiction concentrates. Mixing in feasible "compromise" distributions reduces $K(P)$.

API Reference

  • Core classes: Observatory for modeling perspectives, Behavior for analyzing distributions, Space for defining observable systems
  • Key properties: contradiction_bits (the $K(P)$ measure), alpha_star (maximum agreement coefficient)
  • Key methods: least_favorable_lambda() (witness weights showing where conflict concentrates), to_behavior() (convert lens compositions to analyzable behaviors)
  • Full docs: API reference | Mathematical theory

Examples

# Epistemic modeling
poetry run python examples/intuitions/day_or_night.py
poetry run python examples/statistics/simpsons_paradox.py

# Quantum contextuality (writes analysis to figures/)
poetry run python -m examples.quantum.run

# Neural network hallucination experiments
poetry run python examples/hallucinations/run.py

Installing from Source

# Clone the repository
$ git clone https://github.com/off-by-some/contrakit.git && cd contrakit

# Install dependencies
$ poetry install

# Run tests
$ poetry run pytest -q

A Mathematical Theory of Contradiction

Contrakit implements the formal framework from A Mathematical Theory of Contradiction. The paper presents six axioms about how perspectives should combine, derives the unique measure $K(P)$, and proves its consequences across compression, communication, and simulation. Mathematical details, proofs, and axiom structure are in docs/paper/.

License

Dual-licensed: MIT for code (LICENSE), CC BY 4.0 for docs/figures (LICENSE-CC-BY-4.0).

Citation

@software{bridges2025contrakit,
  author = {Bridges, Cassidy},
  title  = {Contrakit: A Python Library for Contradiction},
  year   = {2025},
  url    = {https://github.com/off-by-some/contrakit},
  license = {MIT, CC-BY-4.0}
}
