Add Blindfold: PII detection and protection for LLM apps #254

Open

michalvich wants to merge 1 commit into tensorchord:main from michalvich:add-blindfold

Conversation

@michalvich

What is Blindfold?

Blindfold is a PII detection and protection API built specifically for AI applications. It detects and tokenizes personally identifiable information (names, emails, phone numbers, addresses, SSNs, and more) across 18+ languages before data reaches LLMs.
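To make the "tokenize before the LLM sees it" flow concrete, here is a minimal sketch of the pattern in Python. The regexes and the `Vault` class are illustrative assumptions, not Blindfold's actual API; the real service uses NER models across 18+ languages rather than regular expressions.

```python
import re

# Illustrative patterns only -- Blindfold itself uses NER, not regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

class Vault:
    """Maps opaque tokens back to the original PII values (hypothetical helper)."""
    def __init__(self):
        self.store = {}
        self.counter = 0

    def tokenize(self, text):
        """Replace detected PII with reversible placeholder tokens."""
        def make_sub(kind):
            def _sub(match):
                self.counter += 1
                token = f"[{kind}_{self.counter}]"
                self.store[token] = match.group(0)
                return token
            return _sub
        for kind, pattern in PATTERNS.items():
            text = pattern.sub(make_sub(kind), text)
        return text

    def detokenize(self, text):
        """Restore original values in the model's response."""
        for token, value in self.store.items():
            text = text.replace(token, value)
        return text

vault = Vault()
prompt = "Contact Jane at jane@example.com before the call."
safe_prompt = vault.tokenize(prompt)   # what the LLM actually receives
# ...send safe_prompt to the model; suppose its reply echoes the token...
reply = "Sure, I will email [EMAIL_1] today."
restored = vault.detokenize(reply)     # PII reappears only on your side
```

The key property is that the raw email never leaves the application boundary; the model only ever sees the placeholder.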

Why it belongs in Frameworks for LLM security

Blindfold addresses a critical LLM security concern: preventing PII leakage to language model providers. It sits in the data pipeline before LLM calls and:

  • Detects PII using NER models across 18+ languages
  • Protects data via tokenization, redaction, masking, or encryption
  • Enforces compliance with built-in GDPR, HIPAA, and PCI DSS policies
  • Ships a Python SDK (blindfold-sdk), a Node.js SDK (@blindfold/sdk), a LangChain integration, and an MCP server
  • Offers EU and US regional endpoints for data residency requirements
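The list above names four protection strategies driven by compliance policies. A hypothetical sketch of how a policy might select a strategy follows; the policy-to-strategy mapping and function names are assumptions for illustration, not Blindfold's actual configuration schema.

```python
import hashlib

# Assumed mapping for illustration; real policies are configured in Blindfold.
POLICY_STRATEGY = {
    "GDPR": "tokenize",    # reversible token, original kept out of the LLM
    "HIPAA": "redact",     # value removed entirely
    "PCI_DSS": "mask",     # partial visibility, e.g. first two characters
}

def protect(value: str, policy: str) -> str:
    """Apply the protection strategy associated with a compliance policy."""
    strategy = POLICY_STRATEGY[policy]
    if strategy == "tokenize":
        # Deterministic token derived from the value's hash.
        digest = hashlib.sha256(value.encode()).hexdigest()[:8]
        return f"[TOKEN:{digest}]"
    if strategy == "redact":
        return "[REDACTED]"
    if strategy == "mask":
        return value[:2] + "*" * (len(value) - 2)
    raise ValueError(f"unknown strategy: {strategy}")
```

For example, `protect("4111111111111111", "PCI_DSS")` masks all but the first two digits, while the GDPR path keeps a reversible token so the original can be recovered server-side.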

Placement

Added to the Frameworks for LLM security table in alphabetical order (before Plexiglass), per contribution guidelines.

