Store ZODB blobs in S3-compatible object storage.
- Wraps any ZODB base storage (FileStorage, RelStorage, MappingStorage, ...)
- Works with any S3-compatible service (AWS S3, MinIO, Ceph, DigitalOcean Spaces)
- Local LRU filesystem cache for fast reads
- Full ZODB two-phase commit integration (transactional safety)
- ZConfig integration for zope.conf configuration
- Supports MVCC storages (`new_instance()`)
- Garbage collection of orphaned S3 objects during `pack()`
```shell
pip install zodb-s3blobs
```

Add `%import zodb_s3blobs` and use the `<s3blobstorage>` section wrapping any base storage:

```
%import zodb_s3blobs

<zodb_db main>
  <s3blobstorage>
    bucket-name my-zodb-blobs
    s3-endpoint-url http://minio:9000
    s3-access-key $S3_ACCESS_KEY
    s3-secret-key $S3_SECRET_KEY
    cache-dir /var/cache/zodb-s3-blobs
    cache-size 2GB
    <filestorage>
      path /var/lib/zodb/Data.fs
    </filestorage>
  </s3blobstorage>
</zodb_db>
```

ZConfig expands `$VARIABLE` and `${VARIABLE}` from the process environment.
For production, consider omitting s3-access-key and s3-secret-key entirely
and relying on the boto3 credential chain (IAM roles, instance profiles,
~/.aws/credentials, or the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
environment variables).
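For example, the same section with both key options omitted (a sketch based on the configuration above; boto3 then resolves credentials on its own):

```
<s3blobstorage>
  bucket-name my-zodb-blobs
  cache-dir /var/cache/zodb-s3-blobs
  cache-size 2GB
  <filestorage>
    path /var/lib/zodb/Data.fs
  </filestorage>
</s3blobstorage>
```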
When wrapping RelStorage, zodb-s3blobs overrides RelStorage's blob handling.
Blobs go to S3 instead of the blob_chunk table. RelStorage still handles object data (pickles) in the RDBMS.
```
%import zodb_s3blobs

<zodb_db main>
  <s3blobstorage>
    bucket-name my-zodb-blobs
    cache-dir /var/cache/zodb-s3-blobs
    cache-size 2GB
    <relstorage>
      <postgresql>
        dsn dbname='zodb' user='zodb' host='localhost'
      </postgresql>
    </relstorage>
  </s3blobstorage>
</zodb_db>
```

| Parameter | Default | Description |
|---|---|---|
| `bucket-name` | (required) | S3 bucket name |
| `s3-prefix` | `""` | Key prefix in bucket |
| `s3-endpoint-url` | None | For MinIO, Ceph, etc. |
| `s3-region` | None | AWS region |
| `s3-access-key` | None | Uses boto3 credential chain if omitted. Use `$ENV_VAR` substitution; never hardcode credentials. |
| `s3-secret-key` | None | Uses boto3 credential chain if omitted. Use `$ENV_VAR` substitution; never hardcode credentials. |
| `s3-use-ssl` | true | Whether to use SSL for S3 connections |
| `s3-addressing-style` | auto | S3 addressing style: `path`, `virtual`, or `auto` |
| `s3-sse-customer-key` | None | Base64-encoded 256-bit key for SSE-C encryption. Requires SSL. |
| `cache-dir` | (required) | Local cache directory path |
| `cache-size` | 1GB | Maximum local cache size |
zodb-s3blobs uses the same proxy/wrapper pattern as ZODB's built-in BlobStorage. It wraps any base storage via `__getattr__` and explicitly overrides all blob methods so they always take precedence.
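The delegation half of this pattern can be sketched in a few lines (a simplified illustration, not the actual zodb-s3blobs class; real method bodies and signatures differ):

```python
class S3BlobStorageProxy:
    """Sketch of the proxy/wrapper pattern: delegate everything to the
    base storage except the explicitly defined blob methods."""

    def __init__(self, base_storage):
        self._base = base_storage

    def __getattr__(self, name):
        # Called only for attributes NOT defined on this class, so the
        # blob overrides below always take precedence over delegation.
        return getattr(self._base, name)

    def loadBlob(self, oid, serial):
        # Overridden: served from the local cache / S3, not the base
        # storage. Placeholder return value for illustration only.
        return f"cache-path-for-{oid!r}-{serial!r}"
```

Because `__getattr__` is only consulted for missing attributes, every non-blob method (`load`, `store`, `history`, ...) passes straight through to the wrapped storage.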
- `storeBlob`: Object data (the pickle) is stored in the base storage. The blob file is staged locally.
- `tpc_vote`: Staged blobs are uploaded to S3. If any upload fails, the transaction aborts cleanly.
- `tpc_finish`: No S3 operations (this method must not fail per the ZODB contract). Staged files are moved into the local cache.
- `tpc_abort`: Uploaded S3 objects are deleted (best effort). Local staged files are cleaned up.
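A minimal sketch of the per-transaction staging state behind those steps (hypothetical names; error handling and the real storage wiring are elided):

```python
class BlobCommitState:
    """Tracks staged blob files and uploaded S3 keys for one transaction."""

    def __init__(self):
        self.staged = []     # (key, local_path) pairs awaiting upload
        self.uploaded = []   # keys uploaded during tpc_vote

    def vote(self, upload):
        # Upload every staged blob; any exception propagates, which lets
        # the transaction machinery abort cleanly before tpc_finish.
        for key, path in self.staged:
            upload(key, path)
            self.uploaded.append(key)

    def abort(self, delete):
        # Best effort: remove anything already uploaded to S3.
        for key in self.uploaded:
            try:
                delete(key)
            except Exception:
                pass  # leftovers are collected later by pack()
```

Keeping the uploads in `tpc_vote` rather than `tpc_finish` matches the ZODB contract: `tpc_finish` must not fail, so all fallible network I/O happens before the vote completes.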
```
blobs/{oid_hex}/{tid_hex}.blob
```

With a configured prefix: `{prefix}/blobs/{oid_hex}/{tid_hex}.blob`
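Building such a key from ZODB's 8-byte oid/tid values could look like this (a hypothetical helper mirroring the layout above, not the package's actual code):

```python
def blob_key(oid: bytes, tid: bytes, prefix: str = "") -> str:
    """Build the S3 key for one blob revision.

    Layout: {prefix}/blobs/{oid_hex}/{tid_hex}.blob
    """
    base = f"blobs/{oid.hex()}/{tid.hex()}.blob"
    return f"{prefix}/{base}" if prefix else base
```

Grouping all revisions of one blob under a shared `{oid_hex}/` prefix is what makes the per-OID scan during `pack()` a simple prefix listing.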
The local filesystem cache provides fast reads after the first access. It uses LRU eviction: a background daemon thread removes the oldest files (by access time) when the total size exceeds the configured maximum. The cache is required, because S3 latency makes direct access impractical for ZODB's synchronous access patterns.
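One eviction pass could be sketched roughly like this (an illustrative standalone function, not the library's implementation, which runs in a background thread; note that access-time accuracy depends on filesystem mount options such as `relatime`):

```python
import os

def evict_lru(cache_dir: str, max_bytes: int) -> None:
    """Remove least-recently-accessed files until the cache fits max_bytes."""
    entries, total = [], 0
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            path = os.path.join(root, name)
            st = os.stat(path)
            entries.append((st.st_atime, st.st_size, path))
            total += st.st_size
    entries.sort()  # oldest access time first
    for _atime, size, path in entries:
        if total <= max_bytes:
            break
        os.remove(path)  # evict the least recently accessed file
        total -= size
```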
During pack(), the base storage is packed first, then S3 is scanned for keys referencing OIDs that are no longer reachable. Orphaned keys are deleted. This also cleans up any objects left behind by failed abort operations.
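The scan-and-delete step can be sketched as follows, with the S3 client injected so any boto3-compatible client works (illustrative only; not the package's actual code, and `reachable_oids` stands in for whatever reachability data the packed base storage provides):

```python
def gc_orphaned_blobs(s3_client, bucket, prefix, reachable_oids):
    """Delete blob objects whose OID is no longer reachable after pack().

    s3_client is assumed to be a boto3 S3 client (get_paginator /
    delete_object); reachable_oids is a set of hex OID strings.
    """
    scan_prefix = f"{prefix}/blobs/" if prefix else "blobs/"
    paginator = s3_client.get_paginator("list_objects_v2")
    deleted = []
    for page in paginator.paginate(Bucket=bucket, Prefix=scan_prefix):
        for obj in page.get("Contents", []):
            # Key layout: {prefix}/blobs/{oid_hex}/{tid_hex}.blob
            oid_hex = obj["Key"][len(scan_prefix):].split("/", 1)[0]
            if oid_hex not in reachable_oids:
                s3_client.delete_object(Bucket=bucket, Key=obj["Key"])
                deleted.append(obj["Key"])
    return deleted
```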
Ensure your S3 bucket has appropriate access controls (Block Public Access enabled, restrictive bucket policy). The minimum IAM policy required by zodb-s3blobs:
```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "s3:GetObject",
      "s3:PutObject",
      "s3:DeleteObject",
      "s3:ListBucket"
    ],
    "Resource": [
      "arn:aws:s3:::BUCKET_NAME",
      "arn:aws:s3:::BUCKET_NAME/*"
    ]
  }]
}
```

zodb-s3blobs supports SSE-C (Server-Side Encryption with Customer-Provided Keys).
The S3 service encrypts/decrypts data using your key but never stores it.
Works with AWS S3, Hetzner Object Storage, MinIO (with KES), and other S3-compatible services.
Warning — AWS SSE-C deprecation (April 2026): AWS will disable SSE-C by default on new S3 buckets starting April 2026. Existing buckets are unaffected. For new buckets, you must explicitly enable SSE-C in the bucket policy, or consider migrating to SSE-KMS. If you receive 403 errors with SSE-C configured, this is the likely cause. See the AWS announcement for details.
Generate a 256-bit key:
```shell
python -c "import base64, os; print(base64.b64encode(os.urandom(32)).decode())"
```

Configure via environment variable:

```
s3-sse-customer-key $S3_SSE_KEY
```

Important: If you lose the key, encrypted data is irrecoverable. SSL is required (enforced at startup).
Security note: The SSE-C key is held in process memory for the lifetime of the storage instance. In long-running servers, consider using IAM-based encryption (SSE-KMS) instead if memory exposure is a concern. Python's string handling makes secure memory clearing impractical.
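For reference, the SSE-C request parameters that boto3's `put_object`/`get_object` accept can be built like this (a hypothetical helper; boto3 computes `SSECustomerKeyMD5` automatically if you omit it):

```python
import base64
import hashlib

def ssec_params(key_b64: str) -> dict:
    """Build SSE-C parameters for a boto3 S3 call from a base64-encoded key."""
    raw = base64.b64decode(key_b64)
    if len(raw) != 32:
        raise ValueError("SSE-C key must be 256 bits (32 bytes)")
    return {
        "SSECustomerAlgorithm": "AES256",
        "SSECustomerKey": key_b64,
        # MD5 of the raw key bytes, used by S3 to detect transmission errors.
        "SSECustomerKeyMD5": base64.b64encode(hashlib.md5(raw).digest()).decode(),
    }
```

Every request that touches an SSE-C object (put, get, head, copy) must carry these parameters, since S3 never stores the key.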
Warning: The credentials below are MinIO defaults for local development only. Never use default credentials in production.
```yaml
# docker-compose.yml
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
```

Create the bucket:
```shell
mc alias set local http://localhost:9000 minioadmin minioadmin
mc mb local/zodb-blobs
```

For development, clone the repository and run the tests:

```shell
git clone https://github.com/bluedynamics/zodb-s3blobs.git
cd zodb-s3blobs
uv venv
uv pip install -e ".[test]"
pytest
```

For reproducible deployments (production), pin dependencies with a lockfile:

```shell
uv pip compile pyproject.toml -o requirements.txt
uv pip install -r requirements.txt
```

The source code is managed in a Git repository, with its main branches hosted on GitHub. Issues can be reported there too.
We'd be happy to see many forks and pull requests to make this package even better. We welcome AI-assisted contributions, but expect every contributor to fully understand and be able to explain the code they submit. Please don't send bulk auto-generated pull requests.
Maintainers are Jens Klein and the BlueDynamics Alliance developer team. We appreciate any contribution; if a release on PyPI is needed, please contact one of us. We also offer commercial support for training, coaching, integration, or adaptations.
ZPL-2.1