# Embedded PostgreSQL Server with TRUE Concurrent Connections
`npx pgserve` and it just works, no credentials needed. Zero config, auto-provisioned databases, unlimited concurrent connections.
Quick Start • Features • CLI • API • Performance
```bash
npx pgserve
```

Connect from any PostgreSQL client; databases auto-create on first connection:

```bash
psql postgresql://localhost:8432/myapp
```

Note: the v2 default is the Unix socket (see Daemon mode). The TCP form above is the v1 compat path.
**Naming.** The npm package stays `pgserve`. The CLI now also ships as `autopg`; both bins route to the same dispatcher. Use `autopg` for the new console (`autopg ui`) and configuration surface (`autopg config`, `autopg restart`); `pgserve <subcommand>` keeps working as a forever alias. Settings live at `~/.autopg/settings.json` and are migrated from `~/.pgserve/` automatically on first run. See Console and Configuration.
| Feature | Description |
|---|---|
| Real PostgreSQL 18 | Native binaries, not WASM: full compatibility and extension support |
| Unlimited Concurrency | Native PostgreSQL process forking, no connection locks |
| Zero Config | Just run `pgserve` and connect to any database name |
| Auto-Provision | Databases created automatically on first connection |
| Memory Mode | Fast and ephemeral for development (default) |
| RAM Mode | Use `--ram` for `/dev/shm` storage (Linux, 2x faster) |
| Persistent Mode | Use `--data ./path` for durable storage |
| Async Replication | Sync to a real PostgreSQL with minimal overhead |
| pgvector Built-in | Use `--pgvector` for auto-enabled vector similarity search |
| Cross-Platform | Linux x64, macOS ARM64/x64, Windows x64 |
| Any Client Works | psql, node-postgres, Prisma, Drizzle, TypeORM |
```bash
# Zero install (recommended)
npx pgserve

# Global install
npm install -g pgserve

# Project dependency
npm install pgserve
```

PostgreSQL binaries are automatically downloaded on first run (~100MB).
Download `pgserve-windows-x64.exe` from GitHub Releases. Double-click to run, or use the CLI:

```
pgserve-windows-x64.exe --port 5432
pgserve-windows-x64.exe --data C:\pgserve-data
```

`autopg` and `pgserve` are interchangeable: every subcommand routes through the same dispatcher. Use whichever you prefer; new examples in this README and in `console/` use `autopg`.
```
autopg [options]                             # foreground server (alias: pgserve)
autopg daemon                                # long-lived background daemon
autopg install [--port N] [--data P]         # register pgserve under pm2
autopg uninstall                             # remove from pm2 (data dir kept)
autopg status                                # pm2 + on-disk config snapshot
autopg url | autopg port                     # canonical connection string / port
autopg config <list|get|set|edit|path|init>  # manage ~/.autopg/settings.json
autopg restart                               # pm2-aware: pm2 restart pgserve, else SIGTERM + respawn
autopg ui [--port N] [--no-open]             # local web console on 127.0.0.1
```
Foreground options accepted by `autopg` / `pgserve` (no subcommand):

```
Options:
  --port <number>         PostgreSQL port (default: 8432)
  --data <path>           Data directory for persistence (default: in-memory)
  --ram                   Use RAM storage via /dev/shm (Linux only, fastest)
  --host <host>           Host to bind to (default: 127.0.0.1)
  --log <level>           Log level: error, warn, info, debug (default: info)
  --cluster               Force cluster mode (auto-enabled on multi-core)
  --no-cluster            Force single-process mode
  --workers <n>           Number of worker processes (default: CPU cores)
  --no-provision          Disable auto-provisioning of databases
  --sync-to <url>         Sync to real PostgreSQL (async replication)
  --sync-databases <p>    Database patterns to sync (comma-separated)
  --pgvector              Auto-enable pgvector extension on new databases
  --max-connections <n>   Max concurrent connections (default: 1000)
  --help                  Show help message
```
Examples:

```bash
# Development (memory mode, auto-clusters on multi-core)
pgserve

# RAM mode (Linux only, 2x faster)
pgserve --ram

# Persistent storage
pgserve --data /var/lib/pgserve

# Custom port
pgserve --port 5433

# Enable pgvector for AI/RAG applications
pgserve --pgvector

# RAM mode + pgvector (fastest for AI workloads)
pgserve --ram --pgvector

# Sync to production PostgreSQL
pgserve --sync-to "postgresql://user:pass@db.example.com:5432/prod"
```

pgserve@2 ships a singleton daemon that binds a Unix control socket inside `$XDG_RUNTIME_DIR/pgserve` (fallback `/tmp/pgserve`). One daemon per host serves every consumer on the box: no port conflicts, no credentials, kernel-rooted identity. Run it under PM2 or systemd so it restarts automatically.
```bash
# Foreground (for debugging)
pgserve daemon

# Stop a running daemon
pgserve daemon stop
```

A second `pgserve daemon` invocation while the first is running exits with `already running, pid N`. A daemon killed with `kill -9` leaves an orphan PID file and socket; the next `pgserve daemon` boot detects the dead pid and cleans both up automatically.
Connect from any libpq client (no host/port/user/password required; the daemon authenticates via `SO_PEERCRED` on accept):

```bash
psql -h "${XDG_RUNTIME_DIR:-/tmp}/pgserve" -d myapp

# or via connection URI
psql "postgresql:///myapp?host=${XDG_RUNTIME_DIR:-/tmp}/pgserve"
```

`pgserve install` registers pgserve as a hardened pm2 process in one command. It is idempotent: re-running it is a no-op when already installed.
```bash
pgserve install                  # one-shot register + start under pm2
pgserve install --port 8442      # custom port
pgserve install --data /data/pg  # custom data dir

pgserve url        # postgres://localhost:8432/postgres
pgserve port       # 8432
pgserve status     # pm2 + on-disk config snapshot
pgserve uninstall  # remove from pm2; keep data dir
```

Hardened defaults (tuned for production-grade Postgres workloads, not toy-machine values):
| Flag | Default | Why |
|---|---|---|
| `--max-memory-restart` | `4G` | Postgres's realistic working set: shared_buffers + autovacuum + connection backends. 1G OOM-kills under modest load. Override with `PGSERVE_MAX_MEMORY=8G pgserve install`. |
| `--max-restarts` | `50` | Tolerates extended outages (NATS reconnect storms, host pressure). Combined with `--min-uptime`, only rapid failures count. |
| `--min-uptime` | `10000` ms | A restart counts against the cap only when the process crashed within 10s of starting. Healthy long-uptime crashes don't burn the budget. |
| `--restart-delay` | `4000` ms | Initial gap between restarts. |
| `--exp-backoff-restart-delay` | `100` → ~`60000` ms | Exponential spread on repeated failures so we don't hammer pm2 and the host on persistent issues. |
| `--kill-timeout` | `60000` ms | Postgres needs time to flush WAL on graceful shutdown; 60s of headroom. |
| `--log-date-format` | `YYYY-MM-DD HH:mm:ss.SSS` | Operator-friendly timestamps in pm2 logs. |
| `--output` / `--error` | `~/.pgserve/logs/pgserve-{out,error}.log` | Rotates via pm2-logrotate (install separately). |
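The `--exp-backoff-restart-delay` bounds above can be sketched as a simple doubling schedule (the doubling factor is an assumption; pm2's exact growth curve may differ):

```typescript
// Hypothetical sketch of exponential restart backoff between the
// documented bounds (100 ms initial, ~60 s cap). Not pm2's actual algorithm.
function restartDelay(attempt: number, initialMs = 100, capMs = 60_000): number {
  // Double the delay on every rapid failure, but never exceed the cap.
  return Math.min(initialMs * 2 ** attempt, capMs);
}
```

The cap matters because without it a long run of failures would push the delay past anything useful; the floor keeps the first retry fast for transient crashes.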
Config: `~/.pgserve/config.json` (override the directory with `PGSERVE_CONFIG_DIR`). Memory ceiling: env-tunable via `PGSERVE_MAX_MEMORY` at install time.

Downstream services that need a Postgres connection can shell out to `pgserve install` (a no-op if already running) and read the canonical URL from `pgserve url` instead of spinning up their own embedded pgserve.
```js
// ecosystem.config.cjs
module.exports = {
  apps: [{
    name: 'pgserve',
    script: 'pgserve',
    args: 'daemon',
    autorestart: true,
    max_memory_restart: '1G',
    env: { XDG_RUNTIME_DIR: '/run/user/1000' },
  }],
};
```

```bash
pm2 start ecosystem.config.cjs && pm2 save
```

`/etc/systemd/user/pgserve.service`:
```ini
[Unit]
Description=pgserve daemon
After=default.target

[Service]
Type=simple
ExecStart=/usr/bin/env npx pgserve daemon
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```

Enable for the current user:

```bash
systemctl --user enable --now pgserve
journalctl --user -u pgserve -f
```

The systemd user unit inherits `XDG_RUNTIME_DIR` automatically; the daemon binds `${XDG_RUNTIME_DIR}/pgserve/control.sock` (mode 0600, dir mode 0700) plus a `.s.PGSQL.5432` symlink so off-the-shelf PostgreSQL clients connect without further configuration.
Each consumer is identified by a kernel-rooted fingerprint derived from the peer's `SO_PEERCRED` plus the resolved `package.json` name, collapsed to 12 hex chars. The daemon auto-creates one database per fingerprint, named `app_<sanitized-name>_<12hex>`, and refuses to route a peer into any other database, failing with SQLSTATE 28P01 `invalid_authorization - database fingerprint mismatch`.
```bash
# What `psql -l` shows on a host with three consumers:
$ psql -h "${XDG_RUNTIME_DIR:-/tmp}/pgserve" -l
          Name           |  Owner   | ...
-------------------------+----------+----
 app_genie_a1b2c3d4e5f6  | postgres | ...
 app_brain_4f3e2d1c0b9a  | postgres | ...
 app_omni_9876543210ab   | postgres | ...
```

Monorepo rule: the root `package.json` name wins. Every workspace under it shares one fingerprint and one database; sub-packages do not get their own. If you need separate isolation, run them from separate checkouts.
Sanitization: non-`[a-z0-9]` runs collapse to `_`, the name is lowercased and truncated to 30 chars so the final DB name stays within PostgreSQL's 63-char identifier limit. A name like `@scope/foo bar` becomes `_scope_foo_bar`.
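A minimal sketch of that sanitization rule (assumed behavior matching the description, not pgserve's actual code):

```typescript
// Collapse non-[a-z0-9] runs to '_', lowercase, truncate to 30 chars
// (sketch of the documented rule, not the actual pgserve implementation).
function sanitizeDbName(pkgName: string): string {
  return pkgName
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '_') // each run of disallowed chars becomes one '_'
    .slice(0, 30);               // keep room for the app_ prefix + 12-hex suffix
}

console.log(sanitizeDbName('@scope/foo bar')); // "_scope_foo_bar"
```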
Emergency kill switch: `PGSERVE_DISABLE_FINGERPRINT_ENFORCEMENT=1` disables enforcement for the daemon process. Use it as a debugging tool only: every bypassed connection emits an `enforcement_kill_switch_used` audit event and the daemon logs a deprecation warning at boot.
The default lifecycle is ephemeral: a database whose `liveness_pid` is dead AND whose `last_connection_at` is older than 24h is dropped on the next GC sweep (at boot, hourly, and sampled on-connect). Reaped DBs emit `db_reaped_ttl` or `db_reaped_liveness` audit events.
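The reap condition can be sketched as follows (assumed logic per the description above; the real sweep also handles audit events and sampling):

```typescript
// A database is reaped only when its liveness pid is dead AND it has been
// idle for more than 24h (sketch of the documented rule, not actual code).
const TTL_MS = 24 * 60 * 60 * 1000;

function shouldReap(pidAlive: boolean, lastConnectionAtMs: number, nowMs: number): boolean {
  return !pidAlive && nowMs - lastConnectionAtMs > TTL_MS;
}
```

Note the AND: a live pid protects a database indefinitely, and any fresh connection resets the idle clock.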
If your app holds state worth keeping past 24h of idle (genie's wish/agent store, internal dashboards, anything you'd be unhappy to lose), declare persistence in `package.json` by setting `"pgserve": { "persist": true }` (full example at the end of this README). Persisted databases are never reaped, regardless of liveness or TTL.
Dev workloads with long debug cycles do not normally need this; any new connection slides the TTL window forward. Reach for `pgserve.persist` when the app is genuinely long-lived (a production daemon, dashboard, or durable agent state), not just for convenience.
A local web console for inspecting and editing the running cluster. It runs in-process via `node:http`, binds 127.0.0.1 only, and is a single-user dev tool: no auth, no TLS, never expose it.

```bash
autopg ui             # walk ports 8433-8533, picking the first free one
autopg ui --port 8500 # bind exactly 8500
autopg ui --no-open   # skip browser launch (CI / headless)
```

The first stateful screen, Settings, is functional today: it renders the 6-section schema (server / runtime / sync / supervision / postgres / ui), validates inline, and round-trips through `~/.autopg/settings.json` with optimistic concurrency (sha256 etag + If-Match). The other 10 screens (Databases, Tables, SQL, Optimizer, Security, Ingress, Health, Sync, RLM-trace, RLM-sim) are scaffolded as `[ coming soon ]` placeholders; Health ships next.
The UI shells out to the CLI for every mutation (`autopg config set` under PUT, `autopg restart` under POST). The daemon stays untouched (no HTTP API, no signal-based reload), so the console works even when no daemon is running.

See `console/README.md` for the local dev loop and design-system source.
The CLI is the source of truth. Settings live at `~/.autopg/settings.json` (override the directory with `AUTOPG_CONFIG_DIR`; the legacy `PGSERVE_CONFIG_DIR` is still honored and falls back to `~/.pgserve/`). Every write is atomic, chmod 0600, and tagged with a sha256 etag for optimistic concurrency on the UI helper's PUT path.
Schema sections (one per `~/.autopg/settings.json` top-level key):

| Section | Purpose |
|---|---|
| `server` | Router port/host, backend socket, superuser credentials |
| `runtime` | Log level, auto-provision, pgvector, data dir |
| `sync` | WAL-based logical replication toggle |
| `supervision` | pm2 hardening defaults (memory, restart, kill timeout) |
| `postgres` | 15 curated GUCs (shared_buffers, wal_level, …) + `_extra` raw passthrough |
| `ui` | Console theme / phosphor / density / CRT toggle |
```bash
autopg config init                               # write defaults
autopg config list                               # KEY VALUE SOURCE table
autopg config get postgres.shared_buffers        # machine-friendly value
autopg config set postgres.shared_buffers 256MB  # validates + atomic write
autopg config edit                               # opens $EDITOR on settings.json
autopg config path                               # absolute path (honors AUTOPG_CONFIG_DIR)
```

Precedence: default < file < env. `AUTOPG_*` env vars beat `PGSERVE_*` (the legacy form is still honored, with a one-time deprecation log per process, so existing operators keep working). The console shows a yellow OVERRIDDEN BY ENV chip on rows whose env var is currently set.
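That precedence chain can be sketched as a simple fallback lookup (key names are illustrative; this is not the actual autopg resolver):

```typescript
// default < file < env, with AUTOPG_* beating the legacy PGSERVE_* form.
// Illustrative sketch only; the real resolver also handles typed values.
function resolveSetting(
  key: string,
  defaults: Record<string, string>,
  file: Record<string, string>,
  env: Record<string, string | undefined>,
): string | undefined {
  return (
    env[`AUTOPG_${key}`] ??   // new env form wins outright
    env[`PGSERVE_${key}`] ??  // legacy env form, still honored
    file[key] ??              // ~/.autopg/settings.json
    defaults[key]             // built-in default
  );
}
```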
GUC passthrough: `postgres._extra` is a free-form `{ gucName: scalar }` map for any PostgreSQL setting outside the curated 15. Names must match `^[a-z][a-z0-9_]*$`; values must be string / number / boolean (no newlines, no leading `-`). Both layers are revalidated at boot, so a typo logs a `logger.warn` and is dropped; postgres still starts.
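Those `_extra` validation rules can be sketched as a single predicate (assumed shape, not the actual implementation):

```typescript
// Validate a free-form GUC entry: lowercase snake_case name, scalar value,
// no newlines, no leading '-' (sketch of the documented rules).
function isValidExtraGuc(name: string, value: unknown): boolean {
  if (!/^[a-z][a-z0-9_]*$/.test(name)) return false;
  if (typeof value === 'number' || typeof value === 'boolean') return true;
  if (typeof value === 'string') {
    // Reject values that could smuggle extra config lines or flags.
    return !value.includes('\n') && !value.startsWith('-');
  }
  return false; // objects, arrays, null, undefined are all rejected
}
```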
One-shot migration: on first run, if `~/.pgserve/` exists and `~/.autopg/` does not, the contents are copied (preserving mtimes) and a `MIGRATED-FROM-PGSERVE.md` marker is dropped in the old dir. The migration is idempotent: a second run is a no-op.

Full schema reference: `docs/settings-schema.md`.
TCP is off by default in v2. Bring it back only when you need it (Kubernetes pods, remote sync, legacy clients that cannot speak Unix sockets) by opting in:

```bash
pgserve daemon --listen :5432

# Repeatable for multiple binds:
pgserve daemon --listen :5432 --listen 0.0.0.0:5433
```

TCP peers cannot use `SO_PEERCRED`, so they must authenticate at connect time. Issue a bearer token bound to a known fingerprint:
```bash
# Prints the token ONCE; the daemon stores only its hash.
pgserve daemon issue-token --fingerprint a1b2c3d4e5f6

# TCP client passes it via libpq application_name:
#   ?fingerprint=a1b2c3d4e5f6&token=<bearer>

# Revoke when done:
pgserve daemon revoke-token <token-id>
```

Audit events: `tcp_token_issued`, `tcp_token_used`, `tcp_token_denied`. Tokens are verified with a constant-time compare. Without a valid token a TCP connection is refused; there is no anonymous TCP path.
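The hash-at-rest plus constant-time compare described above can be sketched like this (assumed scheme; pgserve's actual token format and hash choice are not specified here):

```typescript
import { createHash, timingSafeEqual } from 'node:crypto';

// The daemon stores only the token's hash; verification hashes the presented
// token and compares in constant time (sketch of the described scheme).
function verifyToken(presented: string, storedHashHex: string): boolean {
  const presentedHash = createHash('sha256').update(presented).digest();
  const stored = Buffer.from(storedHashHex, 'hex');
  // timingSafeEqual throws on length mismatch, so check length first.
  return stored.length === presentedHash.length && timingSafeEqual(presentedHash, stored);
}
```

The constant-time compare matters because a byte-by-byte early-exit comparison leaks how many leading bytes matched, which an attacker on the TCP path could measure.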
Verify no port is bound when `--listen` is not set:

```bash
ss -tlnp | grep pgserve  # no rows expected
```

Daemon-first apps can let the first caller install/start the singleton and then connect through the Unix socket. The daemon derives the app identity from kernel peer credentials and routes it to that app's signed fingerprint database.
```js
import { daemonClientOptions, ensureDaemon } from 'pgserve';
import postgres from 'postgres';

await ensureDaemon({
  dataDir: `${process.env.HOME}/.pgserve/data`,
  logLevel: 'warn',
});

const sql = postgres(daemonClientOptions());
await sql`SELECT current_database()`;
```

The classic TCP router API remains available for explicit v1-compatible embedded servers:
```js
import { startMultiTenantServer } from 'pgserve';

const server = await startMultiTenantServer({
  port: 8432,
  host: '127.0.0.1',
  baseDir: null,        // null = memory mode
  logLevel: 'info',
  autoProvision: true,
  enablePgvector: true, // Auto-enable pgvector on new databases
  syncTo: null,         // Optional: PostgreSQL URL for replication
  syncDatabases: null   // Optional: patterns like "myapp,tenant_*"
});

// Get stats
console.log(server.getStats());

// Graceful shutdown
await server.stop();
```

**node-postgres**
```js
import pg from 'pg';

const client = new pg.Client({
  connectionString: 'postgresql://localhost:8432/myapp'
});

await client.connect();
await client.query('CREATE TABLE users (id SERIAL, name TEXT)');
await client.query("INSERT INTO users (name) VALUES ('Alice')");

const result = await client.query('SELECT * FROM users');
console.log(result.rows);

await client.end();
```

**Prisma**
```prisma
// prisma/schema.prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
```

```bash
# .env
DATABASE_URL="postgresql://localhost:8432/myapp"

# Run migrations
npx prisma migrate dev
```

**Drizzle**
```ts
import { drizzle } from 'drizzle-orm/node-postgres';
import { pgTable, serial, text } from 'drizzle-orm/pg-core';
import { Pool } from 'pg';

// Table definition (the original snippet used usersTable without defining it)
const usersTable = pgTable('users', {
  id: serial('id').primaryKey(),
  name: text('name'),
});

const pool = new Pool({
  connectionString: 'postgresql://localhost:8432/myapp'
});
const db = drizzle(pool);

const users = await db.select().from(usersTable);
```

Sync ephemeral pgserve data to a real PostgreSQL database. Replication uses PostgreSQL's native logical replication, so there is zero performance impact on the hot path.
```bash
# Sync all databases
pgserve --sync-to "postgresql://user:pass@db.example.com:5432/mydb"

# Sync specific databases (supports wildcards)
pgserve --sync-to "postgresql://..." --sync-databases "myapp,tenant_*"
```

Replication is handled by PostgreSQL's WAL-based replication processes, completely off the runtime event loop. Sync failures don't affect main server operation.
pgvector is built in; no separate installation required. Just enable it:

```bash
# Auto-enable pgvector on all new databases
pgserve --pgvector

# Combined with RAM mode for the fastest vector operations
pgserve --ram --pgvector
```

When `--pgvector` is enabled, every new database automatically has the `vector` extension installed. No SQL setup required.
Using pgvector:

```sql
-- Create a table with a vector column (1536 = OpenAI embedding size)
CREATE TABLE documents (id SERIAL, content TEXT, embedding vector(1536));

-- Insert with embedding
INSERT INTO documents (content, embedding) VALUES ('Hello', '[0.1, 0.2, ...]');

-- k-NN similarity search (L2 distance)
SELECT content FROM documents ORDER BY embedding <-> $1 LIMIT 10;
```

See the pgvector documentation for the full API reference.
Without the `--pgvector` flag, you can still enable pgvector manually per database:

```sql
CREATE EXTENSION IF NOT EXISTS vector;
```

pgvector 0.8.1 is bundled with the PostgreSQL binaries. It supports L2 distance (`<->`), inner product (`<#>`), and cosine distance (`<=>`).
| Scenario | SQLite | PostgreSQL | pgserve 1.2.0 | pgserve v2 | pgserve v2 --ram |
|---|---|---|---|---|---|
| Concurrent Writes (10 agents) | 91 qps | 204 qps | 1,667 qps | 2,273 qps | 4,167 qps |
| Mixed Workload | 383 qps | 484 qps | 507 qps | 1,133 qps | 2,109 qps |
| Write Lock (50 writers) | 111 qps | 228 qps | 2,857 qps | 3,030 qps | 4,348 qps |

| Metric | PostgreSQL | pgserve 1.2.0 | pgserve v2 | pgserve v2 --ram |
|---|---|---|---|---|
| Vector INSERT (1000 × 1536-dim) | 152/sec | 392/sec | 387/sec | 1,082/sec |
| k-NN Search (k=10, 10k corpus) | 22 qps | 33 qps | 31 qps | 30 qps |
| Recall@10 | 100% | 100% | 100% | 100% |

> Why pgserve wins on writes: RAM mode uses `/dev/shm` (tmpfs), eliminating fsync latency. Vector search is CPU-bound, so RAM mode shows minimal benefit there.
| Engine | CRUD QPS | Vec QPS | Recall | P50 | P99 | Score |
|---|---|---|---|---|---|---|
| SQLite | 195 | N/A | N/A | 6.3ms | 17.3ms | 117 |
| pgserve 1.2.0 | 305 | 65 | 100% | 3.3ms | 7.0ms | 209 |
| PostgreSQL | 1,677 | 152 | 100% | 6.0ms | 19.0ms | 1,067 |
| pgserve v2 | 2,145 | 149 | 100% | 5.3ms | 13.0ms | 1,347 |
| pgserve v2 --ram | 3,541 | 381 | 100% | 3.3ms | 10.7ms | 2,277 |

Methodology: Recall@k is measured against brute-force ground truth (industry standard). The PostgreSQL baseline is Docker `pgvector/pgvector:pg18`. RAM mode is available on Linux and WSL2.

Run the benchmarks yourself:

```bash
bun tests/benchmarks/runner.js --include-vector
```
- Runtime: Node.js >= 18 (npm/npx)
- Platform: Linux x64, macOS ARM64/x64, Windows x64

Contributors: this project uses Bun internally for development:

```bash
# Install dependencies
bun install

# Run tests
bun test

# Run benchmarks
bun tests/benchmarks/runner.js

# Lint
bun run lint
```

Contributions welcome! Fork the repo, create a feature branch, add tests, and submit a PR.
Persistence opt-in example (`package.json`):

```json
{ "name": "my-long-lived-app", "pgserve": { "persist": true } }
```

MIT License - Copyright (c) 2025 Namastex Labs

Made with love by Namastex Labs