# OpenSail v0.1.1
The open platform for building AI apps, agents, workflows, and automations you can inspect, run, share, and own.
OpenSail turns recurring work from Slack, email, spreadsheets, tickets, approvals, and internal tools into runnable software. Describe the workflow in plain English and OpenSail produces a real runnable system with a trigger, an agent or app action, selected tools and connectors, delivery targets, approval gates, budget limits, run history, and a sandboxed workspace when the workflow needs files, code, or services.
It is for anyone with a process that keeps coming back: a founder chasing follow-ups, an operator buried in handoffs, a lawyer managing intake and documents, a support team routing issues, a developer building internal tools, or a company giving people a sanctioned place to build useful AI.
## Highlights
### Agents you can build a fleet of
- Agents operate inside real workspaces with files, terminals, containers, previews, Git, artifacts, and deploy targets
- Schedule agents for daily reports, weekly reviews, customer follow-ups, support triage, billing checks, or monitoring tasks
- Each run keeps trigger, status, outputs, cost, touched systems, and approval history
- Invoke from chat, Slack, email, webhook, schedule, app event, or API call
- Frontend, backend, test, ops, research, and review agents can share the same workspace
### Apps as first-class installable units
- Versioned, manifest-described bundles with content-addressed publishing and an approval pipeline
- Typed actions (JSON-schema-validated functions), embeddable views, cached data resources, and app-to-app dependencies
- Connectors with proxy mode so app code never sees raw secrets
- Per-dimension billing (creator-pays, installer-pays, BYOK, platform-subsidized) with promotional budgets and caps
- Forking with full source access and provenance, plus bundles for starter packs
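
As a sketch of what a typed, JSON-schema-validated action might look like in an app manifest, here is a hypothetical fragment. The field names and the `invoice-summarizer` app are illustrative, not the actual OpenSail manifest schema.

```json
{
  "name": "invoice-summarizer",
  "version": "1.2.0",
  "actions": [
    {
      "name": "summarize_invoice",
      "input_schema": {
        "type": "object",
        "properties": {
          "invoice_url": { "type": "string", "format": "uri" },
          "currency": { "type": "string", "default": "USD" }
        },
        "required": ["invoice_url"]
      }
    }
  ],
  "connectors": [{ "id": "gmail", "mode": "proxy" }],
  "billing": { "dimension": "installer-pays", "cap_usd": 10 }
}
```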
### Automation Runtime
- One runtime for triggers, agents, apps, delivery, approvals, and spend
- Mental model: Trigger → Event → Run → Action → Delivery
- Three action kinds: `agent.run`, `app.invoke`, and `gateway.send`
- Per-automation contracts gate allowed tools, MCP servers, apps, compute tier, approval rules, and spend caps
- Risky steps pause at approval boundaries and resume from checkpoints after a human decision
- Approve from Slack, email, or the web app
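
The trigger-to-delivery model and per-automation contract could be expressed roughly like the YAML below. This is a hypothetical sketch to make the pieces concrete; the field names and file shape are illustrative, not OpenSail's actual contract format.

```yaml
trigger:
  type: schedule
  cron: "0 9 * * 1-5"          # weekdays at 09:00
action:
  kind: agent.run               # one of agent.run | app.invoke | gateway.send
  agent: support-triage
contract:
  allowed_tools: [slack, linear]
  compute_tier: 1
  spend_cap_usd: 5.00
  approval:
    required_for: [external_email, data_deletion]   # risky steps pause here
delivery:
  target: slack
  channel: "#support-triage"
```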
### Conversational builders
- `@agent-builder` drafts a user-owned agent with name, instructions, model, connected MCPs, skills, and tool permissions
- `@automation-builder` drafts a cron schedule for one of your existing agents with prompt, delivery target, compute tier, and spend cap
- Service Integrator wires connectors and channels before a workflow goes live
- Every published change goes through an in-chat review card that publishes only on click
### Workspaces and sandboxes
- Btrfs-backed snapshot filesystem with up to 5 retained snapshots per project
- Instant fork, rollback to any point, and branch off a working agent without breaking what is already running
- Three-tier compute model on Kubernetes
  - Tier 0 for file ops, web calls, and agent reasoning
  - Tier 1 for warm ephemeral containers that execute and return to a pool
  - Tier 2 for full Kubernetes namespaces with multi-container environments and live previews
- Hibernation snapshots the entire shared volume and restores all containers atomically
### Architecture Panel
- Visual node-graph canvas built on React Flow
- Container, browser preview, deployment target, and hosted agent nodes
- Edge types for env injection, HTTP, database, cache, browser preview, and deployment
- Both humans and agents read and write the same `.tesslate/config.json`
- Publish serializes the graph into the manifest; install restores it into a new project
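
Since humans and agents share the serialized graph, it presumably pairs node definitions with typed edges. A hypothetical `.tesslate/config.json` fragment, shown only to illustrate the shape (the keys and values are guesses, not the documented schema):

```json
{
  "nodes": [
    { "id": "api", "type": "container", "image": "node:20" },
    { "id": "db", "type": "container", "image": "postgres:16" },
    { "id": "preview", "type": "browser-preview", "port": 3000 }
  ],
  "edges": [
    { "from": "api", "to": "db", "type": "database" },
    { "from": "preview", "to": "api", "type": "http" }
  ]
}
```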
### Agentic coding surface
- Monaco editor, terminal attached to the running container, file tree, live preview with hot module reload, Git panel with diff, blame, history, and branch switching
- Shared context with agents: same tree, files, shell output, app preview, and architecture graph
- Reviewable diffs you can accept, revise, or keep editing
- Kanban with TSK refs inside the project, hand tasks to agents and watch them close
- Long-running context with progressive compaction past 80 percent of the model window
- Progressive persistence so sessions resume across browser reloads, worker restarts, and network changes
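
The 80-percent compaction rule can be sketched as a simple threshold check. This is a minimal illustration of the idea, not OpenSail's implementation; a real compactor would summarize older turns with a model call rather than folding them into a placeholder.

```python
def needs_compaction(tokens_used: int, model_window: int, threshold: float = 0.8) -> bool:
    """Return True once the conversation crosses the compaction threshold."""
    return tokens_used >= threshold * model_window

def compact(messages: list[str], keep_last: int = 4) -> list[str]:
    """Fold older messages into one summary entry, keeping recent turns verbatim."""
    if len(messages) <= keep_last:
        return messages
    summary = f"[summary of {len(messages) - keep_last} earlier messages]"
    return [summary] + messages[-keep_last:]

history = [f"turn {i}" for i in range(10)]
if needs_compaction(tokens_used=170_000, model_window=200_000):
    history = compact(history)
print(len(history))  # 5: one summary entry plus the last four turns
```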
### Design Engineer
- Click any pixel in your running app and jump to the JSX line that rendered it
- React Fiber walking, stable OID mapping, two-way sub-100ms sync between inspector and source
- Tailwind autocomplete, interactive box model, color pickers, HTML attribute editing
- Insert palette with semantic HTML, project components, and framework patterns for React, Next.js, Vue, Svelte, Angular, and Astro
### Connectors
- MCP-native, plus REST endpoints for anything else
- Slack, Gmail, Google Drive, Linear, Jira, Notion, GitHub, Salesforce, HubSpot, Confluence, internal APIs, and more
- Build your own connectors and publish them for your team
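
Because connectors are MCP-native, registering a custom one likely amounts to pointing the platform at an MCP server. A hypothetical config entry, sketched in the common MCP server-launch shape (the server path and secret syntax are invented for illustration):

```json
{
  "mcpServers": {
    "internal-api": {
      "command": "node",
      "args": ["./connectors/internal-api/server.js"],
      "env": { "API_TOKEN": "${secret:internal-api}" }
    }
  }
}
```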
### Agent skills
- Reusable capabilities loaded progressively (catalog in context, body on demand)
- A data analysis pipeline, a writing style, a code review checklist, a research methodology, a report template, anything
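
Progressive loading, with the catalog always in context and skill bodies fetched on demand, can be sketched like this. It is an illustrative pattern, not OpenSail's implementation; the body lookup stands in for reading a skill file.

```python
class SkillCatalog:
    """Keep only name and description in context; load a skill body on first use."""

    def __init__(self, entries: dict[str, str]):
        self.entries = entries                # name -> one-line description (always in context)
        self._bodies: dict[str, str] = {}     # lazily populated cache of full bodies

    def catalog(self) -> str:
        """The lightweight listing an agent sees before invoking anything."""
        return "\n".join(f"- {name}: {desc}" for name, desc in self.entries.items())

    def load(self, name: str) -> str:
        """Fetch the full skill body only when the skill is actually invoked."""
        if name not in self._bodies:
            self._bodies[name] = f"<full instructions for {name}>"  # stand-in for a file read
        return self._bodies[name]

skills = SkillCatalog({
    "code-review": "checklist for PR review",
    "report": "weekly report template",
})
print(skills.catalog())
print(skills.load("code-review"))
```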
### Desktop App
- Tauri v2 shell wrapping the same orchestrator as the cloud
- Local SQLite database under `OPENSAIL_HOME`, zero network dependency by default
- Three runtimes per project: local subprocesses, Docker Compose, or a remote Kubernetes cluster
- Cloud pairing for Codex-style cloud sandboxing from your own machine
- Offline-first marketplace with SHA-256 verified downloads
- Per-project permissions in `.tesslate/permissions.json` with allow, deny, and ask policies
- Tray approval cards with TSK-numbered tickets
- Adopt any directory on your machine as a project
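
A hypothetical `.tesslate/permissions.json` showing how allow, deny, and ask policies might fit together. The key names are a guess at the shape, not the documented schema:

```json
{
  "filesystem": { "read": "allow", "write": "ask" },
  "network": { "outbound": { "default": "deny", "allow": ["api.github.com"] } },
  "shell": { "exec": "ask" }
}
```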
### Model Providers
- Every call routes through LiteLLM
- Anthropic, OpenAI, DeepSeek, Meta, Mistral, Qwen, Google, Moonshot, MiniMax, Z.AI (ChatGLM), and xAI
- BYOK routes model usage to your provider account
- Self-hosted Ollama or vLLM for fully air-gapped open-weight models
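
Since every call routes through LiteLLM, BYOK and air-gapped routing can be described with a standard LiteLLM `model_list`. This sketch uses LiteLLM's documented config shape, but the model names and aliases are examples, not OpenSail defaults:

```yaml
model_list:
  - model_name: default
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514
      api_key: os.environ/ANTHROPIC_API_KEY   # BYOK: billed to your provider account
  - model_name: airgapped
    litellm_params:
      model: ollama/llama3                    # self-hosted, no external calls
      api_base: http://localhost:11434
```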
### Deployment targets
- Ship to 22 providers by drawing an edge on the Architecture Panel
- Vercel, Netlify, Cloudflare Pages, DigitalOcean App Platform, Railway, Fly.io, Heroku, Render, Koyeb, Zeabur, Northflank
- GitHub Pages, Surge, Deno Deploy, Firebase Hosting
- AWS App Runner, GCP Cloud Run, Azure Container Apps, DigitalOcean Container Apps
- Docker Hub, GitHub Container Registry, Download or Export
### Communication gateways
- Deploy agents to Slack, Telegram, Discord, WhatsApp, Signal, and CLI WebSocket
- Hot-reloaded adapters, per-schedule delivery routing
### Themes and whitelabel
- Restyle the entire UI without touching code
- Run OpenSail as your company's all-in-one AI platform with your own brand and curated marketplace
### Gateway API and MCP Server
- External users (agents or humans) interact with your OpenSail instance via API key
- MCP Server (in development) lets external coding agents connect for sandboxed compute and publish apps directly
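
Calling the Gateway API presumably looks like any bearer-token HTTP API. A minimal sketch that only builds the authenticated request; the `/api/v1/runs` path and the payload fields are hypothetical, not a documented endpoint:

```python
import json
import urllib.request

def gateway_request(base_url: str, api_key: str, agent: str, prompt: str) -> urllib.request.Request:
    """Build an authenticated request to trigger an agent run (endpoint path is a guess)."""
    payload = json.dumps({"agent": agent, "prompt": prompt}).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/v1/runs",
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = gateway_request("http://localhost:8000", "sk-example", "support-triage", "triage new tickets")
print(req.full_url)                      # http://localhost:8000/api/v1/runs
print(req.get_header("Authorization"))   # Bearer sk-example
```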
## Run OpenSail
| You want to | Use this path |
|---|---|
| Try OpenSail or develop the web app locally | Docker Compose |
| Let a script set up macOS dependencies | Guided macOS installer |
| Run the desktop app | Desktop release or desktop dev mode |
| Test the real Kubernetes runtime locally | Minikube |
| Run your own production instance | AWS EKS Terraform plus Kustomize |
```bash
git clone https://github.com/TesslateAI/OpenSail.git
cd OpenSail
cp .env.example .env
docker compose up -d
```

Open http://localhost. API docs at http://localhost:8000/docs.
Full setup paths and the production guide live in the README.
## Why open source
Workspace agents touch your data, your tools, and your processes. You should be able to see exactly what they are doing, run them on your own infrastructure, and choose the model provider that fits the work. OpenSail is open-source infrastructure you can operate directly. Your data, your models, your infrastructure, your cloud, your code, your control.
## License
Apache 2.0