A BEAM-native personal autonomous AI agent built on Elixir/OTP.
AlexClaw monitors the world (RSS feeds, web sources, GitHub repositories, APIs), accumulates knowledge, executes workflows autonomously on schedule, and communicates with its owner via Telegram. It routes every task to the cheapest available LLM that satisfies the required reasoning tier, including fully local models.
Designed as a single-user personal agent. Not a platform. Not a marketplace. One codebase, fully auditable, running on your infrastructure.
"I didn't plan most of this. I just kept solving the next problem."
- **Multi-Model LLM Router**: Tier-based routing (`light`/`medium`/`heavy`/`local`) with priority-based selection. All providers (cloud and local) are stored in PostgreSQL and fully manageable from the admin UI. Tracks daily usage per provider in ETS. Ships with default providers (Gemini, Claude, Ollama, LM Studio) seeded on first boot; add, remove, or reconfigure any provider at runtime.
- **Workflow Engine**: Define multi-step linear pipelines with conditional branching. Each skill declares its possible outcomes (branches), and the executor routes to different steps based on which branch fires. Execution is sequential: one path per run, no fan-out (a step cannot broadcast to multiple parallel successors). Notify skills pass through their input unchanged, enabling chained delivery to multiple channels in the same pipeline. Per-step resilience controls (circuit breaker, missing-skill handling, fallback routing). Zero LLM tokens spent on routing: pure deterministic pattern matching. Full run history with branch path visualization.
- **OTP Circuit Breaker**: Per-skill circuit breaker using GenServer + ETS. After consecutive failures a skill is temporarily disabled (circuit open), then automatically re-tested after a cooldown. Telegram notifications on state transitions. Dead-letter routing: workflow steps can skip, halt, or fall back to an alternative skill when a circuit is open or a skill is missing. Zero external dependencies: pure OTP.
- **Multi-Gateway (Telegram + Discord)**: Bidirectional communication via Telegram long-polling or Discord bot WebSocket. Command routing is deterministic pattern matching; no LLM is involved in dispatch. Both gateways can run simultaneously; responses route back to the originating transport. The Gateway behaviour allows adding new transports without changing skills or the Dispatcher.
- **Runtime Configuration**: All settings (API keys, prompts, limits, personas) are stored in PostgreSQL, cached in ETS, and editable at runtime via the admin UI. No restart required for any config change.
- **Persistent Memory with Semantic Search**: PostgreSQL + pgvector for knowledge storage. Deduplication by URL. Hybrid search combines vector cosine similarity and keyword matching: vector results are prioritized, keyword results fill gaps for exact matches. Embeddings are generated asynchronously via the LLM router (Gemini `gemini-embedding-001`, Ollama `nomic-embed-text`, or any OpenAI-compatible endpoint). 768-dimension vectors with an HNSW index. All skills that store knowledge auto-embed in the background.
- **Knowledge Base RAG**: Separate `knowledge_entries` table for documentation and reference material, isolated from news/conversation memory. Scraper skills fetch, chunk, and embed documentation from hexdocs.pm (API reference + official guides). Chat integrates both Knowledge and Memory search with a context source selector (Docs only / Memory only / Both / None). The system prompt instructs the LLM to cite provided documentation over general knowledge. Currently covers 22 Elixir ecosystem packages, including the full Elixir stdlib and 53 official guides.
- **Cron Scheduler**: Quantum-based. Jobs defined in config or DB.
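As a sketch, config-defined Quantum jobs might look like the fragment below. The job names, workflow names, and the `AlexClaw.Workflows.run/1` call are illustrative assumptions, not taken from the codebase; DB-defined jobs are managed from the admin UI instead.

```elixir
import Config

# Hypothetical config fragment: two scheduled workflow runs.
config :alex_claw, AlexClaw.Scheduler,
  jobs: [
    # Sweep RSS feeds every 30 minutes
    rss_sweep: [
      schedule: "*/30 * * * *",
      task: {AlexClaw.Workflows, :run, ["rss_digest"]}
    ],
    # Daily summary at 08:00
    morning_digest: [
      schedule: "0 8 * * *",
      task: {AlexClaw.Workflows, :run, ["morning_digest"]}
    ]
  ]
```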
| Skill | Description |
|---|---|
| `rss_collector` | Fetch RSS feeds, deduplicate, score relevance via LLM, notify. Configurable fetch timeout (global + per-step) |
| `web_search` | Search the web and synthesize answers |
| `web_browse` | Fetch and summarize a URL, or answer questions about it |
| `research` | Deep research with memory context |
| `conversational` | Free-text LLM conversation |
| `telegram_notify` | Send a Telegram message as a workflow step |
| `discord_notify` | Send workflow output to a Discord channel. Configurable `channel_id` per step to deliver to different channels in the same workflow |
| `llm_transform` | Run a prompt template through the LLM (workflow glue step) |
| `api_request` | Make an authenticated HTTP request |
| `github_security_review` | Fetch PR/commit diff, run LLM security analysis |
| `google_calendar` | Fetch upcoming Google Calendar events |
| `google_tasks` | Manage Google Tasks lists and items |
| `shell` | Execute whitelisted OS commands for container introspection (2FA-gated) |
| `web_automation` | Browser automation via headless Playwright sidecar (experimental) |
| `hexdocs_scraper` | Scrape hexdocs.pm docs into knowledge base embeddings (dynamic) |
This feature is under heavy development. The API, permission model, and sandboxing may change without notice.
Load custom skills at runtime: no code changes, no Docker rebuild, no restart. Drop an `.ex` file into the skills volume (or upload via the admin UI), and it compiles into the running VM immediately.
- **Permission sandbox**: Dynamic skills declare permissions (`llm`, `web_read`, `telegram_send`, `memory_read`, `memory_write`, `config_read`, `resources_read`, `knowledge_read`, `knowledge_write`, `skill_invoke`) and interact through `SkillAPI` only. Undeclared permissions are denied at runtime.
- **Namespace enforcement**: Module must be `AlexClaw.Skills.Dynamic.*`
- **Integrity verification**: SHA-256 checksum stored on load, verified on boot. Tampered files are skipped with a Telegram alert.
- **Persistence**: Dynamic skills survive container restarts (DB + Docker volume)
- **Admin UI**: Upload, reload, and unload skills from the Skills page. Core and dynamic skills are shown separately.
- **Telegram commands**: `/skill load`, `/skill unload`, `/skill reload`, `/skill create`, `/skill list`
- **Cross-skill invocation**: Dynamic skills can call other skills (core or dynamic) through `SkillAPI.run_skill/3`
- **Conditional branching**: Dynamic skills can declare `routes/0` (e.g. `[:on_results, :on_empty, :on_error]`) and return triple tuples `{:ok, result, :branch_name}` for workflow routing. Routes are persisted in the database on load and cleaned up on unload, matching core skill behavior.
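A minimal branching skill might be sketched like this. The module is illustrative: it mirrors the documented `routes/0` and triple-tuple contract, but omits permission declarations and `SkillAPI` calls, and the input shape is an assumption.

```elixir
defmodule AlexClaw.Skills.Dynamic.FeedCheck do
  # Illustrative sketch of a branching dynamic skill. The workflow
  # executor would pick the next step from the third tuple element.

  # Branches this skill can fire; persisted on load for routing.
  def routes, do: [:on_results, :on_empty, :on_error]

  def run(%{"items" => []}), do: {:ok, %{count: 0}, :on_empty}

  def run(%{"items" => items}) when is_list(items),
    do: {:ok, %{count: length(items)}, :on_results}

  def run(_bad_input), do: {:ok, %{reason: :bad_input}, :on_error}
end
```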
| Permission | Grants access to |
|---|---|
| `:llm` | LLM completion, system prompt |
| `:web_read` | HTTP GET, POST, and arbitrary requests |
| `:telegram_send` | Send Markdown or HTML messages to Telegram |
| `:memory_read` | Search, check existence, list recent memories |
| `:memory_write` | Store new memory entries |
| `:config_read` | Read runtime config values |
| `:resources_read` | List and fetch resources |
| `:knowledge_read` | Search and check existence in knowledge base |
| `:knowledge_write` | Store knowledge entries |
| `:skill_invoke` | Call other skills by name |
See `test/fixtures/skills/skill_template.ex` for a fully documented template with the complete `SkillAPI` reference. Dynamic skill examples (RSS with full article fetching, NVD CVE Monitor, Research, GitHub Review, Web Search, Web Browse) are available in the same directory.
AlexClaw can review pull requests and commits for security issues:
- Run as a workflow step with per-workflow repo, token, and security focus
- Trigger manually via Telegram: `/github pr owner/repo 42`
- GitHub webhook endpoint available (`/webhooks/github`) with HMAC-SHA256 verification
- Diff truncation at 24 KB, so it works with local models
- Structured output: RISK LEVEL, FINDINGS, SUMMARY, RECOMMENDATION
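The webhook signature check can be sketched as follows. This is a simplified stand-in: the project states it uses `Plug.Crypto.secure_compare`, while this sketch inlines a constant-time comparison using only OTP's `:crypto`; the module name is hypothetical.

```elixir
defmodule WebhookSigSketch do
  # GitHub sends "X-Hub-Signature-256: sha256=<hex HMAC of the raw body>".
  def valid?(secret, raw_body, signature_header) do
    expected =
      "sha256=" <>
        Base.encode16(:crypto.mac(:hmac, :sha256, secret, raw_body), case: :lower)

    secure_compare(expected, signature_header)
  end

  # Constant-time comparison: OR together the XOR of every byte pair,
  # so timing does not leak the position of the first mismatch.
  defp secure_compare(a, b) when byte_size(a) == byte_size(b) do
    Enum.zip(:binary.bin_to_list(a), :binary.bin_to_list(b))
    |> Enum.reduce(0, fn {x, y}, acc -> Bitwise.bor(acc, Bitwise.bxor(x, y)) end)
    |> Kernel.==(0)
  end

  defp secure_compare(_, _), do: false
end
```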
- **Health endpoint**: `GET /health` (unauthenticated) returns `{"status":"ok","version":"...","db":"connected"}` for load balancers and Docker healthchecks. Returns HTTP 503 when the database is unreachable.
- **Metrics endpoint**: `GET /metrics` (authenticated) returns a JSON payload with system stats (uptime, memory, BEAM processes), LLM provider usage, workflow run counts, skill and circuit breaker states, log severity counts, and knowledge/memory entry counts.
- **Session-based authentication**: all routes except `/login` and `/health` require an authenticated session
- **Two-Factor Authentication (2FA)**: TOTP-based via authenticator apps. Setup and confirmation via Telegram (`/setup 2fa`, `/confirm 2fa`)
- **Built-in login rate limiting**: ETS-based, with configurable max attempts and block duration, adjustable at runtime without restart
- **HMAC-SHA256 webhook verification**: GitHub webhook endpoint uses `Plug.Crypto.secure_compare` for timing-safe signature validation
- **Encryption at rest**: API keys and tokens are AES-256-GCM encrypted in PostgreSQL, decrypted transparently at runtime
- **Sensitive key masking**: API keys and tokens show partial values in the admin UI
- **Shell command security**: 5-layer defense: disabled by default, 2FA gate, whitelist with word-boundary check, blocklist for shell metacharacters, no shell interpretation (`System.cmd/3` with args as a list), configurable timeout + output truncation
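The encryption-at-rest scheme can be sketched with OTP's `:crypto` AEAD API. The module, the SHA-256 key derivation from `SECRET_KEY_BASE`, and the `iv <> tag <> ciphertext` storage layout are all illustrative assumptions, not the project's actual scheme.

```elixir
defmodule VaultSketch do
  # Sketch of AES-256-GCM encrypt/decrypt for DB-stored secrets.
  @aad "alexclaw"

  def encrypt(secret_key_base, plaintext) do
    key = :crypto.hash(:sha256, secret_key_base)   # derive a 32-byte key (illustrative)
    iv = :crypto.strong_rand_bytes(12)             # fresh nonce per value

    {ciphertext, tag} =
      :crypto.crypto_one_time_aead(:aes_256_gcm, key, iv, plaintext, @aad, true)

    # Store nonce + auth tag + ciphertext together in one column.
    iv <> tag <> ciphertext
  end

  def decrypt(secret_key_base, <<iv::binary-12, tag::binary-16, ciphertext::binary>>) do
    key = :crypto.hash(:sha256, secret_key_base)
    # Returns the plaintext, or :error if the key/tag does not match.
    :crypto.crypto_one_time_aead(:aes_256_gcm, key, iv, ciphertext, @aad, tag, false)
  end
end
```

This also shows why changing `SECRET_KEY_BASE` invalidates stored secrets: the derived key no longer matches, so decryption fails.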
```
Telegram  <──> TelegramGateway ──┐
Discord   <──> DiscordGateway  ──┼──> Router ──> Dispatcher ──> Skills
                                 │
Admin UI (Chat) ────────> SkillSupervisor ──> Dynamic Skills
                          (DynamicSupervisor)
                                 │
                 ┌───────────────┼───────────────┐
               RSS            Research        NVD CVE
              Skill             Skill         Monitor
                                 │
                             LLM Router
               (Gemini / Anthropic / Ollama / LM Studio)
                                 │
                    ┌────────────┴────────────┐
                 Memory                    Config
        (pgvector + embeddings)     (DB + ETS + PubSub)
            semantic search

GitHub Webhook ────────> WebhookController ──> GitHubSecurityReview
Scheduler (Quantum) ───> Workflows.Executor ──┬──> CircuitBreaker ──> Skills ──> Branch Router
Phoenix LiveView Admin ──> all of the above   └──> Fallback / Skip / Halt ────> Next Step
```
Every skill runs as an isolated OTP process. Crashes are contained and supervised. The circuit breaker wraps each skill transparently; skills have zero awareness of it. The Dispatcher uses deterministic pattern matching, so routing costs no LLM tokens.
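The isolation property can be illustrated with plain OTP primitives. This is a generic sketch, not the project's actual supervision tree: a crashing "skill" task is observed as data by the caller instead of taking it down.

```elixir
# Run a "skill" under a Task.Supervisor without linking it to the caller.
{:ok, sup} = Task.Supervisor.start_link()

# This skill crashes immediately...
task = Task.Supervisor.async_nolink(sup, fn -> raise "skill blew up" end)

# ...but the crash is contained: the caller sees it as a value.
result =
  case Task.yield(task, 1_000) do
    {:exit, _reason} -> {:error, :skill_crashed}
    {:ok, value} -> {:ok, value}
    nil -> {:error, :timeout}
  end
```

The circuit breaker builds on exactly this containment: a crash becomes a countable failure, and enough failures open the circuit.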
See ALEXCLAW_ARCHITECTURE.md for the full design document.
```shell
git clone https://github.com/thatsme/AlexClaw.git
cd AlexClaw
cp .env.example .env

# Edit .env: set DATABASE_PASSWORD, SECRET_KEY_BASE, ADMIN_PASSWORD,
# TELEGRAM_BOT_TOKEN, TELEGRAM_CHAT_ID, and at least one LLM API key

docker compose up -d
```

Open http://localhost:5001 and log in with your `ADMIN_PASSWORD`. Send `/ping` to your Telegram bot to verify connectivity.
For detailed setup instructions, Telegram bot setup, and local model configuration, see INSTALLATION.md.
All configuration is managed at runtime through the admin UI (`/config`). On first boot, values are seeded from environment variables. After that, changes are made in the UI; no restart is needed.
| Variable | Description |
|---|---|
| `DATABASE_PASSWORD` | PostgreSQL password |
| `SECRET_KEY_BASE` | Phoenix session secret (`mix phx.gen.secret`) |
| `ADMIN_PASSWORD` | Web interface login password |
| `TELEGRAM_BOT_TOKEN` | From @BotFather |
| `TELEGRAM_CHAT_ID` | Your Telegram chat ID |
| Variable | Description |
|---|---|
| `GEMINI_API_KEY` | Google Gemini (free tier available) |
| `ANTHROPIC_API_KEY` | Anthropic Claude |
| `OLLAMA_ENABLED=true` + `OLLAMA_HOST` | Local Ollama instance |
| `LMSTUDIO_ENABLED=true` + `LMSTUDIO_HOST` | Local LM Studio instance |
| Variable | Description |
|---|---|
| `DISCORD_ENABLED=true` | Enable the Discord gateway |
| `DISCORD_BOT_TOKEN` | Discord bot token from the Developer Portal |
| `DISCORD_CHANNEL_ID` | Channel ID for commands (auto-detected on first message) |
| `DISCORD_GUILD_ID` | Server (guild) ID |
All other settings (GitHub tokens, webhook secrets, LLM limits, prompts, skill config) are managed at runtime through the Config UI after first boot.
See `.env.example` for the full list of bootstrap variables.
| Tier | Default providers | Typical use |
|---|---|---|
| `light` | Gemini Flash, Claude Haiku | RSS scoring, classification, simple tasks |
| `medium` | Gemini Pro, Claude Sonnet | Summarization, research, security review |
| `heavy` | Claude Opus | Deep reasoning (explicit only) |
| `local` | LM Studio, Ollama | Privacy-sensitive content, offline use, zero cost |
All providers live in the database and can be added, removed, or reconfigured from the admin UI. The defaults above are seeded on first boot. The router selects by priority within each tier (lower priority number = preferred), tracks daily usage, and falls back to the next available provider. A fully local deployment with no API keys is supported: enable a local provider and all tiers will fall back to it.
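The selection rule, priority order within a tier plus fallback to the next available tier, can be sketched as below. The module name and provider-map shape are illustrative assumptions, not the actual router's data model.

```elixir
defmodule RouterSketch do
  # Pick the preferred available provider for a tier:
  # lower priority number wins; unavailable providers are skipped.
  def pick(providers, tier) do
    providers
    |> Enum.filter(&(&1.tier == tier and &1.available?))
    |> Enum.sort_by(& &1.priority)
    |> List.first()
  end

  # Walk an ordered list of tiers and return the first hit. This is
  # how a local-only deployment can still serve every tier.
  def pick_with_fallback(providers, tiers) do
    Enum.find_value(tiers, fn tier -> pick(providers, tier) end)
  end
end
```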
| Command | Description |
|---|---|
| `/ping` | Check if the bot is alive |
| `/status` | System status (uptime, memory, active skills) |
| `/skills` | List registered skills (core + dynamic) |
| `/skill load <file>` | Compile and register a dynamic skill |
| `/skill unload <name>` | Remove a dynamic skill |
| `/skill reload <name>` | Recompile a dynamic skill |
| `/skill create <name>` | Generate a skill template file |
| `/llm` | Show LLM provider status |
| `/workflows` | List all workflows with status and ID |
| `/run <id or name>` | Run a workflow on demand |
| `/research <query>` | Deep research with memory context |
| `/search <query>` | Web search and synthesis |
| `/web <url>` | Fetch and summarize a URL |
| `/web <url> <question>` | Answer a question about a URL |
| `/github pr <owner/repo> [number]` | Security review a PR |
| `/github commit <owner/repo> <sha>` | Security review a commit |
| `/events` | Show today's Google Calendar events |
| `/events add <title> <date> <time>` | Create a calendar event |
| `/tasks` | List Google Tasks |
| `/tasklists` | List your task lists by name |
| `/task add <title>` | Add a task to Google Tasks |
| `/shell <command>` | Execute a whitelisted OS command (2FA-gated) |
| `/record <url>` | Start a browser recording session (web-automator) |
| `/record stop <session_id>` | Stop a recording session |
| `/automations` | List automation resources |
| `/setup 2fa` | Set up two-factor authentication |
| `/confirm 2fa <code>` | Confirm 2FA with an authenticator code |
| `/google auth` | Start the Google OAuth flow via Telegram |
| `/help` | Show all commands |
| any text | Free-text conversation |
| Page | Description |
|---|---|
| Dashboard | System status, recent activity |
| Chat | Interactive conversation with semantic memory search; pick any provider (cloud or local) |
| Skills | Core and dynamic skills: upload, reload, unload |
| Scheduler | Cron jobs and scheduled workflows |
| LLM | Provider status and usage |
| Workflows | Create/edit/run multi-step pipelines, view run history |
| Resources | Shared resources for workflows (RSS feeds, websites, APIs, automations) |
| Memory | Browse and search stored knowledge |
| Database | Schema browser and backup download |
| Config | Runtime configuration editor |
| Logs | Real-time log viewer with severity filtering |
```
lib/
  alex_claw/
    config/           # Runtime config (DB + ETS + PubSub broadcast)
    knowledge/        # Knowledge base entry schema (pgvector)
    llm/              # LLM router, usage tracker, provider schema
    memory/           # Memory entry schema
    skills/           # Core skill modules, SkillAPI, DynamicSkill schema, CircuitBreaker
    workflows/        # Executor, scheduler sync, SkillRegistry (GenServer+ETS), step/run schemas
    dispatcher.ex     # Deterministic message routing
    gateway.ex        # Telegram bot
    identity.ex       # Agent persona and system prompts
    llm.ex            # Multi-model router
    memory.ex         # Knowledge store
    rate_limiter.ex   # ETS-based login rate limiting
    scheduler.ex      # Quantum cron scheduler
  alex_claw_web/
    controllers/      # Auth, database backup, GitHub webhook
    live/admin_live/  # LiveView admin pages (12 pages including Chat)
    plugs/            # RequireAuth, RateLimit, RawBodyReader
priv/repo/
  migrations/         # All DB migrations
  seeds/              # Example workflow seeds
```
- **Semantic search requires an embedding provider.** Vector search works when at least one embedding-capable provider is configured (Gemini, Ollama, or OpenAI-compatible). Without one, memory falls back to keyword search. Configure via `embedding.provider` and `embedding.model` in the admin UI.
- **Single-user only.** There is no multi-user access control. The authentication model assumes one trusted operator.
- **Sensitive config encrypted at rest.** API keys and tokens are AES-256-GCM encrypted in PostgreSQL using `SECRET_KEY_BASE` as key material. Changing `SECRET_KEY_BASE` requires re-entering all API keys. See SECURITY.md for details.
- **Web Automator is experimental.** The browser automation sidecar (`web_automation` skill) is under heavy development. APIs, config format, and recording workflow may change without notice.
AST-level analysis reports generated by Giulia: heatmap zones, change risk, blast radius, coupling analysis, dead code, and architecture health.
| Version | Report | Key Findings |
|---|---|---|
| v0.3.0 | AlexClaw_REPORT_v0.3.0_2026031912.md | 0 red zones, 0 cycles, 100% spec coverage, 3 P2 recommendations |
See SECURITY.md for the full security policy and deployment hardening guidance.
See CONTRIBUTING.md for contribution guidelines and CLA.md for the Contributor License Agreement.
Copyright 2026 Alessio Battistutta. Licensed under the Apache License, Version 2.0. See LICENSE for details.