AI workflows for Obsidian: chat, inline writing help, agent tool use, MCP, semantic search, local skills, and optional local inference.
LLMSider turns Obsidian into an AI workspace instead of just a chat box. It supports day-to-day writing assistance, guided task execution, autonomous tool use, vault search, external MCP tools, and local-first options such as Ollama and WebLLM.
Core capabilities:
- Multi-provider connections with separate model management
- Normal chat, Agent mode, and Superpower for interactive, robust workflows
- Quick Chat, selection popup, and autocomplete inside the editor
- Rich context input: notes, folders, images, PDFs, Office files, YouTube transcripts, pasted text
- Unified Tool Manager: Seamlessly manages 100+ built-in tools and external MCP servers
- Semantic search, similar notes, and vault-aware context enhancement
- Local skills, prompt management, and memory controls
- Prompt Optimization: One-click enhancement of your message for better AI results
- Advanced Speed Reading: Real-time summaries, interactive mind maps (SVG/PNG/MD export), and custom analysis
- Local Inference: Optional browser-side inference with WebLLM (WebGPU)
- Automatic Updates: Built-in version checker and one-click plugin update
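Semantic search in the capability list above typically ranks vault notes by embedding similarity. As a minimal sketch (not the plugin's actual implementation, and `rankNotes` is an illustrative name), cosine similarity over note embeddings looks like this:

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank note paths against a query embedding, most similar first.
function rankNotes(query: number[], notes: Map<string, number[]>): string[] {
  return [...notes.entries()]
    .sort((x, y) => cosine(query, y[1]) - cosine(query, x[1]))
    .map(([path]) => path);
}

// Toy 3-dimensional embeddings; real models use hundreds of dimensions.
const notes = new Map([
  ["a.md", [1, 0, 0]],
  ["b.md", [0.9, 0.1, 0]],
  ["c.md", [0, 1, 0]],
]);
const ranked = rankNotes([1, 0, 0], notes);
```

The same similarity score also drives "similar notes" discovery: instead of a query embedding, the current note's embedding is compared against the rest of the vault.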
LLMSider uses a Connection + Model architecture. You can configure multiple connections and attach multiple models to each connection, then switch models from chat without rebuilding your setup.
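The Connection + Model split can be pictured as a small data model: one connection holds the provider endpoint and credentials, and any number of models attach to it. The names below (`Connection`, `switchModel`, the Ollama example values) are illustrative assumptions, not the plugin's actual API:

```typescript
interface ModelConfig {
  id: string;          // e.g. "llama3.1"
  displayName: string;
}

interface Connection {
  id: string;
  type: string;        // "openai", "anthropic", "ollama", ...
  baseUrl?: string;
  apiKey?: string;
  models: ModelConfig[];
}

interface ChatSession {
  connectionId: string;
  activeModelId: string;
}

// Switching models only swaps the active model id; the connection's
// endpoint and credentials are never rebuilt.
function switchModel(session: ChatSession, conn: Connection, modelId: string): ChatSession {
  if (conn.id !== session.connectionId) throw new Error("session bound to another connection");
  if (!conn.models.some(m => m.id === modelId)) throw new Error(`model not attached: ${modelId}`);
  return { ...session, activeModelId: modelId };
}

const ollama: Connection = {
  id: "local-ollama",
  type: "ollama",
  baseUrl: "http://localhost:11434",
  models: [
    { id: "llama3.1", displayName: "Llama 3.1" },
    { id: "qwen2.5", displayName: "Qwen 2.5" },
  ],
};

const session: ChatSession = { connectionId: "local-ollama", activeModelId: "llama3.1" };
const next = switchModel(session, ollama, "qwen2.5");
```

This is why switching models mid-chat is cheap: only the session record changes.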
Current connection types in the codebase include:
- OpenAI
- Anthropic
- Azure OpenAI
- GitHub Copilot
- Gemini
- Groq
- xAI
- OpenRouter
- OpenAI-compatible endpoints
- SiliconFlow
- Kimi
- Ollama
- Qwen
- Free Qwen
- Free DeepSeek
- Hugging Chat
- OpenCode
- WebLLM (Beta)
LLMSider currently supports three conversation modes:
- Normal Mode for direct chat and editing help
- Agent Mode for autonomous tool calling
- Superpower: a layer on Normal Mode that adds step-by-step guidance, interactive questions, and robust tool execution with error recovery. It works even when tools are disabled, acting as a guided dialogue.
The chat view also keeps per-session state for model choice, context, guided goal, and active skill.
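That per-session state can be pictured as a plain record. The field names below are assumptions for illustration, as is the behavior of `clearGuidedGoal`:

```typescript
type ChatMode = "normal" | "agent" | "superpower";

interface SessionState {
  mode: ChatMode;
  modelId: string;
  contextRefs: string[];   // note paths, URLs, etc. attached as context
  guidedGoal?: string;     // Superpower's step-by-step goal
  activeSkillId?: string;  // currently selected local skill
}

// Superpower is a layer on Normal Mode, so dropping the guided goal
// falls back to plain chat without losing context or model choice.
function clearGuidedGoal(s: SessionState): SessionState {
  return { ...s, mode: "normal", guidedGoal: undefined };
}

const s: SessionState = {
  mode: "superpower",
  modelId: "llama3.1",
  contextRefs: ["Projects/plan.md"],
  guidedGoal: "Draft a weekly review",
};
const reset = clearGuidedGoal(s);
```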
- Quick Chat: inline AI actions with Cmd+/
- Selection Popup: floating actions for selected text, including quick chat and add-to-context
- Autocomplete: Copilot-style inline suggestions for writing and code
- Diff preview and one-click apply for AI-generated edits
You can send context from:
- Markdown notes and folders
- Selected text
- Images
- PDF files
- Office documents
- YouTube URLs
- Pasted text and links
LLMSider can also auto-include the current note, recommend related files, and use vector search to improve retrieval from your vault.
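Those context sources could be modeled as a tagged union, with the current note auto-included as a default. The shapes and the `withCurrentNote` helper below are illustrative assumptions, not the plugin's actual types:

```typescript
type ContextItem =
  | { kind: "note"; path: string }
  | { kind: "folder"; path: string }
  | { kind: "selection"; text: string }
  | { kind: "image" | "pdf" | "office"; path: string }
  | { kind: "youtube"; url: string }
  | { kind: "pasted"; text: string };

// Auto-include the current note unless it is already referenced.
function withCurrentNote(items: ContextItem[], currentPath: string): ContextItem[] {
  const already = items.some(i => "path" in i && i.path === currentPath);
  return already ? items : [{ kind: "note", path: currentPath }, ...items];
}

const ctx = withCurrentNote(
  [{ kind: "youtube", url: "https://www.youtube.com/watch?v=abc123" }], // hypothetical URL
  "Daily/2024-05-01.md",
);
```

A tagged union keeps each source type explicit, so downstream code (e.g. a PDF extractor vs. a transcript fetcher) can branch on `kind` safely.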
- Unified Tool Manager: A central engine to manage and execute 100+ built-in tools and external MCP servers with consistent schemas and real-time permission controls.
- Dynamic Built-in Tools: Extensible tool system covering note operations, web search, content fetching, finance, news, and utilities.
- MCP server management with per-server and per-tool control
- Local skills directory, per-skill enable/disable, default skill selection, and skill market UI
- Semantic search and similar-note discovery
- Memory settings for conversation handling
- WebLLM support for local browser inference on compatible devices
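The unified handling of built-in and MCP tools with per-tool permissions might look like the registry sketch below; all names (`ToolManager`, `setPermission`) are illustrative, not the plugin's real API:

```typescript
interface ToolDef {
  name: string;
  source: "builtin" | "mcp";
  serverId?: string; // which MCP server, when source === "mcp"
  execute: (args: Record<string, unknown>) => Promise<unknown>;
}

class ToolManager {
  private tools = new Map<string, ToolDef>();
  private allowed = new Set<string>();

  register(tool: ToolDef): void {
    this.tools.set(tool.name, tool);
  }

  setPermission(name: string, enabled: boolean): void {
    if (enabled) this.allowed.add(name);
    else this.allowed.delete(name);
  }

  // One execution path for built-in and MCP tools alike: the permission
  // check happens before the call ever reaches the tool implementation.
  async call(name: string, args: Record<string, unknown>): Promise<unknown> {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    if (!this.allowed.has(name)) throw new Error(`tool not permitted: ${name}`);
    return tool.execute(args);
  }
}

const mgr = new ToolManager();
mgr.register({
  name: "read_note",
  source: "builtin",
  execute: async (args) => `contents of ${args.path}`,
});
mgr.setPermission("read_note", true);
```

Because permissions sit in the manager rather than in each tool, per-server and per-tool toggles in settings can flip without touching tool code.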
Community Plugins
Open Obsidian Settings -> Community Plugins -> Browse -> search for LLMSider -> Install -> Enable.
BRAT
- Install BRAT
- Click Add Beta Plugin
- Enter gnuhpc/obsidian-llmsider
- Enable the plugin
Manual
Download the latest release from GitHub Releases, extract it into YourVault/.obsidian/plugins/llmsider/, reload Obsidian, then enable LLMSider.
- Open Settings -> LLMSider
- Add a connection
- Add at least one chat model under that connection
- Open LLMSider: Open Chat
- Optionally enable MCP, vector search, skills, or WebLLM
- Documentation Index
- Connections & Models
- Chat Interface
- Conversation Modes
- Quick Chat
- Selection Popup
- Autocomplete
- Context Reference
- Built-in Tools
- MCP Integration
- Search Enhancement
- Speed Reading
- Settings Guide
Chinese documentation:
- WebLLM (Beta) requires browser/WebGPU support and compatible hardware.
- OpenCode requires the local OpenCode server or CLI setup.
- Built-in tools and MCP tools have separate permission controls in settings.