Releases: aristoteleo/PantheonOS
v0.5.0
Pantheon v0.5.0
Highlights
Workspace Isolation Mode — Each chat session can now optionally use its own private workspace directory, isolating files, code execution, and shell commands from other sessions.
New Features
- Two-mode workspace system (`project`/`isolated`): the default is `project` (shared); users can toggle to `isolated` mode, where each chat gets a private directory at `.pantheon/workspaces/{session_id}/` (#39, @1-CellBio)
- Runtime workspace mode switching via the new `set_chat_workspace_mode()` tool — switch between project and isolated mode without creating a new chat
- All ToolSets are workspace-aware: `file_manager`, `file_transfer`, `shell`, `notebook`, and `python_interpreter` all respect the active workspace mode
- Store CLI: publish, install, search, and manage agent/team/skill packages (`pantheon store ...`)
- Store seed system: batch import packages from factory templates and external repos (LabClaw, OpenClaw Medical, Claude Scientific, ClawBio, OmicClaw)
- `PANTHEON_HUB_URL` env var support for the Store API client
- Evolution system improvements: better evaluator format, real-time visualization, improved error reporting
- CLI `-i` flag for initial input
- Activity-aware idle cleanup via `_ping` responses
- Custom endpoints: support for `custom_anthropic` and `custom_openai` providers
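The two workspace modes above can be sketched as a small path-resolution helper. This is a hypothetical illustration — the function and argument names are not from the Pantheon codebase; only the `.pantheon/workspaces/{session_id}/` layout comes from the notes:

```python
from pathlib import Path

def resolve_workspace(project_root: Path, mode: str, session_id: str) -> Path:
    """Return the working directory for a chat session.

    "project" mode shares the project root across sessions; "isolated"
    mode gives each session a private directory under
    .pantheon/workspaces/{session_id}/.
    """
    if mode == "project":
        return project_root
    if mode == "isolated":
        ws = project_root / ".pantheon" / "workspaces" / session_id
        ws.mkdir(parents=True, exist_ok=True)  # create on first use
        return ws
    raise ValueError(f"unknown workspace mode: {mode!r}")
```

Because all ToolSets resolve their paths through the active mode, switching a chat to isolated mode redirects its file, shell, and notebook activity into that private directory.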
Bug Fixes
- Fix `/keys` command `ProviderMenuEntry` unpacking error
- Fix `move_file()` crash by creating parent directories before moving
- Fix `read_file` crash on empty files (`start_line` out of range for 0-line files)
- Fix `ws://127.0.0.1` → `ws://localhost` for Windows IPv6 compatibility
- Fix LaTeX compilation to resolve relative figure paths
- Fix shell env exceeding `max_size`
- Fix endpoint startup timeout (3s → 30s)
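The `move_file()` fix follows a common pattern: ensure the destination's parent directories exist before moving. A minimal sketch, assuming a `shutil`-based implementation (the helper below is illustrative, not Pantheon's actual code):

```python
import shutil
from pathlib import Path

def move_file(src: str, dst: str) -> None:
    # Create the destination's parent directories first, so moving a
    # file into a not-yet-existing folder no longer crashes.
    Path(dst).parent.mkdir(parents=True, exist_ok=True)
    shutil.move(src, dst)
```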
Desktop App (PyInstaller)
- Switch to `onedir` mode with `exclude_binaries=True` for correct macOS bundling
- Fix nats-server binary missing from the bundle
- Fix Windows DLL corruption and read-only install location issues
- Handle fakeredis/lupa compatibility
Contributors
- @Nanguage
- @1-CellBio — session workspace isolation (#39)
- @zqbake
v0.4.10
v0.4.10: Pantheon Store, Custom Endpoints, Docker & Bug Fixes
Pantheon Store
- Store CLI (
pantheon store) for publishing, installing, searching, and managing skill/agent/team packages - Store seed system for batch-importing packages from external repos (LabClaw, OmicClaw, ClawBio, Claude Scientific)
- Install state tracking with local manifest (
~/.pantheon/store_installs.json) - RPC methods (
install_store_package,get_installed_store_packages) for frontend Store dialog integration - Skill file bundling with SKILL.md directory format and source attribution
Custom Endpoints
- Flexible `custom_models` configuration replacing the old unified `LLM_API_BASE`/`LLM_API_KEY` proxy
- Support for `CUSTOM_ANTHROPIC_*` and `CUSTOM_OPENAI_*` environment variables
- Interactive `/keys` command for custom endpoint management
- Custom model aliases take highest priority in provider detection
Docker
- Dual-mode Dockerfile with GitHub Actions CI/CD workflow
- Docker entrypoint supporting both CLI and UI modes
CLI
- `-i` flag for initial input: `pantheon cli -i "analyze my data"` starts the REPL with an immediate command
Infrastructure
- Activity-aware idle cleanup — `_ping` responses now report active threads/tasks so the Hub can skip cleanup for busy pods
- Shell env fix for oversized environment variables in the package runtime
- Setup wizard `SKIP_SETUP_WIZARD` env var support
Bug Fixes
- LaTeX PDF compilation — compile in the source directory so relative figure paths (`../figures.png`) resolve correctly
- Speech-to-text — handle base64-encoded audio over JSON transport; add transcription timeout and spinner UI
- Context menu z-index — teleport the WorkspacePanel context menu to body so it renders above chat cards
- Remove unused `FilesystemTab.vue` legacy component
- Skip Hub API call in local mode to avoid connection errors
- Fix test failures and Docker Hub repo name
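The LaTeX fix above boils down to running the compiler with the working directory set to the `.tex` file's own folder, so relative `\includegraphics` paths resolve. A hedged sketch — the function name and the `command` parameter are hypothetical, and Pantheon's actual invocation may differ:

```python
import subprocess
from pathlib import Path

def compile_latex(tex_path: str, command: str = "pdflatex") -> subprocess.CompletedProcess:
    tex = Path(tex_path).resolve()
    # Run the compiler from the .tex file's own directory so relative
    # figure paths like ../figures.png resolve against the source tree.
    return subprocess.run(
        [command, "-interaction=nonstopmode", tex.name],
        cwd=tex.parent,
        capture_output=True,
        text=True,
    )
```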
v0.4.9
v0.4.9: CLI feedback flow, think tool display, structured questions
CLI Improvements
- Unified Review Dialog with file tabs, structured questions, and text input
- Provide Feedback flow for plan rejection — feedback is sent directly to the agent
- Think tool displays thought preview with 💭 icon instead of just showing "think"
- Notification messages rendered as Markdown (Rich library)
- Backend version exposed in the `_ping` RPC response
Task Workflow
- Structured questions in `notify_user` (single/multiple choice, text input)
- Questions auto-interrupt to wait for user answers
Agent & Team
- Improved team delegation prompt
- FastMCP 3.x namespace API compatibility
- New agentic general prompt template
Bug Fixes
- Show "✓ Approved" instead of confusing "Submitted 0 answer(s)" for file-only reviews
v0.4.8
What's New
Background Task Management UI
- Add background tasks section in Timeline sidebar with collapsible list/detail views
- Show running/completed task counts with animated indicators
- Support cancel and remove actions on tasks
- Auto-refresh via NATS streaming events
- Click-to-navigate from tool calls to background task details
- Compact notification line in timeline when tasks complete
- Show spinning indicator on chat sessions with running background tasks
CLI Improvements
- Show compact colored notification in REPL when background tasks complete
- Hide bg task notification messages from chat history replay
Bug Fixes
- Fix NATS payload overflow when loading chats with large tool responses (drop oversized raw_content >50KB)
- Fix FastMCP mount() compatibility for both 2.x (prefix) and 3.x (namespace)
- Wrap bg task auto-chat notification in tags so UI can filter it from chat display
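The NATS overflow fix above amounts to dropping oversized payload fields before publishing. A sketch of the idea — only the `raw_content` field name and the ~50KB threshold come from the notes; everything else is illustrative:

```python
MAX_RAW_CONTENT = 50 * 1024  # ~50 KB threshold from the release notes

def shrink_tool_response(msg: dict) -> dict:
    """Return a copy of a tool-response message safe to publish over NATS.

    Oversized raw_content is dropped (and flagged) rather than sent,
    avoiding broker payload-size overflows when replaying large chats.
    """
    raw = msg.get("raw_content")
    if isinstance(raw, str) and len(raw.encode("utf-8")) > MAX_RAW_CONTENT:
        return {**msg, "raw_content": None, "raw_content_dropped": True}
    return msg
```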
v0.4.7
What's New
Model Updates
- Add GPT-5.4 and GPT-5.4-pro as default OpenAI models
  - high: gpt-5.4-pro → gpt-5.4 → gpt-5.2-pro → gpt-5.2
  - normal: gpt-5.4 → gpt-5.2-codex → gpt-5.2 → gpt-5
  - low: gpt-5.3-chat-latest → gpt-5-mini → gpt-5-nano → gpt-4.1-mini
Misc
- Update copyright year to 2025-2026
v0.4.6
What's New
Kimi Coding Support
- Add User-Agent header for the `kimi-for-coding` model
- Add Kimi (Moonshot) to the documentation
Model Persistence Fix
- Fix `/model` command not persisting across restarts
- Case-insensitive agent name matching for template write-back
- Fallback `source_path` lookup from the template manager
Smart Template Sync
- Replace force-overwrite with hash-based sync that preserves user modifications
- User changes (e.g. a model set via `/model`) are no longer overwritten on startup
API Key Priority
- When `LLM_API_BASE` is set (unified proxy mode), `LLM_API_KEY` now takes priority over provider-specific keys
Key Management
- `pantheon setup`: prefix with `d` to delete keys (e.g. `d0`, `d1`, `d3`)
- `/keys rm <number|name>`: remove provider keys or a custom endpoint from the REPL
Deprecation Fix
- Use `namespace` instead of the deprecated `prefix` in MCP mount
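The compatibility approach (also noted in v0.4.8's FastMCP `mount()` fix) can be sketched as a try/except over the keyword name: attempt the 3.x `namespace=` keyword first and fall back to the 2.x `prefix=` keyword on `TypeError`. A hypothetical helper, not Pantheon's actual code:

```python
def mount_compat(app, server, name: str) -> None:
    """Mount `server` on `app` under `name` across FastMCP versions.

    FastMCP 3.x renamed the mount keyword from `prefix` to `namespace`;
    trying the new name first and catching TypeError keeps one code
    path working on both major versions.
    """
    try:
        app.mount(server, namespace=name)   # FastMCP 3.x style
    except TypeError:
        app.mount(server, prefix=name)      # FastMCP 2.x fallback
```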
v0.4.5
What's New
Universal API Proxy Support
- `LLM_API_BASE`/`LLM_API_KEY`: new universal environment variables to route all LLM calls through a third-party proxy or a custom OpenAI-compatible endpoint
- When `LLM_API_BASE` is active, `LLM_API_KEY` is automatically preferred over provider-specific keys
- Provider-specific variables (e.g. `OPENAI_API_BASE`) still take priority when set
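The precedence rules above can be sketched as a small resolver. The function is hypothetical; only the environment variable names and their priority order come from the notes:

```python
def resolve_openai_endpoint(env: dict) -> tuple:
    """Pick (base_url, api_key) for an OpenAI-style call.

    Priority per the release notes:
    1. Provider-specific variables (OPENAI_API_BASE) win when set.
    2. Otherwise the universal LLM_API_BASE proxy is used, with
       LLM_API_KEY preferred over the provider key.
    3. With neither base set, fall back to the default endpoint
       and the provider key.
    """
    if env.get("OPENAI_API_BASE"):
        return env["OPENAI_API_BASE"], env.get("OPENAI_API_KEY")
    if env.get("LLM_API_BASE"):
        key = env.get("LLM_API_KEY") or env.get("OPENAI_API_KEY")
        return env["LLM_API_BASE"], key
    return None, env.get("OPENAI_API_KEY")
```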
Setup Wizard Enhancements
- `pantheon setup`: new CLI command to launch the setup wizard at any time
- Custom API Endpoint [0]: new option in the setup wizard to configure `LLM_API_BASE` and `LLM_API_KEY` interactively
- `/keys 0 <base_url> <api_key>`: configure a custom endpoint directly from the REPL
Other Changes
- Updated the default fallback model to `openai/gpt-5.2`
- Documentation updated with custom API endpoint instructions