Proxmox VE Management Panel - Infrastructure at your fingertips

Website: depl0y.mspreboot.com

If Depl0y saves you time, give it a star - it helps others find the project!

Depl0y is a free, open-source web control panel for Proxmox VE. Manage VMs, clusters, nodes, storage, backups, networking, firewall, HA, and physical hardware from a single dark-mode interface.

A guided tour at depl0y.mspreboot.com covers more screens than fit here.
Multi-datacenter view with node health, cluster join/unjoin, server-model chips, BMC poll selector, and a ⚡ Power dropdown per host
Filter, search, bulk-select, and per-row actions - start/stop/migrate/clone/snapshot/delete
Per-VM lifecycle, config (CPU / RAM / disks / NICs), snapshots, firewall, console, and live charts
Guided VM creation with cloud-init credentials, storage selection, and ISO/cloud-image picker
Live OpenStreetMap showing all datacenters as pins - green online, red offline
Cluster-wide quorum, node membership, HA resources, and replication jobs
HA group + resource CRUD with quorum/state monitoring
PBS datastore browsing, backup schedule CRUD, and manual triggers
Storage pools across nodes with content browsing
Upload ISOs, configure cloud-image templates, manage on disk
Hardware health, power, temperature, and wattage for all BMC-equipped servers - Dell iDRAC + HPE iLO via Redfish
Per-server CPU / DIMM / storage / firmware / network / SEL inventory + power actions
One-click LLM deployment - Ollama, llama.cpp, vLLM, LocalAI with optional GPU passthrough
Every user action and system change recorded with filtering
Role-based access (Admin, Operator, Viewer) with 2FA/TOTP
Cloud image setup, cluster SSH, HA enablement, integrations, and one-click system updates
Browse and test the depl0y REST API directly from the panel
- VMs - start/stop/reboot/suspend/resume, config editing (CPU, RAM, disks, NICs), snapshots, clone, migrate, firewall, VNC console, QEMU serial terminal
- LXC Containers - lifecycle, config editing, snapshots, terminal (xterm.js)
- Nodes - RRD metrics charts, VM + LXC list, storage browser, network config, task log, node terminal, OS-level shutdown / reboot via `pvesh`
- Cluster - status, node list, HA groups and resources, quorum monitoring
- Cluster Join / Unjoin - join any node to a cluster (fingerprint auto-fetched), remove nodes from a cluster
- Replication - job CRUD, force-sync, log viewer
- Node Evacuation - migrate all VMs off a node to other online nodes
- Firewall - cluster, node, and VM-level rules; security groups; IPsets
- Backup - schedule CRUD, manual trigger, PBS datastore browsing
- Storage - pool management, content browsing, ISO and cloud image management
- Networking - bridge/bond/VLAN config with apply-pending support
- Offline-tolerant - host/node endpoints (`status`, `vms`, `lxc`, `tasks`) return clean empty payloads when a node is powered off, so the dashboard doesn't blow up
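The offline-tolerant pattern amounts to catching the connection failure and returning an empty payload instead of an error. A minimal sketch with hypothetical names (this is not Depl0y's actual code):

```python
# Sketch of the offline-tolerant endpoint pattern: if the node is
# unreachable, return an empty list instead of raising, so the
# dashboard renders gracefully with no data.

def fetch_vms(node_client):
    """Return the node's VM list, or [] if the node is powered off."""
    try:
        return node_client.list_vms()
    except ConnectionError:
        # Node offline: clean empty payload rather than a 500 error.
        return []

class OfflineNode:
    """Stand-in for a powered-off node."""
    def list_vms(self):
        raise ConnectionError("node is powered off")

class OnlineNode:
    """Stand-in for a reachable node."""
    def list_vms(self):
        return [{"vmid": 100, "name": "web01", "status": "running"}]
```

The same wrapper applies to the `status`, `lxc`, and `tasks` endpoints, with an empty dict or list as appropriate.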
- Widget grid - drag-and-drop reordering with masonry (asymmetric column) layout
- 10+ widgets - CPU, RAM, storage, network traffic, disk I/O, VM status, alerts, activity feed, quick actions
- Clickable tiles - every widget links to its management view
- Per-widget refresh - each widget auto-refreshes independently
- Multi-host - add and manage multiple Proxmox VE hosts with API token or password auth
- Live Map - OpenStreetMap/Leaflet with datacenter pins (blue = online, red = offline)
- Federated Summary - aggregate VM/node/storage stats across all registered hosts
- Federated Dashboard - cross-datacenter VM/node overview in one view
- Redfish Dashboard - unified health, power, temperature, and wattage for all BMC-equipped servers
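The federated summary boils down to merging per-host counters into one cluster-wide total. A hypothetical sketch (the keys and shapes here are illustrative, not Depl0y's schema):

```python
# Merge per-host stats dicts into a single aggregate, tolerating
# hosts that are missing a counter (e.g. a node with no storage data).

def federated_summary(hosts):
    """Sum VM/node/storage counters across all registered hosts."""
    total = {"vms": 0, "nodes": 0, "storage_bytes": 0}
    for host in hosts:
        for key in total:
            total[key] += host.get(key, 0)
    return total
```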
- Two-section Power menu - a ⚡ Power dropdown on every host card, node card, and the iDRAC management page. The top section runs through the Proxmox OS (graceful shutdown / reboot via `pvesh`); the bottom section runs through iDRAC/BMC (Power On / Force Off / Graceful Off / Reset / Power Cycle / PXE) - the only path that can power on a fully-off machine
- Server model auto-detection - Redfish `Model` → `DellSystemPID` → `DellSystemID` → PCI subsystem lookup → manager generation tag, in that order. Resolves PowerEdge model names directly from the BMC even on 13G boxes where the standard `Model` field is blank
- Manual model override - pencil-edit on the host/node model chip; persists to `system_settings` and applies to the live cache instantly. Required for older (iDRAC 7) BMCs that don't expose model metadata at all
- Configurable poll interval - 1 / 2 / 5 / 10 minutes, persisted globally; the backend live-reschedules the job and queues an immediate poll on change
- Continuous post-poll refresh - Refresh All / Poll Now drives a 1.5 s cache re-fetch loop for ~20 s, so per-server model/health/power-state updates appear as soon as each individual BMC responds (rather than waiting for the slowest)
- Hardware Inventory - CPUs, DIMMs, storage controllers & drives, firmware, NICs, SEL
- Daily firmware-update check - Dell catalog XML parsed daily; BIOS / iDRAC available-version chips with direct support links
- Multi-vendor - Dell iDRAC (iDRAC 7 / 8 / 9 / 14G+) and HPE iLO via Redfish v1; SSH-based hardware reporting for hosts without Redfish
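The model-detection chain can be pictured as a simple fallback loop: try each field in order and return the first usable value. The field names follow the list above; the lookup tables below are invented examples, not real mappings:

```python
# Hypothetical sketch of the detection order:
# Model -> DellSystemPID -> DellSystemID -> (further fallbacks).
# The two mapping tables are illustrative stand-ins.

PID_TO_MODEL = {"PER730": "PowerEdge R730"}        # example mapping only
SYSTEM_ID_TO_MODEL = {"0x0627": "PowerEdge R630"}  # example mapping only

def resolve_model(redfish_data):
    """Return a server model name, or None if nothing matched."""
    if redfish_data.get("Model"):                  # standard Redfish field
        return redfish_data["Model"]
    pid = redfish_data.get("DellSystemPID")
    if pid in PID_TO_MODEL:                        # Dell OEM product ID
        return PID_TO_MODEL[pid]
    sys_id = redfish_data.get("DellSystemID")
    if sys_id in SYSTEM_ID_TO_MODEL:               # Dell OEM system ID
        return SYSTEM_ID_TO_MODEL[sys_id]
    return redfish_data.get("GenerationTag")       # last-resort fallback
```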
- Simple + Advanced modes - 4 questions to deploy, or full control over engine/model/GPU/OS/storage
- 4 Engines - Ollama, llama.cpp (GGUF), vLLM (OpenAI-compatible), LocalAI (Docker)
- 15+ Models - Llama 3.x, Mistral, Phi-4, Gemma, Qwen, DeepSeek, Code Llama, and more
- GPU Passthrough - NVIDIA (CUDA) and AMD (ROCm) with automatic driver install
- Add-ons - Open WebUI, ComfyUI (Stable Diffusion), AI auto-tuning, RAG, conversation logging
- File upload - OVA, OVF, VMDK, VHD, VHDX, QCOW2, RAW via drag & drop
- VMware direct - connect to ESXi or vCenter, browse and pull VMs over the network
- Auto-parse - OVF descriptors parsed to extract name, CPU, RAM, disk, and OS type
- Disk conversion - VMDK/VHD/VHDX → qcow2 via qemu-img, automatically
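As a sketch of what that conversion step involves (the helper below is illustrative; Depl0y's actual invocation may differ): qemu-img takes the input format with `-f` and the output format with `-O`, and VHD images use qemu-img's `vpc` format name:

```python
import os

# Map source-image extensions to qemu-img input-format names.
FORMATS = {".vmdk": "vmdk", ".vhd": "vpc", ".vhdx": "vhdx",
           ".qcow2": "qcow2", ".raw": "raw"}

def convert_cmd(src, dst):
    """Build the qemu-img argv to convert src into a qcow2 image."""
    ext = os.path.splitext(src)[1].lower()
    fmt = FORMATS[ext]
    # -p shows progress; -O selects the output format
    return ["qemu-img", "convert", "-p", "-f", fmt, "-O", "qcow2", src, dst]
```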
- 30-second deployments - Ubuntu, Debian, Rocky, AlmaLinux (after one-time template setup)
- Cloud-Init - hostname, user, SSH key, static IP, DNS, package injection
- Role-based - Admin, Operator, Viewer with route-level enforcement
- 2FA / TOTP - authenticator-app support with QR-code setup
- Encrypted storage - all passwords and API tokens encrypted at rest (Fernet)
- Audit log - every user action and system change recorded
- Rate limiting - 100 req/min globally, with security headers
- One-click updates - check for and install updates on any managed Linux VM via SSH
- Real-time streaming - live terminal output as apt/dnf runs
- Auto-scheduled checks - configurable interval (6h-7d) for automatic update checks
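An interval like "6h" or "7d" can be turned into seconds along these lines; the string format and helper are an assumption for illustration, not Depl0y's actual code:

```python
# Parse a check-interval spec such as "6h" or "7d" into seconds.
UNITS = {"m": 60, "h": 3600, "d": 86400}

def interval_seconds(spec):
    """'6h' -> 21600, '7d' -> 604800."""
    value, unit = int(spec[:-1]), spec[-1]
    return value * UNITS[unit]
```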
```
curl -fsSL http://deploy.agit8or.net/downloads/install.sh | sudo bash
```

Installs all dependencies, configures nginx, and creates a systemd service - ready in ~30 seconds.
- Open `http://your-server-ip` - default credentials: `admin` / `admin` (change immediately)
- Enable 2FA - Settings → User Profile → Enable TOTP
- Add a Proxmox host - Proxmox Hosts → Add Datacenter → test connection
- Deploy or import a VM
```
cd /opt/depl0y && git pull origin main && sudo bash deploy.sh
```

Or: Settings → System Updates → Check for Updates → Install
```
SECRET_KEY=your_jwt_secret_key_minimum_32_chars
ENCRYPTION_KEY=your_fernet_encryption_key
DATABASE_URL=sqlite:////var/lib/depl0y/db/depl0y.db
DEBUG=false
LOG_LEVEL=INFO
```

| Path | Contents |
|---|---|
| `/var/lib/depl0y/db/depl0y.db` | SQLite database |
| `/var/lib/depl0y/isos` | ISO images |
| `/var/lib/depl0y/cloud-images` | Cloud image templates |
| `/var/lib/depl0y/ssh_keys` | SSH key pairs |
| `/var/log/depl0y/` | Application logs |
| `/tmp/depl0y-imports/` | Temporary VM import working directory |
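Suitable values for `SECRET_KEY` and `ENCRYPTION_KEY` can be generated with the Python standard library; a Fernet key is 32 random bytes, url-safe base64 encoded (44 characters). A minimal sketch:

```python
import base64, os, secrets

# 64 hex chars, comfortably over the 32-char minimum for SECRET_KEY
secret_key = secrets.token_hex(32)

# Fernet-compatible key: 32 random bytes, url-safe base64 encoded
encryption_key = base64.urlsafe_b64encode(os.urandom(32)).decode()

print(f"SECRET_KEY={secret_key}")
print(f"ENCRYPTION_KEY={encryption_key}")
```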
```
┌──────────────────────────────────────┐
│         Frontend (Vue.js 3)          │
│   SPA · Axios · Chart.js · Leaflet   │
│   Dark Mode · Widget Grid · xterm    │
└───────────────┬──────────────────────┘
                │ HTTP REST (/api/v1)
┌───────────────▼──────────────────────┐
│      Backend (FastAPI + Python)      │
│    Auth · VMs · Cluster · Import     │
│  LLM · HA · Backup · iDRAC · Alerts  │
└──────┬───────────────┬───────────────┘
       │               │
┌──────▼──────┐  ┌─────▼───────────────┐
│  SQLite DB  │  │   Proxmox VE API    │
│ (users/VMs/ │  │ nodes/qemu/cluster  │
│  settings)  │  │  + Redfish / SSH    │
└─────────────┘  └─────────────────────┘
```
Key dependencies: proxmoxer, pyVmomi, paramiko, SQLAlchemy, Pydantic, APScheduler, python-jose, Leaflet
- Swagger UI: `http://your-server/api/v1/docs`
- ReDoc: `http://your-server/api/v1/redoc`
- In-App API Explorer: Sidebar → API Explorer
```
# Backend
cd backend && python3 -m venv venv && source venv/bin/activate
pip install -r requirements.txt
uvicorn app.main:app --reload

# Frontend
cd frontend && npm install && npm run dev

# Production build + deploy
sudo bash /opt/depl0y/deploy.sh
```

Cannot connect to Proxmox host
- Verify credentials and network connectivity
- Disable `verify_ssl` for self-signed certificates
- Ensure the Proxmox API port (8006) is reachable
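A quick way to check that last point from the standard library (the host value is a placeholder for your Proxmox node):

```python
import socket

def port_open(host, port=8006, timeout=3):
    """True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("pve01.example.com")  # placeholder hostname
```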
VM import fails
- Ensure `local` storage exists with enough space on the target node
- SSH must be configured between the Depl0y server and the Proxmox host (Settings → SSH Setup)
Cluster join fails
- Ensure the root@pam password is correct for the cluster master node
- The fingerprint is auto-fetched - if that fails, fetch it manually with `pvecm status` on the master
Backend logs
```
sudo journalctl -u depl0y-backend -f
```

For more help: GitHub Issues
Pull requests welcome. For major changes, open an issue first to discuss what you'd like to change.
MIT - see LICENSE for details.
