This repo is a reproducible “driver kit” for installing and bootstrapping Talos on Netcup Root Servers using `talosctl`, run from a Docker container so the tooling stays consistent.
It is especially useful when you are repeatedly reinstalling, wiping, and testing different Talos install media and want a predictable CLI environment.
- Your server must be booted into Talos maintenance mode (usually by booting a Talos ISO/DVD in the provider console).
- From the machine where you run `talosctl`, you must be able to reach the node's Talos API:
  - The Talos API listens on port 50000 (and clusters often need 50000/50001 reachable between nodes).
- You must know:
  - the control plane public IP
  - the worker public IPs (if any)
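The port-50000 reachability requirement can be checked with a quick TCP probe before involving `talosctl` at all. A minimal sketch using bash's `/dev/tcp` (the `probe_talos_api` helper name is ours, not a talosctl command):

```sh
# Quick TCP probe of the Talos API port (50000) using bash's /dev/tcp.
# probe_talos_api is a local helper, not part of talosctl.
probe_talos_api() {
  local node="$1"
  if (exec 3<>"/dev/tcp/${node}/50000") 2>/dev/null; then
    echo "Talos API reachable on ${node}:50000"
  else
    echo "Talos API NOT reachable on ${node}:50000"
  fi
}

# Example:
# probe_talos_api "203.0.113.10"
```

Run the probe from the same machine (or container) you will run `talosctl` from, since that is the network path that matters.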
`talosctl gen config` creates a `talosconfig` file in the current directory, but `talosctl` often defaults to `~/.talos/config`.
To avoid accidentally using the wrong client config, this README always uses either:

- `--talosconfig ./talosconfig`, or
- `export TALOSCONFIG=/work/talosconfig` (inside the container)
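For the second option, the variable is exported once per shell session inside the container (a sketch; `/work` is where this repo is mounted):

```sh
# Pin the talosctl client config for this shell session
# (inside the container, the repo root is mounted at /work).
export TALOSCONFIG=/work/talosconfig

# With TALOSCONFIG set, the --talosconfig flag can be omitted on each call;
# an explicit --talosconfig flag still takes precedence if both are given.
echo "using client config: ${TALOSCONFIG}"
```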
From this repo root:
```sh
docker build -t talosctl .
```

```sh
docker run --rm -it \
  --network host \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":/work \
  -w /work \
  talosctl
```

Inside the container:
```sh
export CONTROL_PLANE_IP="168.63.128.45"

# Bash array of worker IPs (optional)
export WORKER_IPS=( \
  "168.63.188.85" \
  "168.63.188.86" \
)
```

Check that the node answers on the Talos API:

```sh
talosctl --talosconfig ./talosconfig version --insecure --nodes "$CONTROL_PLANE_IP"
```

If this hangs or times out, it is almost always a networking/firewall problem with port 50000 reachability.
Before generating or applying configs, verify the disk name Talos sees:

```sh
talosctl --talosconfig ./talosconfig get disks --insecure --nodes "$CONTROL_PLANE_IP"
```

Pick the disk Talos reports (common examples):

- VirtIO: `/dev/vda`
- SATA/SCSI: `/dev/sda`
Your current machine configs in this repo are set to install to `/dev/vda`.
If your server shows `/dev/sda` instead, update the install disk accordingly.
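Instead of regenerating everything, the install disk in already-generated configs can be patched in place. A minimal `sed` sketch (the `set_install_disk` helper name is ours; adjust the pattern if your config formatting differs):

```sh
# Rewrite the install disk in a generated Talos machine config.
# set_install_disk is a local helper; it edits the file in place.
# Usage: set_install_disk FILE OLD_DISK NEW_DISK
set_install_disk() {
  local file="$1" old="$2" new="$3"
  sed -i "s#disk: ${old}\$#disk: ${new}#" "$file"
}

# Example (after talosctl gen config):
# set_install_disk controlplane.yaml /dev/vda /dev/sda
# set_install_disk worker.yaml /dev/vda /dev/sda
```

Note that `sed -i` with no argument is GNU syntax; on macOS/BSD sed use `sed -i ''` instead.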
```sh
export DISK="/dev/vda"
# or: export DISK="/dev/sda"
```

If you want to regenerate from scratch (this will create `controlplane.yaml`, `worker.yaml`, and `talosconfig`):

```sh
export CLUSTER_NAME="Lab"

talosctl gen config \
  "$CLUSTER_NAME" \
  "https://${CONTROL_PLANE_IP}:6443" \
  --install-disk "$DISK"
```

If you need to overwrite existing files:
```sh
talosctl gen config \
  "$CLUSTER_NAME" \
  "https://${CONTROL_PLANE_IP}:6443" \
  --install-disk "$DISK" \
  --force
```

Apply the control plane config:

```sh
talosctl --talosconfig ./talosconfig apply-config \
  --insecure \
  --nodes "$CONTROL_PLANE_IP" \
  --file controlplane.yaml
```

Apply the worker config to each worker (if any):

```sh
for ip in "${WORKER_IPS[@]}"; do
  talosctl --talosconfig ./talosconfig apply-config \
    --insecure \
    --nodes "$ip" \
    --file worker.yaml
done
```

After applying the configs, the nodes should reboot into the installed Talos system (remove/detach the ISO media if needed).
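Because the nodes reboot after `apply-config`, it helps to wait for the Talos API to come back before bootstrapping. A small sketch using bash's `/dev/tcp` (the `wait_for_port` helper name is ours):

```sh
# Poll a TCP port until it accepts connections or the attempts run out.
# wait_for_port is a local helper: wait_for_port HOST PORT [TRIES]
wait_for_port() {
  local host="$1" port="$2" tries="${3:-60}"
  local i
  for ((i = 1; i <= tries; i++)); do
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 2
  done
  return 1
}

# Example: wait for the control plane's Talos API after the reboot:
# wait_for_port "$CONTROL_PLANE_IP" 50000 && echo "node is back"
```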
Pick ONE control plane node (usually the first/only one) and bootstrap etcd:

```sh
talosctl --talosconfig ./talosconfig bootstrap \
  --nodes "$CONTROL_PLANE_IP" \
  --endpoints "$CONTROL_PLANE_IP"
```

Fetch the kubeconfig:

```sh
talosctl --talosconfig ./talosconfig kubeconfig \
  --nodes "$CONTROL_PLANE_IP" \
  --endpoints "$CONTROL_PLANE_IP" \
  ./kubeconfig
```

Then:
```sh
export KUBECONFIG="$PWD/kubeconfig"
kubectl get nodes
```

Troubleshooting:

- If `apply-config` says the install disk does not exist, re-run `get disks` and ensure `install.disk` matches what Talos reports.
- If `talosctl` seems to ignore the generated `talosconfig`, explicitly pass `--talosconfig ./talosconfig`.
- If you cannot reach the node in maintenance mode, check routing/firewalls and that port 50000 is reachable.