- Each node needs identical provisioning regardless of its resources.
- Provisioning requirements for a node are:
    - Operating a standard Debian 12 installation with `systemd` and `apt` available.
    - The cluster admin must exist and its name, `uid` and `gid` must be consistent across nodes.
    - The cluster admin must be a member of the `sudo` group and its SSH public key must exist in `~/.ssh/authorized_keys`.
    - The cluster admin SSH public key must be consistent across nodes.
    - `sshd` must be configured to allow TCP port forwarding.
    - `sshd` must be configured to force public key authentication for all users.
    - `/etc/hosts` must contain a mapping from `127.0.1.1` to the node's `HOST_NAME`.
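The requirements above can be sanity-checked on a candidate node with a short script. This is a sketch: `administrator` is an assumed cluster admin name, and the `uid` / `gid` lines are printed so they can be compared across nodes by hand.

```bash
#!/bin/bash
# sketch: sanity-check the Bakraid node requirements
# ("administrator" is an assumed cluster admin name, adjust as needed)
USER_NAME="administrator"

check_node() {
    # systemd, apt and sshd must be available
    for cmd in systemctl apt-get sshd; do
        command -v "$cmd" > /dev/null 2>&1 && echo "$cmd: present" || echo "$cmd: MISSING"
    done
    # the cluster admin must exist; print uid / gid so they can be compared across nodes
    if id "$USER_NAME" > /dev/null 2>&1; then
        echo "$USER_NAME uid/gid: $(id -u "$USER_NAME")/$(id -g "$USER_NAME")"
        # the admin must be a member of the sudo group
        id -nG "$USER_NAME" | grep -qw sudo \
            && echo "sudo membership: ok" || echo "sudo membership: MISSING"
    else
        echo "$USER_NAME: MISSING"
    fi
    # /etc/hosts must map 127.0.1.1 to the node's hostname
    grep -qs "^127.0.1.1[[:space:]]*$(hostname)" /etc/hosts \
        && echo "hosts mapping: ok" || echo "hosts mapping: MISSING"
}

check_node
```

Any `MISSING` line points at a requirement the provisioning script below must fix.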
- Here is an example of a Bakraid-compliant minimal provisioning script for a cloud-based Debian 12 node:

```bash
#!/bin/bash
# apart from HOST_NAME, the following values have to be consistent across all candidate nodes
HOST_NAME="kube-node-1"
USER_NAME="administrator"
USER_HOME="/home/administrator"
USER_PASS="password"
USER_SSHPUBKEY="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK0wmN/Cr3JXqmLW7u+g9pTh+wyqDHpSQEIQczXkVx9q administrator@bakraid.xyz"

# script has to run as root
if [[ "$(id -u)" != "0" ]]; then
    echo "Please run this script using sudo."
    exit 1
fi

########################
#    SETUP NETWORK     #
########################

# set hostname
hostnamectl hostname "$HOST_NAME"

# set hosts file
cat << EOF > /etc/hosts
# /etc/hosts
127.0.0.1       localhost
127.0.1.1       $HOST_NAME

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
EOF

########################
#   INSTALL PACKAGES   #
########################

# update and upgrade
apt-get update && apt-get upgrade -y

# policy management
# system monitoring
# git
# network utilities
# encryption
# archive management
# shell utilities
# editors
# man pages
# full NTP support
# miscellaneous
MINIMAL="policykit-1 \
neofetch procps htop iftop psmisc time iotop sysstat \
git \
iproute2 nmap mtr wget curl \
pgpdump \
zip unzip \
bash-completion tree \
vim vim-common vim-runtime \
man-db \
ntpsec \
jq"

# install packages ($MINIMAL is left unquoted on purpose to allow shell word splitting)
apt-get install --no-install-recommends -m -y -q=1 $MINIMAL

########################
#     CREATE USER      #
########################

# create user non-interactively (specify shell, empty GECOS field, disable login)
adduser --shell "/bin/bash" --disabled-login --gecos "" "$USER_NAME"

# setup user password
echo "$USER_NAME:$USER_PASS" | chpasswd

# add to sudo group
usermod -a -G sudo "$USER_NAME"

########################
#      SETUP SSH       #
########################

# create .ssh directory and add ssh public key
mkdir -p "$USER_HOME/.ssh"
echo "$USER_SSHPUBKEY" > "$USER_HOME/.ssh/authorized_keys"

# setup ownership
chown -R "$USER_NAME:$USER_NAME" "$USER_HOME/.ssh"

# setup permissions
chmod 700 "$USER_HOME/.ssh"
chmod 600 "$USER_HOME/.ssh/authorized_keys"

########################
#      SETUP SSHD      #
########################

# SSHD configuration overrides
cat << "EOF" > "/etc/ssh/sshd_config.d/sshd_overrides.conf"
# ssh daemon custom configuration (OpenSSH on Debian 12 only supports SSH protocol 2)

# check users home dir and files ownership / permissions at login time
StrictModes yes

# ======== NETWORK ========
# tcp directives (all interfaces, port 22, ipv4 only)
AddressFamily inet
ListenAddress 0.0.0.0:22
# regularly check that connection is still up
TCPKeepAlive yes
# only use ip addresses (no hostnames) in ~/.ssh/authorized_keys
UseDNS no

# ======== TRAFFIC ========
# allow TCP port forwarding (forward local ports over ssh)
AllowTcpForwarding yes
# TCP port forwarding is available to localhost only
GatewayPorts no
# disable X11 forwarding
X11Forwarding no
# disable device forwarding
PermitTunnel no
# disable ssh-agent forwarding (unused)
AllowAgentForwarding no

# ==== AUTHENTICATION =====
# use PAM authentication (settings are in /etc/pam.d/sshd)
UsePAM yes
# use ed25519 host private key for server authentication
HostKey /etc/ssh/ssh_host_ed25519_key
HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-ed25519
# algorithms for key exchange, ciphers, message authentication code
KexAlgorithms curve25519-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
MACs hmac-sha2-512-etm@openssh.com
# force public key authentication for everyone
AuthenticationMethods publickey
# enable public key authentication and restrict client key types to ed25519 public keys
PubkeyAuthentication yes
PubkeyAcceptedKeyTypes ssh-ed25519-cert-v01@openssh.com,ssh-ed25519
# disable host-based authentication and ignore related files:
# ~/.ssh/known_hosts will only be used to store server fingerprints
# /etc/ssh/known_hosts should not be used at all
HostbasedAuthentication no
IgnoreRhosts yes
IgnoreUserKnownHosts yes
HostbasedUsesNameFromPacketOnly no
# disable password authentication
PasswordAuthentication no
PermitEmptyPasswords no
# disable challenge-response authentication
KbdInteractiveAuthentication no

# ========= INFO ==========
# show last login
PrintLastLog yes

# ======= SESSION =========
# manage unauthenticated connections (see man page for details)
LoginGraceTime 30
MaxStartups 5:50:20
# disable session timeouts (change interval to timeout value in seconds if desired)
ClientAliveInterval 0
ClientAliveCountMax 0
# stick to default accepted client environment variables
AcceptEnv LANG LC_*

# ========= LOGS ==========
# log user's key fingerprint on login, use systemd journal auth facility
LogLevel VERBOSE
SyslogFacility AUTH

# ========= MISC ==========
# disable root login
PermitRootLogin no
EOF

# setup ownership
chown root:root /etc/ssh/sshd_config.d/sshd_overrides.conf

# restart ssh daemon
systemctl restart ssh

########################
#   CONFIGURE SHELL    #
########################

# remove nano as an editor alternative
update-alternatives --remove editor /bin/nano
# remove nano, period
apt-get purge -y nano
# set vim.basic as an editor alternative
[[ -x /usr/bin/vim.basic ]] && update-alternatives --set editor /usr/bin/vim.basic
# editor alternative should already be in auto mode, but anyway
update-alternatives --auto editor

# end message
echo "installation complete."
```

- The above can serve as a template and be adapted according to your own requirements.
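One way to use the template, assuming it is saved as `provision.sh` and the fresh nodes are reachable as root over SSH (the addresses below are hypothetical), is to build the push-and-run commands for each node:

```bash
# hypothetical addresses of freshly installed nodes
NODES="203.0.113.10 203.0.113.11"

# build and print the provisioning commands (pipe the output to "sh" to execute them)
CMDS=$(for ip in $NODES; do
    echo "scp provision.sh root@$ip:/tmp/provision.sh"
    echo "ssh root@$ip bash /tmp/provision.sh"
done)
echo "$CMDS"
```

Remember that `HOST_NAME` must be edited in the script before it runs on each node, since every node needs a distinct hostname while the other values stay consistent.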
- The approach is to isolate all internal cluster traffic from the public internet.
- This is already the case if nodes are physical machines connected to a LAN without being assigned public IP addresses.
- For cloud-based clusters, most cloud vendors support `VLAN`-based network isolation for virtual machines (VMs):
    - Such features provision VMs with a secondary NIC (usually `eth1`) connected to a `VLAN` subnet.
    - Bakraid will then configure cluster nodes to route control plane and pod network traffic through the `eth1` IPV4 address.
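As an illustration, on a Debian node using `ifupdown` the secondary NIC could be pinned to a static private address with a small interfaces fragment. This is a sketch: the interface name, address and file name are assumptions, and the fragment is written to the current directory for review rather than straight into `/etc`.

```bash
# assumed values: vendor-provided VLAN NIC and chosen private address
VLAN_IFACE="eth1"
VLAN_ADDR="192.168.1.10/24"

# review the fragment, then install it as /etc/network/interfaces.d/$VLAN_IFACE
cat << EOF > "./$VLAN_IFACE.interfaces"
auto $VLAN_IFACE
iface $VLAN_IFACE inet static
    address $VLAN_ADDR
EOF

cat "./$VLAN_IFACE.interfaces"
```

Once the interface is up, `ip -4 addr show eth1` should confirm the private address that Bakraid will route cluster traffic through.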
- Cluster admins must start SSH sessions to access the control plane:
    - A dedicated `NodePort` service exposes the `kubernetes-dashboard` reverse proxy on control plane nodes.
    - Inbound traffic to the `kubernetes-dashboard` port must be blocked at the firewall level (see below).
    - Admins can tunnel the port to their local machine using `sshd` TCP port forwarding.
    - Admins can inspect the locally proxied K8s API the same way.
    - Admins can run CLI commands on nodes through standard SSH sessions.
- Public access to workloads is possible through explicit, declarative use of `NodePort` services and external load balancers (see below).
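For example, the dashboard tunnel could be opened from an admin machine as follows (node address, `NodePort` and local port are hypothetical values):

```bash
# hypothetical values: node address, dashboard NodePort, local forwarding port
NODE="192.168.1.10"
DASHBOARD_PORT=30001
LOCAL_PORT=8443

# -N: no remote command, -L: forward the local port to the NodePort on the node
CMD="ssh -N -L $LOCAL_PORT:localhost:$DASHBOARD_PORT administrator@$NODE"
echo "$CMD"
```

While the tunnel is up, the dashboard is reachable at `https://localhost:8443` on the admin machine even though the port is blocked at the firewall; the K8s API can be proxied the same way.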
Note: `VPC` can be an alternative to `VLAN` for cloud-based clusters if appropriate NAT rules are set up in the VPC gateway.
- The approach is to separate concerns between inbound traffic filtering rules and cluster traffic routing rules.
- For LAN-based clusters, specific `netfilter` rules have to be provisioned for each node (see below).
- For cloud-based clusters, most cloud vendors support firewalling features for VMs:
    - Such features allow configuration of filtering rules based on IPV4 addresses / ranges, protocols and ports.
    - This removes the burden of manually configuring `netfilter` rules during node provisioning.
    - Configured rules for a cloud-managed firewall are applied to all cluster nodes to block unwanted traffic.
- In a complementary manner, `kube-proxy` installs the routing rules required for cluster operation on each node.
- It is recommended to block IPV6 traffic completely.
- Example configuration if the private IPV4 subnetwork is `192.168.1.0/24`:

| proto | port | source address | rule | description / component |
| --- | --- | --- | --- | --- |
| TCP | 22 | All IPV4 | ACCEPT | SSH traffic |
| TCP | 6443 | `192.168.1.0/24` | ACCEPT | `kube-apiserver` |
| TCP | 2379-2380 | `192.168.1.0/24` | ACCEPT | `etcd` |
| TCP | 10257 | `192.168.1.0/24` | ACCEPT | `kube-controller-manager` |
| TCP | 10259 | `192.168.1.0/24` | ACCEPT | `kube-scheduler` |
| TCP | 10250 | `192.168.1.0/24` | ACCEPT | `kubelet` |
| TCP | 10256 | `192.168.1.0/24` | ACCEPT | `kube-proxy` |
| UDP | 8472 | `192.168.1.0/24` | ACCEPT | `flannel` VXLAN backend |
| TCP | 30001-30099 | All IPV4 | ACCEPT | `NodePort` services |
| TCP | 80 | All IPV4 | ACCEPT | Inbound HTTP |
| TCP | 443 | All IPV4 | ACCEPT | Inbound HTTPS |
| * | * | * | DROP | Drop all other traffic |
Note: other ports / protocols may be allowed as well depending on workload requirements.
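For LAN-based clusters, the table above can be translated into `netfilter` rules. Below is a sketch expressed as an `nftables` ruleset; the file name is an assumption, and the ruleset is written locally for review before being installed (typically as `/etc/nftables.conf`, loaded with `nft -f`):

```bash
# private subnet used by the cluster (assumed value from the example above)
SUBNET="192.168.1.0/24"

cat << EOF > ./bakraid-nftables.conf
#!/usr/sbin/nft -f
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        # allow established traffic and loopback
        ct state established,related accept
        iif "lo" accept
        # SSH from anywhere
        tcp dport 22 accept
        # control plane components, restricted to the private subnet
        ip saddr $SUBNET tcp dport { 6443, 10250, 10256, 10257, 10259 } accept
        ip saddr $SUBNET tcp dport 2379-2380 accept
        # flannel VXLAN backend
        ip saddr $SUBNET udp dport 8472 accept
        # NodePort services and inbound HTTP / HTTPS
        tcp dport 30001-30099 accept
        tcp dport { 80, 443 } accept
        # everything else, including unsolicited IPV6, falls through to the drop policy
    }
}
EOF

echo "ruleset written to ./bakraid-nftables.conf"
```

The `inet` family covers both IPV4 and IPV6, so the `drop` policy also blocks unsolicited IPV6 traffic as recommended above.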
- The approach is to have the cluster expose distributed workloads and set up load balancing between nodes elsewhere.
- For LAN-based clusters, this requires provisioning and configuration of a dedicated load balancer.
- For cloud-based clusters, most cloud vendors support load balancing features between VMs :
- Such features support flat TCP routing of incoming requests, thus allowing SSL termination at the workload level.
- They also offer configurable options for distribution algorithms, node healthchecks and session stickiness if needed.
- The cloud-managed load balancers must be configured to point to the `NodePort` services exposed on cluster nodes.
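For a LAN-based cluster, the dedicated load balancer could for instance be HAProxy in TCP mode. The sketch below writes a minimal config to a local file for review; the node addresses and the `NodePort` are hypothetical, and timeouts would be tuned to the workload:

```bash
# sketch: flat TCP load balancing towards NodePort services
# (node addresses and NodePort below are hypothetical)
cat << EOF > ./bakraid-haproxy.cfg
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# flat TCP routing: TLS terminates at the workload, not at the load balancer
frontend https_in
    bind *:443
    default_backend k8s_nodes

backend k8s_nodes
    balance roundrobin
    server kube-node-1 192.168.1.10:30001 check
    server kube-node-2 192.168.1.11:30001 check
EOF

echo "config written to ./bakraid-haproxy.cfg"
```

The `check` keyword enables node healthchecks, and `balance roundrobin` is one of the distribution algorithms mentioned above; session stickiness could be added with a `stick-table` if needed.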