This document explains how to use Two-Node Toolbox (TNT) to install Two-Node OpenShift clusters on external RHEL hosts that are not provisioned through the AWS hypervisor automation. This workflow is designed for environments like Beaker, lab systems, or any pre-existing RHEL 9 hosts.
The init-host.yml playbook provides the same host initialization functionality as the AWS hypervisor creation scripts, preparing your external RHEL host to run OpenShift two-node cluster deployments. It replaces the AWS-specific initialization steps with Ansible automation that works on any RHEL 9 system.
- Operating System: RHEL 9.x with minimal installation
- Hardware: 64GB+ RAM, 500GB+ storage (with sufficient space in `/home`)
- Network: Internet access for package downloads and registry access
- Access: SSH access with sudo privileges
- Ansible installed on your local machine
- SSH key pair for authentication
- Valid Red Hat subscription credentials (activation key recommended)
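Before initializing the host, it can help to confirm it meets the minimums above. The snippet below is a sketch, not part of TNT: `check_min` is a hypothetical helper, and the thresholds come straight from the requirements list. Run it on the external RHEL host.

```shell
# Hypothetical preflight check against the documented minimums
# (64GB+ RAM, 500GB+ storage with sufficient space in /home).
check_min() {
  # $1 = actual value, $2 = required minimum, $3 = label
  if [ "$1" -ge "$2" ]; then echo "OK: $3"; else echo "FAIL: $3"; fi
}

mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
free_gb=$(df -Pk /home | awk 'NR==2 {printf "%d", $4/1024/1024}')
check_min "$mem_gb" 64 "RAM (have ${mem_gb}GB, need 64GB+)"
check_min "$free_gb" 500 "free space in /home (have ${free_gb}GB, need 500GB+)"
```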
Copy the sample inventory file and configure it with your host details:
```shell
cd deploy/openshift-clusters
cp inventory.ini.sample inventory.ini
```

Edit inventory.ini with your external host information:

```ini
[metal_machine]
root@your-host-ip ansible_ssh_extra_args='-o ServerAliveInterval=30 -o ServerAliveCountMax=120'

[metal_machine:vars]
ansible_become_password=""
```

Important: Replace your-host-ip with the actual IP address or hostname of your RHEL system.
Tip: To skip the -i inventory.ini argument in all ansible commands, copy the inventory file to Ansible's default location (/etc/ansible/hosts on Linux, may vary on other operating systems).
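As a quick sanity check before any playbook run, you can confirm the inventory contains the expected group. The snippet below writes a throwaway sample under /tmp purely to keep the sketch self-contained; point the grep at your real inventory.ini.

```shell
# Illustrative only: write a sample inventory and verify the
# [metal_machine] group header that the playbooks target.
inv=/tmp/inventory.ini
cat > "$inv" <<'EOF'
[metal_machine]
root@your-host-ip ansible_ssh_extra_args='-o ServerAliveInterval=30 -o ServerAliveCountMax=120'

[metal_machine:vars]
ansible_become_password=""
EOF

if grep -q '^\[metal_machine\]' "$inv"; then
  echo "metal_machine group found"
fi
```

Once the group is in place, `ansible -i inventory.ini metal_machine -m ping` is a common way to verify SSH connectivity before the longer playbook run.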
You have several options for providing Red Hat subscription credentials:
Using environment variables:

```shell
export RHSM_ACTIVATION_KEY="your-activation-key"
export RHSM_ORG="your-organization-id"
```

See hands-off deployment for more details on how to obtain these values.

Using a local configuration file:

```shell
cp vars/init-host.yml vars/init-host.yml.local
# Edit vars/init-host.yml.local with your credentials
```

Passing credentials on the command line:

```shell
ansible-playbook init-host.yml -i inventory.ini \
  -e "rhsm_activation_key=your-key" \
  -e "rhsm_org=your-org"
```

Execute the initialization playbook:
```shell
# Using environment variables or local config file
ansible-playbook init-host.yml -i inventory.ini

# Or with command line parameters
ansible-playbook init-host.yml -i inventory.ini \
  -e "rhsm_activation_key=your-key" \
  -e "rhsm_org=your-org"
```

The init-host.yml playbook performs the following tasks to replicate AWS hypervisor initialization:
- Sets system hostname to match your deployment environment
- Adds SSH host keys to prevent connection prompts
- Creates `pitadmin` user with sudo access and a random password
- Configures Red Hat Subscription Manager
- Registers system using activation key or interactive credentials
- Enables required repositories:
- RHEL 9 BaseOS and AppStream
- OpenShift Container Platform repositories
- Installs essential development tools:
  - `git` - Required for dev-scripts
  - `make` - Essential for running dev-scripts Makefiles
  - `golang` - Required for Go-based tooling
  - `cockpit` - Web-based system management
  - `lvm2` - Logical volume management
  - `jq` - JSON processing tool
- If you need to configure dev-scripts to use a different path (`/home/dev-scripts` instead of `/opt/dev-scripts`, for example), add the following variable to your config_XXX.sh file: `export WORKING_DIR="/home/dev-scripts"`. This can help ensure sufficient disk space for OpenShift cluster deployment (80GB+ required)
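To apply the override above, you can append the line to the config file and source it. In this sketch a temp file stands in for your actual dev-scripts config (named config_XXX.sh above) so the example is self-contained:

```shell
# Sketch: add the WORKING_DIR override and confirm it takes effect.
# In practice the target is your real config_XXX.sh, not a temp file.
cfg=$(mktemp)
echo 'export WORKING_DIR="/home/dev-scripts"' >> "$cfg"

# dev-scripts sources the config file, so the variable becomes visible:
. "$cfg"
echo "dev-scripts working directory: $WORKING_DIR"
```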
After successful host initialization, your external RHEL system is ready for OpenShift cluster deployment. You can now proceed with the standard Two-Node Toolbox workflow:
Choose your preferred topology and run the setup playbook:
```shell
# Interactive mode
ansible-playbook setup.yml -i inventory.ini

# Non-interactive mode (arbiter topology)
ansible-playbook setup.yml -e "topology=arbiter" -e "interactive_mode=false" -i inventory.ini

# Non-interactive mode (fencing topology)
ansible-playbook setup.yml -e "topology=fencing" -e "interactive_mode=false" -i inventory.ini
```

For fencing topology using kcli:

```shell
ansible-playbook kcli-install.yml -i inventory.ini
```

After successful cluster deployment (using either setup.yml or kcli-install.yml), the inventory file is automatically updated to include the cluster VMs. This allows you to run Ansible playbooks directly on the cluster nodes from your local machine.
The deployment automatically discovers running cluster VMs and adds them to the inventory with ProxyJump configuration through the hypervisor. For more details on using this feature, see:
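As a purely hypothetical illustration (the group name, host names, and addresses below are assumptions, not the values TNT actually writes), the auto-updated inventory might resemble:

```ini
[cluster_nodes]
master-0 ansible_host=192.168.111.20 ansible_ssh_common_args='-o ProxyJump=root@your-host-ip'
master-1 ansible_host=192.168.111.21 ansible_ssh_common_args='-o ProxyJump=root@your-host-ip'
```

`ProxyJump` routes each SSH connection through the hypervisor host, which is why the cluster VMs are reachable from your local machine even though they live on the hypervisor's internal network.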