⚠️ Experimental — This project is a work-in-progress. Individual CLI steps have been tested, but the full end-to-end workflow has not been verified as a single uninterrupted flow. Expect rough edges, missing error handling, and steps that may need manual intervention. Contributions and bug reports welcome.
Deploy MikroTik RouterOS CHR on Oracle Cloud and AWS — with secure serial console provisioning.
Handles the full lifecycle: download → convert → upload → import → network → launch → configure → connect. RouterOS is provisioned securely via serial console before any ports are opened to the internet.
Supported providers:
- Oracle Cloud (OCI) — Always Free ARM64 (A1.Flex) and x86 (E2.1.Micro)
- AWS EC2 — x86 (t3.micro free tier eligible)
Note on ARM64: OCI ARM64 works. AWS ARM64 (Graviton) boots on t4g.small (2GB+) but has no network — the ARM64 CHR kernel lacks the ENA driver required by AWS Nitro networking. See AWS.md for the full investigation.
Tools (install once):

```shell
# Bun runtime (required)
curl -fsSL https://bun.sh/install | bash

# QEMU tools (for OCI image conversion — not needed for AWS)
brew install qemu         # macOS
# apt install qemu-utils  # Linux

# Oracle Cloud CLI (for account setup only)
brew install oci-cli      # macOS
```

Oracle Cloud account — sign up for free. See OCI.md for details.
Region matters for ARM64. Regions with 3 availability domains have better capacity: us-ashburn-1, us-phoenix-1, eu-frankfurt-1. See OCI.md.
```shell
oci setup config
# Accept defaults, enter your User/Tenancy OCIDs and region
# Upload the public key to OCI Console → Profile → API Keys
```

```shell
bunx chr-armed convert stable --arch=arm64
bunx chr-armed upload stable --arch=arm64
bunx chr-armed import stable --arch=arm64
bunx chr-armed setup-network
CHR_IMAGE_ID=ocid1.image... bunx chr-armed launch stable --arch=arm64
CHR_INSTANCE_ID=ocid1.instance... bunx chr-armed provision --ports=ssh,webfig
CHR_INSTANCE_ID=ocid1.instance... bunx chr-armed ssh
```

```shell
# Bun runtime
curl -fsSL https://bun.sh/install | bash

# AWS CLI (for credential setup only)
brew install awscli  # macOS
```

AWS account with IAM setup done by root — see AWS.md:
- Create the `vmimport` service role (required by AWS image import)
- Create an IAM user with `AmazonEC2FullAccess` + `AmazonS3FullAccess` + a custom inline policy for `ec2-instance-connect` (serial console)

```shell
aws configure
# Enter your Access Key ID, Secret Access Key, region (us-east-1), output (json)
```

```shell
bunx chr-armed setup-aws --provider=aws
# Creates vmimport IAM role + enables serial console
```

```shell
bunx chr-armed convert stable --arch=x86 --provider=aws
bunx chr-armed upload stable --arch=x86 --provider=aws
bunx chr-armed import stable --arch=x86 --provider=aws
bunx chr-armed setup-network --provider=aws
CHR_AMI_ID=ami-xxx bunx chr-armed launch stable --arch=x86 --provider=aws
CHR_INSTANCE_ID=i-xxx bunx chr-armed provision --ports=ssh,webfig --provider=aws
CHR_INSTANCE_ID=i-xxx bunx chr-armed ssh --provider=aws
```

No qemu-img needed! AWS accepts raw disk images directly.
- Firewall locked — all public ingress blocked (security group or security list)
- Serial console connects — SSH tunnel via cloud console service (no network needed)
- First-boot flow — accepts license, sets admin password
- Configuration applied — creates user, adds SSH key, disables default admin
- Ports opened — only after RouterOS is hardened
Your instance is never exposed to the internet with default credentials.
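The hardening applied during the provision step can be pictured as a handful of RouterOS commands sent over the serial console. This is an illustrative sketch, not the exact script chr-armed sends; the user name, password, and key file name are placeholders:

```
# After accepting the license and setting the admin password at first boot:
/user add name=chr-user group=full password=<generated-password>
/user ssh-keys import public-key-file=chr_key.pub user=chr-user
/user disable admin
# Only after this does chr-armed open any firewall ports for SSH/WebFig.
```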
Run `bunx chr-armed help` for the full reference.
| Command | Description |
|---|---|
| `help` | Show command reference |
| `docs [oci\|aws]` | View setup guide in terminal |
| `version <channel>` | Show latest CHR version |
| `convert <channel>` | Download and convert CHR image |
| `upload <channel>` | Upload image to cloud storage |
| `import <channel>` | Import image into cloud compute |
| `setup-network` | Create network resources |
| `setup-aws` | One-time AWS setup (vmimport + serial console) |
| `teardown-network` | Delete network resources |
| `launch <channel>` | Launch a CHR instance |
| `provision` | Configure RouterOS via serial console |
| `open-ports` | Open firewall ports |
| `close-ports` | Lock down all ports |
| `terminate [id]` | Terminate an instance |
| `ssh [id]` | SSH into an instance |
Options: `--provider=oci|aws`, `--arch=arm64|x86`, `--ports=ssh,webfig,api`, `--region=<region>`
Channels: `stable`, `long-term`, `testing`, `development`
- OCI path — Zero external dependencies. Uses the OCI REST API directly with custom RSA-SHA256 request signing.
- AWS path — Uses AWS SDK v3 (`@aws-sdk/client-ec2`, `client-s3`, `client-ec2-instance-connect`).
- Bun runtime — Required for `Bun.spawn`, `Bun.sleep`, and subprocess management. Not compatible with Node.js.
- Serial console provisioning — Configures RouterOS over an SSH-tunneled serial console, not cloud-init.
See DESIGN.md for architectural decisions, OCI.md for OCI setup, and AWS.md for AWS setup.
"Out of host capacity" (OCI) — ARM64 A1.Flex is oversubscribed on free tier. Try off-peak hours or a 3-AD region.
"NotAuthenticated" / 401 (OCI) — Check ~/.oci/config key_file path and upload public key to OCI Console.
"CredentialsProviderError" (AWS) — Run aws configure to set up ~/.aws/credentials.
vmimport role missing (AWS) — This is a one-time root/admin task. See AWS.md for the exact steps.
Serial console hangs — OCI requires ssh-rsa algorithm support; AWS ephemeral keys expire in 60s.
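For the OCI case, one workaround is to re-enable the legacy algorithm just for Oracle's console endpoints in `~/.ssh/config`. The host pattern below is an assumption; match it against the connection string the OCI Console actually gives you:

```
Host *.oraclecloud.com
    HostKeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa
```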
What's been tested (individually, not as a single run):
| Step | OCI ARM64 | OCI x86 | AWS x86 |
|---|---|---|---|
| Download + convert | ✅ | ✅ | ✅ (no convert needed) |
| Upload | ✅ | ✅ | ✅ |
| Import → image | ✅ | ✅ | ✅ |
| Launch instance | ✅ | ✅ | ✅ |
| Serial console provision | ✅ | ✅ | ✅ |
| SSH key auth | ✅ | ✅ | ✅ |
Known issues:
- AWS ARM64 (Graviton) boots on t4g.small (2GB+) but has no network — missing ENA driver in ARM64 CHR kernel
- AWS ARM64 on t4g.micro (1GB) shuts down immediately — not enough RAM for first-boot extraction
- OCI ARM64 capacity is limited on free tier — may get "Out of host capacity"
- The full workflow has not been tested as a single `step 1 → step N` run
- Requires the Bun runtime (not Node.js compatible due to `Bun.spawn`/`Bun.sleep`)
MIT