This repository demonstrates the efficient use of Terraform functions to manage infrastructure as code without duplicating resources. The focus is on creating modular, scalable, and maintainable Terraform configurations.
It also demonstrates the use of various Terraform functions such as lookup, count, and conditional expressions, along with provisioners (remote-exec, local-exec). The goal is to dynamically manage infrastructure using variables, conditional logic, and provisioning tasks.
In this project, we will utilize Terraform functions and techniques to create a cloud infrastructure with multiple instances and subnets efficiently. We aim to minimize duplication in our code by using various Terraform functionalities such as count, for_each, locals, and dynamic blocks.
- Clone the repository.
- Streamline Terraform configuration files by removing unnecessary variables and resources.
- Implement best practices for variable management and resource creation.
- main.tf: Main configuration file containing resource definitions.
- variables.tf: File for variable definitions.
- terraform.tfvars: File for variable values.
- locals.tf: File for local variables.
- subnet.tf: File dedicated to managing subnet resources.
- routing_table.tf: File for route table configurations.
- sg.tf: File for security group configurations.
- ec2.tf: Main file to create EC2 instances.
- variables.tf: Define variables such as AMIs, instance type, key name, and environment.
- terraform.tfvars: Assign values to variables such as AMI IDs for different regions and the environment.
- null.tf: Implements null_resource to run scripts without recreating instances.
- userdata.sh: Script to install software on EC2 instances after they are created.
Start by cloning the repository to your local environment.
- Remove:
- Access Key and Secret Key
- AMI
- Internet Gateway (IGW)
- All CIDR and Subnet entries
- Keep:
- Availability Zones (AZs)
- Environment (ENV)
- Define Variables:
  - Create a variable for Public_cidr_block to manage the creation of 6 subnets (3 private and 3 public).
  - Create a variable for Private_cidr_block.
- Copy all relevant variables from variables.tf and paste them into terraform.tfvars.
- Remove routing table configurations to let them inherit the VPC name.
- Remove Access Key and Secret Key entries.
- Paste remote backend configuration.
- Update VPC Tags: Instead of passing values for each tag, utilize locals for common tag values.
- Define local variables for common tag values.
- Access local variables in the VPC configuration using the appropriate syntax.
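A minimal sketch of how common tags might be centralized in locals.tf and referenced from the VPC resource (the variable name var.env and the resource names here are illustrative assumptions, not taken from the repository):

```hcl
# locals.tf -- define common tag values once
locals {
  common_tags = {
    Environment = var.env
    ManagedBy   = "terraform"
  }
}

# main.tf -- merge shared tags with resource-specific ones
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = merge(local.common_tags, {
    Name = "${var.env}-vpc"
  })
}
```

Any resource in the configuration can now reuse local.common_tags instead of repeating each tag value.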
- Remove additional public subnets (subnet 2 and 3).
- Use count = 3 to create the necessary number of public subnets.
- Utilize the element function to reference specific CIDR blocks based on the count index.
- Rename resources to reflect they are private.
- Adjust tags accordingly.
- Define separate route tables for public and private subnets.
- Comment out route table associations temporarily.
- Use terraform plan to preview subnet configurations.
- Move all subnet resources to subnet.tf.
- Use count.index + 1 to manage subnet indexing dynamically.
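The subnet steps above can be sketched as a single resource block (variable names such as public_cidr_block and azs, and the VPC reference, are assumptions):

```hcl
# subnet.tf -- one block creates all three public subnets
resource "aws_subnet" "public" {
  count = 3

  vpc_id            = aws_vpc.main.id
  cidr_block        = element(var.public_cidr_block, count.index)
  availability_zone = element(var.azs, count.index)

  tags = {
    # count.index is zero-based, so add 1 for human-friendly names
    Name = "public-subnet-${count.index + 1}"
  }
}
```

A parallel block with var.private_cidr_block covers the three private subnets.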
- Move all route table blocks to this file.
- Address subnet ID issues by ensuring the correct variable references.
- Introduce Splat syntax for managing multiple subnet associations.
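Splat syntax lets one association block cover every subnet; a sketch under the assumption that the subnets and route table are named as below:

```hcl
# routing_table.tf -- associate each public subnet with the public route table
resource "aws_route_table_association" "public" {
  # the splat expression aws_subnet.public.*.id yields a list of all subnet IDs
  count = length(aws_subnet.public.*.id)

  subnet_id      = element(aws_subnet.public.*.id, count.index)
  route_table_id = aws_route_table.public.id
}
```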
- Copy necessary configurations from main.tf into sg.tf.
- Add ports 443 and 22 to the security group.
- Implement dynamic ingress rules by creating a service_ports variable.
- Populate this variable with values for multiple ports: ["80", "8080", "443", "8443", "22", "3306", "1433"].
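A sketch of the dynamic ingress block driven by that variable (the security group's name and the open 0.0.0.0/0 source are illustrative assumptions):

```hcl
variable "service_ports" {
  type    = list(string)
  default = ["80", "8080", "443", "8443", "22", "3306", "1433"]
}

# sg.tf -- one dynamic block generates an ingress rule per port
resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = aws_vpc.main.id

  dynamic "ingress" {
    for_each = var.service_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}
```

Adding or removing a port now means editing one list instead of copying an entire ingress block.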
- Run terraform fmt to format the configuration files.
- Execute terraform plan and terraform apply to validate and deploy the infrastructure.
- Check inbound and outbound rules to ensure proper configuration.
The lookup function helps dynamically retrieve AMI IDs based on the region.
Example:

```hcl
variable "amis" {
  type = map(string)
}

# In terraform.tfvars
amis = {
  us-east-1 = "ami-0abcd1234efgh5678"
  us-east-2 = "ami-0wxyz1234mnop5678"
}

# In ec2.tf
ami = lookup(var.amis, var.aws_region)
```

This setup allows us to deploy EC2 instances using region-specific AMIs; an AMI ID valid in us-east-1 will not work in us-east-2.
We declare three subnets, and each subnet must map to one EC2 instance. By using count, we can define how many instances to create based on the number of subnets.

```hcl
count     = length(var.public_cidr_block)
subnet_id = element(var.subnets, count.index)
```

Using a condition, we can decide how many instances to create based on the environment.

```hcl
count = var.environment == "Prod" ? 3 : 1
```

This means that if the environment is Prod, 3 instances are created; otherwise, 1 instance is created.
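Putting the pieces together, ec2.tf might look like the following sketch (the resource name, var.instance_type, and var.key_name are assumptions):

```hcl
resource "aws_instance" "web" {
  # conditional count: 3 instances in Prod, 1 everywhere else
  count = var.environment == "Prod" ? 3 : 1

  # region-specific AMI via lookup
  ami           = lookup(var.amis, var.aws_region)
  instance_type = var.instance_type
  subnet_id     = element(var.subnets, count.index)
  key_name      = var.key_name

  tags = {
    Name = "${var.environment}-web-${count.index + 1}"
  }
}
```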
We use provisioners to apply scripts after EC2 instances are created without recreating the instances.
- User Data: Initially, the user data script is passed during instance creation.
- Provisioners: To avoid recreating instances for every change, we use null_resource to run scripts or commands on existing instances.
Example:
```hcl
resource "null_resource" "cluster" {
  count = length(var.public_cidr_block)

  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("path/to/key.pem")
      # index into the counted instances so each null_resource
      # connects to its own instance
      host = element(aws_instance.example.*.public_ip, count.index)
    }

    inline = [
      "sudo bash /tmp/script.sh"
    ]
  }
}
```

If we need to recreate a resource, we can use Terraform's taint feature. Marking a resource as "tainted" forces Terraform to recreate it during the next apply.
Example:

```shell
terraform taint null_resource.cluster
```

This marks the resource as needing recreation, allowing the new script to be applied without affecting the rest of the infrastructure. Note that for resources created with count, recent Terraform versions require an index, e.g. terraform taint 'null_resource.cluster[0]'.
```shell
terraform init      # Initialize Terraform
terraform fmt       # Format the code
terraform validate  # Validate the configuration
terraform apply     # Apply the configuration
terraform taint null_resource.cluster
terraform apply     # Re-apply after tainting
```

- Explore Terraform Modules for better structuring and reuse of code.
What is taint in Terraform?
Taint marks a resource for recreation. You can manually taint a resource using the terraform taint command, causing Terraform to destroy and recreate it during the next apply. Conversely, you can "untaint" a resource to prevent it from being recreated.
By following these steps and utilizing Terraform functions, we can efficiently manage our cloud infrastructure with minimal duplication and improved scalability. This project serves as a template for creating robust Terraform configurations.