Error: creating Auto Scaling Group : ValidationError: You must use a valid fully-formed launch template. Value () for parameter groupId is invalid. The value cannot be empty

As DevOps engineers, we often encounter cryptic error messages while provisioning infrastructure. In this post, we will work through one such error encountered while creating an Auto Scaling Group in AWS, along with a possible fix.

Error:
Error: creating Auto Scaling Group : ValidationError: You must use a valid fully-formed launch template. Value () for parameter groupId is invalid. The value cannot be empty
│ status code: 400, request id: aac314dd-082f-4634-82ca-8bd6b9fe69a6

Background

We are trying to create an Auto Scaling group with a launch template.

# Auto Scale Group
resource "aws_autoscaling_group" "osc_asg_application" {
  name                = "osc-asg-application"
  max_size            = 1
  min_size            = 1
  desired_capacity    = 1
  vpc_zone_identifier = [aws_subnet.osc_application.id]
  launch_template {
    id      = aws_launch_template.osc_application_launch_template.id
    version = "$Latest"
  }
  tag {
    key                 = "name"
    value               = "osc-asg-application"
    propagate_at_launch = true
  }
}
# Launch Template
resource "aws_launch_template" "osc_application_launch_template" {
  name                   = "osc-application-launch-template"
  image_id               = data.aws_ami.cloudfront_ami.image_id
  key_name               = "demokey"
  instance_type          = "t2.micro"
  security_group_names = [aws_security_group.osc_security_group.id]
  block_device_mappings {
    device_name = "/dev/sdf"
    ebs {
      volume_size = 8
      volume_type = "gp3"
    }
  }

  dynamic "tag_specifications" {
    for_each = var.tag_resource
    content {
      resource_type = tag_specifications.value
      tags = {
        Name = "osc-application-resource"
      }
    }
  }

  tags = {
    Name = "osc-application-launch-template"
  }
}

When we ran terraform plan and terraform apply on the above code, it failed with the following error.


Error: creating Auto Scaling Group : ValidationError: You must use a valid fully-formed launch template. Value () for parameter groupId is invalid. The value cannot be empty
│ status code: 400, request id: aac314dd-082f-4634-82ca-8bd6b9fe69a6

Upon investigating the issue further, we found that the security group argument passed in the launch template was wrong. We had passed the argument security_group_names instead of vpc_security_group_ids. The security_group_names argument expects EC2-Classic security group names, so the security group ID we supplied never reached the template's network configuration, which explains the empty groupId in the error.

We updated the security group argument in the launch template. The updated, working launch template code is below:

resource "aws_launch_template" "osc_application_launch_template" {
  name                   = "osc-application-launch-template"
  image_id               = data.aws_ami.cloudfront_ami.image_id
  key_name               = "demokey"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.osc_security_group.id]
  block_device_mappings {
    device_name = "/dev/sdf"
    ebs {
      volume_size = 8
      volume_type = "gp3"
    }
  }

  dynamic "tag_specifications" {
    for_each = var.tag_resource
    content {
      resource_type = tag_specifications.value
      tags = {
        Name = "osc-application-resource"
      }
    }
  }

  tags = {
    Name = "osc-application-launch-template"
  }
}
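
Note that the launch template above references data.aws_ami.cloudfront_ami, which is not shown in the original snippet. For completeness, a typical AMI lookup might look like the following sketch (the owner and name filter are illustrative assumptions, not the original values):

// Hypothetical sketch of the AMI data source referenced above
data "aws_ami" "cloudfront_ami" {
  most_recent = true
  owners      = ["amazon"] // illustrative owner
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"] // illustrative name pattern
  }
}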

Terraform: error configuring S3 Backend: no valid credential sources for S3 Backend found.

Configuring a Terraform backend for remote state storage can be quite challenging if the correct parameters are not passed.

We can get multiple errors while executing the terraform init command, depending upon which configuration arguments we miss or upon the permissions defined for our AWS profile or IAM user/role.

Here, we will focus on valid credential errors.

Error: Error configuring S3 Backend: no valid credential sources for S3 Backend found.

Terraform backend configuration code: the backend was defined as below; we specified only the bucket, key, and region.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
  backend "s3" {
    bucket = "terraform-backend"
    key    = "misc-infra-terraform-state"
    region = "us-east-1"
  }
}

When we ran the terraform init command for the above configuration, we got the below error:

E:\terraform>terraform init

Initializing the backend...
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│       For verbose messaging see aws.Config.CredentialsChainVerboseErrors

In our case, the fix was pretty easy: the above terraform backend block was missing AWS credentials. We passed the credential details, and it worked like a charm!

Updated S3 Backend configuration

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
  backend "s3" {
    bucket  = "terraform-backend"
    key     = "misc-infra-terraform-state"
    region  = "us-east-1"
    profile = "myaws_profile"
  }
}
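
Alternatively, we can keep credentials out of the backend block entirely and supply the profile through the environment. A minimal sketch, assuming the same profile name, using the Windows command prompt seen earlier:

E:\terraform>set AWS_PROFILE=myaws_profile
E:\terraform>terraform init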

Terraform – GitHub Repository Creation

GitHub is one of the providers supported by Terraform. We can manage GitHub through Terraform: repository creation, GitHub Actions, etc.

The official Terraform GitHub provider documentation can be found on the Terraform Registry.

In the below example, we will create a basic private repository in our GitHub account.

Authentication Method

Terraform supports two types of authentication for GitHub:

  • OAuth / Personal Access Token
  • GitHub App Installation

OAuth / Personal Access Token

This is the easiest method for authenticating to GitHub from Terraform. We will need to create a token in our GitHub account and pass the token value to our Terraform provider block.

Steps to Generate an OAuth / Personal Access Token

  • Log in to your GitHub account at https://github.com
  • Click on the profile icon at the top-right corner and, in the navigation menu, click on Settings
  • In the settings navigation panel, click on Developer settings
  • On the Developer settings page, select Personal access tokens, then select Tokens (classic)
  • You can select Fine-grained tokens instead, but at the time of writing they were in beta, so we will go with classic tokens.
  • Click on Generate new token and select Generate new token (classic).
  • On the New personal access token (classic) page, update the following:
    • Note – your token usage description.
    • Expiration – set the shortest expiration you can work with, for security purposes.
    • Select scopes – select only the permissions you want to grant to your token; do not select all scopes, only the minimum required.
  • Once you update the above details, click on Generate token
  • Copy the generated token, as we will need it in our terraform code

Note: The token used in this example has been deleted and no longer exists.

Terraform Code

Define the provider block using GitHub as the provider; in our example, we have defined it in the provider.tf file.

terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "~> 5.0"
    }
  }
}

provider "github" {
  token = var.token
}

Now define your variables in the variable.tf file.

variable "token" {
  description = "Enter your git token"
  default     = "ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxrfR1c"
}
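
Hard-coding a token as a default is convenient for a demo, but in practice you may not want the token in source control. A minimal sketch of an alternative (the sensitive flag requires Terraform 0.14 or later):

variable "token" {
  description = "Enter your git token"
  sensitive   = true // redacts the value from plan/apply output
}

The value can then be supplied at runtime, for example through Terraform's TF_VAR_token environment variable, instead of being stored in the file.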

Let us define the GitHub repository resource block in the main.tf file. The output for our configuration will be the HTTP clone URL of the newly created repository.

resource "github_repository" "my-terraform-repo" {
  name        = "my-terraform-repo"
  description = "Git repository created using Terraform"
  visibility  = "private"
}

# get the http clone url 
output "get_clone_url" {
  value = github_repository.my-terraform-repo.http_clone_url
}

We have completed the configuration; now let us run terraform init to download the GitHub provider plugins.

terraform init 

The GitHub provider plugins are downloaded, and now we will run a plan to preview our GitHub repository changes.

terraform plan -out gitrepo.tfplan

To create the GitHub repository, we will run the apply command with the above plan file.

terraform apply gitrepo.tfplan

Terraform has created our repository successfully; we can cross-verify by checking it in our GitHub account.

Terraform – Destroy Command

The terraform destroy command is the way to destroy infrastructure created using Terraform.

Often we need to destroy all, or only specific, resources that we are managing with Terraform. terraform destroy covers both cases.

Syntax

terraform destroy <options>

To delete all managed resources, we can use the command directly. You will be prompted for confirmation prior to destroying the managed resources.

Option 1:

terraform destroy

Terraform will list the resources to be destroyed and will prompt for confirmation.

Note: The confirmation is case-sensitive and must be exactly "yes"; any other word, even "Yes" or "YES", will result in cancellation of the destruction.

Option 2:

To destroy specific resources from your managed resources, use the --target flag along with the destroy command.

Syntax

terraform destroy --target <resource_id>

If you do not know your <resource_id>, there is an easy way to get it: run the state list command, which will list all the resources in your state file.

terraform state list

Once you have the resources listed, copy the resource id and use it in your destroy command as the target resource.

terraform destroy --target aws_vpc.siva_terraform_vpc[0]

If you do not want a user prompt/confirmation for resource deletion, pass the --auto-approve flag.

Syntax

terraform destroy --auto-approve

All the resources get terminated without any user confirmation.

Often we may want to delete all resources, except a few. In such a case, we can use the terraform state rm command to remove the resource from the state file and then run the terraform destroy command.

Syntax

terraform state rm <resource_id>
terraform state rm aws_vpc.siva_terraform_vpc[0]

Now, if you run the terraform destroy command, all resources except the one we removed from the state file will be terminated, since Terraform no longer considers the removed resource as managed.
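
Putting the above together, a consolidated sketch using the same example resource:

terraform state list
terraform state rm aws_vpc.siva_terraform_vpc[0]
terraform destroy --auto-approve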


AWS EC2 – Unable to install/download packages from amazon repo to EC2 instance

We may face this issue when connecting to the Amazon/Ubuntu package repositories to download or install packages like MySQL, Apache, Nginx, etc.

We may get a connection time-out error when we run the sudo apt install mysql-server command. The error can occur for any package; in our example, we are installing MySQL server.

Error

root@ip-10-9-9-58:/home/ubuntu# sudo apt install mysql-server

Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.91.65.63), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.207.133.243), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.87.19.168), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.165.17.230), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (3.87.126.146), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (3.209.10.109), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (18.232.150.247), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.201.250.36), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.237.137.22), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.73.36.184), connection timed out

The issue may occur in private or public instances. The most common solution is to first check the security groups assigned to your instance.

Steps

Login to your AWS console > EC2 > Security Groups,

Select the security group assigned to your instance, click on the Outbound rules tab, then on Edit outbound rules.

Ensure we have HTTP TCP protocol for port 80 opened on outbound rules of the assigned security group.

If the rule for HTTP TCP port 80 is missing, add a new rule allowing HTTP (TCP port 80) to destination 0.0.0.0/0 and save the changes.

Now, try to install the package, it should connect over the internet and install the package successfully.

Restricted Outbound Access

To solve the issue above, we allowed all outbound traffic. In some cases, due to security restrictions, the organization may not allow you to open outbound traffic to all IP ranges.

Best practice says we should grant the minimum permissions required.

To accomplish our security goals, we can restrict the outbound traffic to specific Amazon repo mirror IPs.

As we saw in the above error, apt tries several of the repository mirrors to download and install the package. We can copy any of these mirror IP addresses and use them to restrict outbound traffic in our security group.

root@ip-10-9-9-58:/home/ubuntu# sudo apt install mysql-server

Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.91.65.63), connection timed out

In this example, we will open outbound only for IP 52.91.65.63.

Login to your AWS console > EC2 > Security Groups, select the assigned security group, and click on the Edit Outbound rules tab, then on Edit outbound rules.

Select the HTTP TCP port 80 rule and change 0.0.0.0/0 to 52.91.65.63/32, then save the changes. This restricts the outbound rule for HTTP TCP port 80 to the single IP address 52.91.65.63.

Note: Security Groups do not allow a bare IP address without a CIDR suffix; even a single IP address must be written as a CIDR block.

In our example, for a single IP address, we have added the /32 suffix.

You can change the CIDR block range based on your IP requirements.
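
Since the rest of this post manages infrastructure with Terraform, here is a sketch of the same restricted rule as a Terraform egress block (the resource name and description are illustrative):

// Sketch: outbound HTTP restricted to a single repository mirror IP
resource "aws_security_group" "restricted_outbound" {
  name        = "restricted-outbound"
  description = "Allow outbound HTTP to one repository mirror only"

  egress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["52.91.65.63/32"] // single IP, expressed as a /32 CIDR
  }
}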

AWS EC2 – Windows SSH – Permissions for public / SSH key are too open

Many of us have encountered bad-permission errors for the private key while accessing a Linux/Ubuntu/Unix box from a Windows 10 system.

You may face this issue while using a new set of keys.

Error

E:\AWS\key>ssh -i ./my-key.pem ubuntu@10.0.0.1
The authenticity of host '10.0.0.1 (10.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:HxAA3hSzLSd/TRcZtXUYrjfZ0C9jL7fXmAZigM5p3+I.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.0.0.1' (ECDSA) to the list of known hosts.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions for './my-key.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "./my-key.pem": bad permissions
ubuntu@10.0.0.1: Permission denied (publickey).

The fix is pretty simple: we just need to set the right permissions on the .pem (private key) file. The most common mistake while fixing this issue is granting the permissions to the wrong user.

We first need to confirm the exact user we are logged in to Windows as.

To verify the user details, run the below command in your command prompt:

E:\AWS\key>whoami
desktop-4455tbos\myuser

Copy the user details; we will need them in a later step.

Steps to set the pem (private key) file permissions

Browse and navigate to your private key directory.

Right-click on the key file name and click on Properties.

Select the Security tab and click on Advanced.

On the Advanced Security Settings panel, click on "Disable inheritance".

On the Block Inheritance prompt, select "Remove all inherited permissions from this object".

All existing permissions will be removed; ensure the permission list now has zero entries.

Now click on the "Add" button, and you should get a pop-up to add permissions and users. Click on "Select a principal".

On the "Select User or Group" panel, enter the username we noted earlier and click on "Check Names".

You should see your resolved username. Once validated, click on OK.

Under Basic permissions, check "Full control" and apply the changes.

You should now see your username with all permissions on the key's Security tab.
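
The same fix can also be applied from the command prompt with icacls; a sketch, assuming the username reported by whoami earlier:

:: Remove inherited permissions, then grant only the current user read access
icacls my-key.pem /inheritance:r
icacls my-key.pem /grant:r "desktop-4455tbos\myuser:(R)"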

Now, let us connect over SSH:

ssh -i ./my-key.pem ubuntu@<your instance IP>
E:\keys>ssh -i ./my-key.pem ubuntu@10.0.0.1
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-1020-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

We should be able to connect to our instance.

EC2 – Instance user data fail – [WARNING]: Failed to run module scripts-user

Recently, we faced an issue while executing user_data on an EC2 instance. The instance was created from an Ubuntu AMI (22.04, per the cloud-init log below). The user_data failed to execute the commands without any specific error message.

We tried different solutions available online like below,

  • Added / Removed #!/usr/bin/bash -x
  • Added / Removed #!/usr/bin/bash -v -e
  • Changed the interpreter of bash from #!/usr/bin/bash to #!/bin/bash

None of the above fixes worked for us, so we troubleshot it further by analyzing the user_data logs on the server.

The logs of user_data execution on an Ubuntu server can be found at, /var/log/cloud-init-output.log file.

The error recorded in the file /var/log/cloud-init-output.log was as below,

Cloud-init v. 22.2-0ubuntu1~22.04.3 running 'modules:final' at Fri, 30 Sep 2022 04:56:41 +0000. Up 24.86 seconds.
2022-09-30 04:56:41,635 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
2022-09-30 04:56:41,635 - util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_user.py'>) failed
Cloud-init v. 22.2-0ubuntu1~22.04.3 finished at Fri, 30 Sep 2022 04:56:41 +0000. Datasource DataSourceEc2Local.  Up 25.26 seconds

Based on the errors recorded in the cloud-init-output.log file, we inspected the user_data recorded on the server. The user_data details on the host server can be found at the following locations:

  • The cloud-config file is located at /var/lib/cloud/instance/cloud-config.txt
  • The user data can be found at, /var/lib/cloud/instance/scripts/
  • On certain instances, you may see symlinks with the instance ID at the above script location. The user data is stored in files named part_001, etc.

Inspecting the user data captured on the host server showed spaces at the start of the first line, before the bash interpreter #!/bin/bash.

These leading spaces were the culprits in our case: user data is only recognized as a shell script when the #! of the interpreter line is at the very start of the script.

To fix the issue, we removed those spaces, re-published the user_data, and launched a new instance. The new instance executed the user_data without any failures.

The new user_data captured/recorded on the newly launched EC2 instance has no leading spaces.
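
This mistake is easy to make when user_data is embedded as a Terraform heredoc. A minimal sketch (the AMI ID is illustrative): Terraform's <<-EOF strips the common leading indentation, whereas a plain <<EOF keeps the spaces and reproduces this failure.

resource "aws_instance" "demo" {
  ami           = "ami-xxxxxxxx" // illustrative AMI ID
  instance_type = "t2.micro"

  // <<-EOF strips the leading indentation so #!/bin/bash starts at column 0;
  // a plain <<EOF here would keep the spaces and break the interpreter line.
  user_data = <<-EOF
    #!/bin/bash
    apt-get update -y
  EOF
}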

Dynamic Terraform State Configuration

Terraform doesn’t support passing dynamic variables for state files or backend blocks.

For example, let us consider a backend defined with static values, like the S3 backend configuration shown earlier in this post.

Now, we have a scenario where we do not want to pass the values in the backend block in a static format; instead, we want the values to be dynamic.

We will rewrite the code and create two variables for the bucket name and bucket object (key), and pass these values in the backend block as variables instead of static values.
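
A sketch of that rewritten configuration, reconstructed to match the error output below (the variable names and values come from that output; the region is assumed):

variable "backend_bucket" {
  default = "s3-backend-demo"
}

variable "backend_key" {
  default = "backend"
}

terraform {
  backend "s3" {
    bucket = var.backend_bucket #"s3-backend-demo"
    key    = var.backend_key    #"backend"
    region = "us-east-1"        # assumed, not shown in the error output
  }
}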

When we run the terraform init command against the above configuration, it throws errors, as variables are not allowed in the backend block.

E:\Projects\terraform>terraform init -migrate-state

Initializing the backend...
Backend configuration changed!

Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.

╷
│ Error: Variables not allowed
│
│   on main.tf line 9, in terraform:
│    9:     bucket = var.backend_bucket #"s3-backend-demo"
│
│ Variables may not be used here.
╵

╷
│ Error: Variables not allowed
│
│   on main.tf line 10, in terraform:
│   10:     key = var.backend_key #"backend"
│
│ Variables may not be used here.
╵

To resolve the issue, Terraform allows us to pass the backend values dynamically through -backend-config arguments on the command line.

In your backend configuration block, remove the details that you want to pass dynamically. In our example, we are using s3 as the backend and passing the bucket name and object (key) values.

We will remove those attributes from the backend configuration block and instead pass them through -backend-config arguments.

Passing backend configuration arguments values through a command prompt,

terraform init -backend-config="bucket=s3-backend-demo" -backend-config="key=backend"

In case we have an existing state file and we want to move to the new backend arguments, we will need to run the init command with the -migrate-state argument.

terraform init -migrate-state -backend-config="bucket=s3-backend-demo" -backend-config="key=backend"
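
The same arguments can also live in a partial configuration file instead of on the command line. A sketch, where the file name backend.hcl is our own choice:

# backend.hcl
bucket = "s3-backend-demo"
key    = "backend"

terraform init -backend-config=backend.hcl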

Terraform – Security Group

A Security Group acts as a virtual firewall, controlling the traffic flowing in and out of the EC2 resources associated with it.

VPCs, when created, come with a default security group; we can add additional security groups as per our requirements.

Security Groups operate at the instance level, whereas network ACLs operate at the subnet level. Security Groups support allow rules only and are stateful.

Security Groups consist of rules, which control traffic based upon protocols and port numbers; there are separate sets of rules for inbound and outbound traffic.

Security Groups allow you to restrict those rules to a certain IP CIDR range, or you can allow traffic from everywhere: 0.0.0.0/0 (IPv4) and ::/0 (IPv6).

We can create a Security Group through various methods, including the AWS console as well as IaC (Infrastructure as Code).

Create Security Group through AWS Console

  • Log in to your AWS Console, and navigate to the VPC or EC2 service.
  • Click on Security Groups in the left navigation bar. All existing Security Groups will be listed.
  • Click on Create security group, and enter the Security Group name and description.
  • Add the protocol, port, CIDR range, etc., and click on save.

That’s it; your security group is created through the GUI. Now we will learn how to use IaC to create a Security Group in AWS using Terraform.

Create Security Group through Terraform (IaC)

In our example, we will create a Security Group for the LAMP server and will allow traffic for ports 80 (HTTP), 443 (HTTPS), 22 (SSH), and 3306 (MySQL).

We will be creating a Security Group using different methods,

Method 1

In method one, let us go the simplest way: we will have multiple blocks of ingress rules.

Files to be created

  • data.tf
  • variable.tf
  • provider.tf
  • securitygroup.tf

provider.tf: let’s update the Terraform provider information, so Terraform knows which cloud provider we are using. The provider file contains two blocks: terraform and provider.

// Terraform block, define the provider source and version
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

// Define provider 
provider "aws" {
  shared_credentials_file = var.shared_credentials_files
  region                  = var.region
  profile                 = var.profile

  default_tags {
    tags = {
      "Resource" = "Security Group TF"
    }
  }
}

// local variables, can be referenced 
locals {
  common_tags = {
    "Project"   = "TF Modules"
    "Owner"     = "CubLeaf"
    "WorkSpace" = "${terraform.workspace}"
  }
}

variable.tf: declare the variables used in the configuration. These variables are also known as input variables.

variable "region" {
  description = "Enter the region"
  default     = "us-east-1"
}

// update the path of credential file 
// replace the below path with your credential file
variable "shared_credentials_files" {
  description = "Enter the credential file"
  default     = "C:\\Users\\username\\.aws\\credentials"
}

// Enter the profile of aws you want to use
variable "profile" {
  description = "Enter the profile name"
  default     = "myprofile"
}

variable "port" {
  description = "Enter the ports to be configured in SG Ingress rule"
  default     = [80, 22, 443, 3306]
}

variable "protocol" {
  description = "Ports for LAMP Server"
  type        = map(any)
  default = { "80" = "HTTP"
    "443" = "HTTPS"
    "22"  = "SSH"
  "3306" = "MYSQL" }
}

data.tf: data sources return values from existing infrastructure. In our case we want the default VPC ID, to reference it in our security group resource; instead of hard-coding it, we use a data block to look up the existing resource. It is very similar to a traditional function's return value.

// Get the default VPC ID
data "aws_vpc" "get_vpc_id" {
  default = true
}

securitygroup.tf: the actual resource "aws_security_group" that we are going to create is declared here,

resource "aws_security_group" "lamp_securitygroup_basic" {
  name        = "LAMP Security Group Basic"
  description = "LAMP Server Security Group Basic ${terraform.workspace}" // call the workspace 
  vpc_id      = data.aws_vpc.get_vpc_id.id  // calling from the data block
  tags        = local.common_tags

  // in-bound rules
  ingress {
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port        = 443
    to_port          = 443
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port        = 3306
    to_port          = 3306
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  // outbound rules
  egress {

    from_port        = 0
    to_port          = 0
    protocol         = -1
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}

Now, as we have the code/configuration ready, let us initialize Terraform and create the Security Group by executing the plan and apply commands, as shown below.
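
Following the same plan-file workflow used earlier in this post (the plan file name is our choice):

terraform init
terraform plan -out securitygroup.tfplan
terraform apply securitygroup.tfplan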

Terraform – Dynamic Block

A dynamic block is similar to a for expression, but it creates nested blocks instead of a complex typed value. It iterates over a given complex value and generates a nested block for each element of that complex value. Unlike count, which iterates over whole resources, a dynamic block resides inside the resource block.

Dynamic blocks are supported inside the following blocks,

  • Resource
  • Data
  • Provider
  • Provisioner

Dynamic blocks can be used for repeated nested blocks such as setting, ingress, etc.

Let us consider the example of creating a security group with multiple ingress rules for ports 80, 443, 3306, and 22. Without a dynamic block, we end up copy-pasting the ingress rule for each port in the security group resource block.

Security Group (Without Dynamic Block)

// Security Group without Dynamic Block
resource "aws_security_group" "lamp_securitygroup_basic" {
  name        = "LAMP Security Group Basic"
  description = "LAMP Server Security Group Basic ${terraform.workspace}" // call the workspace 
  vpc_id      = data.aws_vpc.get_vpc_id.id
  tags        = local.common_tags
  // in-bound rules
  ingress {
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  ingress {
    from_port        = 443
    to_port          = 443
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  ingress {
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  ingress {
    from_port        = 3306
    to_port          = 3306
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  // outbound rules
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = -1
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}

The above code will work perfectly fine, but as your security group rules keep increasing, you will need to copy-paste the ingress block for each rule, which makes the code hard to maintain and update.

To overcome this issue, Terraform provides us with the dynamic block. We can write the same code using a dynamic block:

Security Group (Dynamic Block)

// Data block for VPC ID
data "aws_vpc" "get_vpc_id" {
  default = true
}
// variable declaration 
variable "port" {
  description = "Enter the ports to be configured in SG Ingress rule"
  default     = [80, 22, 443, 3306]
}
// Create Security Group using a Dynamic Block
resource "aws_security_group" "lamp_sg" {
  name        = "LAMP Security Group"
  description = "LAMP Server Security Group ${terraform.workspace}" // call the workspace 
  vpc_id      = data.aws_vpc.get_vpc_id.id
  tags        = local.common_tags
  dynamic "ingress" {
    for_each = var.port
    content {
      description      = "Security rule for inbound ${ingress.value}"
      from_port        = ingress.value // use the dynamic block's label (ingress) to fetch the value and key
      to_port          = ingress.value
      protocol         = "tcp"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
  }
  // outbound rules
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = -1
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}

In our example, we are producing a nested block of ingress rules,

  • The label of our dynamic block is "ingress". You can change the label as per your requirement; for example, to generate "setting" nested blocks, we would name the dynamic block "setting".
  • for_each – provides the complex value to iterate over.
  • Content block – defines the body of each generated block. In our example it is the ingress block, iterating over multiple ports.

For multi-level nested blocks, we can have multi-level nested dynamic blocks, i.e., a nested dynamic block within a dynamic block, as in the sketch below.
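
A minimal sketch of such nesting, assuming the same ~> 3.0 AWS provider used earlier in this post (the resource, variable, and bucket names are illustrative): each lifecycle rule generates its own nested transition blocks.

// Illustrative multi-level nested dynamic block
variable "lifecycle_rules" {
  default = [
    {
      prefix      = "logs/"
      transitions = [
        { days = 30, storage_class = "STANDARD_IA" },
        { days = 90, storage_class = "GLACIER" }
      ]
    }
  ]
}

resource "aws_s3_bucket" "demo" {
  bucket = "my-nested-dynamic-demo-bucket" // illustrative bucket name

  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules
    content {
      prefix  = lifecycle_rule.value.prefix
      enabled = true

      // nested dynamic block: one transition block per entry
      dynamic "transition" {
        for_each = lifecycle_rule.value.transitions
        content {
          days          = transition.value.days
          storage_class = transition.value.storage_class
        }
      }
    }
  }
}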