Ansible – Ansible playbook dry run

A dry run is the best way to preview the possible outcome of configuration management code. The Ansible playbook dry-run command lets us check the potential effects on our infrastructure/applications before actually executing the playbook.

Syntax

ansible-playbook pkg_install.yml --check

The ansible-playbook --check command only reports the changes the playbook would make, without actually applying them.
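As a concrete target for the dry run, here is a minimal sketch of what a pkg_install.yml playbook might look like (the package name, hosts pattern, and use of apt are assumptions for illustration):

```yaml
# pkg_install.yml - minimal illustrative sketch (package name assumed)
- name: Install a package
  hosts: all
  become: true
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
```

Running ansible-playbook pkg_install.yml --check against this playbook reports whether the apt task would change anything, without installing the package.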

Ansible – Command to check playbook syntax

Ansible playbooks are one of the best tools available for system/server configuration management, but YAML syntax errors often get in the way of writing them.

Ansible provides a command to validate the playbook's syntax for errors prior to running it.

Syntax

ansible-playbook pkg_install.yml --syntax-check

Ansible syntax-check throws an error when the playbook file contains a stray space; for a syntactically correct playbook, the command completes without reporting any errors.

AWS EC2 – Unable to install/download packages from amazon repo to EC2 instance

We may have faced this issue when connecting to the Amazon repo to download/install packages such as MySQL, Apache, Nginx, etc.

We may get a connection time-out error when we run the sudo apt install mysql-server command. The error can occur for any package; in our example, we are installing MySQL server.

Error

root@ip-10-9-9-58:/home/ubuntu# sudo apt install mysql-server

Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.91.65.63), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.207.133.243), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.87.19.168), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.165.17.230), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (3.87.126.146), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (3.209.10.109), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (18.232.150.247), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.201.250.36), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.237.137.22), connection timed out
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.73.36.184), connection timed out

The issue may occur in private or public instances. The most common solution is to first check the security groups assigned to your instance.

Steps

Log in to your AWS console > EC2 > Security Groups and select the security group assigned to your instance.

Open the Outbound rules tab, then click on Edit outbound rules.

Ensure an outbound rule allowing the HTTP TCP protocol on port 80 is present in the assigned security group.

If the rule for HTTP TCP port 80 is missing, add a new rule allowing HTTP (TCP port 80) to destination 0.0.0.0/0 and save the changes.

Now try to install the package again; it should connect over the internet and install successfully.

Restricted Outbound Access

To solve the issue above, we allowed all outbound traffic. In some cases, due to security restrictions, an organization may not allow opening outbound traffic to all IP ranges.

Best practice says we should grant the minimum permissions necessary.

To accomplish our security goals, we can restrict outbound traffic to specific Amazon repo mirror IPs.

As we saw in the above error, apt tries several Amazon mirrors to download and install the package. We can copy any of these mirror IP addresses and use them to restrict outbound traffic in our security group.

root@ip-10-9-9-58:/home/ubuntu# sudo apt install mysql-server

Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.91.65.63), connection timed out

In this example, we will open outbound only for IP 52.91.65.63.

Login to your AWS console > EC2 > Security Groups, select the assigned security group, and click on the Edit Outbound rules tab, then on Edit outbound rules.

Select the HTTP TCP port 80 rule and change 0.0.0.0/0 to 52.91.65.63/32. Save the changes; this restricts the outbound rule for HTTP TCP port 80 to the single IP address 52.91.65.63.

Note: Security groups do not accept a bare IP address; even a single address must be entered as a CIDR block.

In our example, we appended the /32 suffix to our single IP address.

You can change the CIDR block range based on your IP requirements.
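As a quick sanity check that a /32 suffix covers exactly one address, Python's ipaddress module (assuming python3 is available on your workstation) can count the addresses in a CIDR block:

```shell
# Count the addresses covered by a CIDR block (illustrative sanity check).
python3 -c 'import ipaddress; print(ipaddress.ip_network("52.91.65.63/32").num_addresses)'   # prints 1
# A wider block such as 52.91.65.0/24 would cover 256 addresses instead.
```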

AWS EC2 – Windows SSH – Permissions for public / SSH key are too open

We may all have encountered bad-permissions errors for the key file while accessing a Linux/Ubuntu/Unix box from a Windows 10 system.

You may face this issue when using a new set of keys.

Error

E:\AWS\key>ssh -i ./my-key.pem ubuntu@10.0.0.1
The authenticity of host '10.0.0.1 (10.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:HxAA3hSzLSd/TRcZtXUYrjfZ0C9jL7fXmAZigM5p3+I.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.0.0.1' (ECDSA) to the list of known hosts.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions for './my-key.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "./my-key.pem": bad permissions
ubuntu@10.0.0.1: Permission denied (publickey).

The fix is pretty simple: we just need to set the right permissions on the .pem (private key) file. A common mistake while fixing this issue is granting the permission to the wrong user.

We first need to confirm the user account we use to log in to our Windows system.

To verify the user details run the below command in your command prompt,

E:\AWS\key>whoami
desktop-4455tbos\myuser

Copy the user details; we will need them in a later step.

Steps to set the pem (private key) file permissions

Browse and navigate to your key file's directory.

Right-click on the key file and click on Properties.

Select the Security tab and click on Advanced.

On the Advanced Security Settings panel, click on “Disable inheritance”.

On the Block Inheritance prompt, select “Remove all inherited permissions from this object”.

All existing permissions will be removed; ensure the permissions list now has zero entries.

Now click on the “Add” button; you should get a pop-up to add permissions and users. Click on “Select a principal”.

On the “Select User or Group” panel, enter the username we noted earlier and click on “Check Names”.

You should see your selected username. Once it is validated, click OK.

Under Basic permissions, check “Full control” and apply the changes.

You should now see your username with full permissions on the key file's Security tab.
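As an aside, on Linux, macOS, or WSL the same "bad permissions" error is fixed with a single chmod; this sketch uses a stand-in file name:

```shell
# On Linux/macOS/WSL the same error is fixed by restricting the key file
# to its owner. demo-key.pem is a stand-in name for your real key file.
touch demo-key.pem
chmod 400 demo-key.pem          # owner read-only, no access for group/others
stat -c '%a' demo-key.pem       # prints 400 (GNU stat)
```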

Now, let us connect to ssh,

ssh -i ./my-key.pem ubuntu@<your instance IP>
E:\keys>ssh -i ./my-key.pem ubuntu@10.0.0.1
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-1020-aws x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

We should be able to connect to our instance.

EC2 – Instance user data fail – [WARNING]: Failed to run module scripts-user

Recently we faced an issue while executing user_data on our EC2 instance. This instance was created using an Ubuntu 20.04 AMI. The user_data failed to execute its commands without any specific error message.

We tried the different solutions available online, such as:

  • Added / Removed #!/usr/bin/bash -x
  • Added / Removed #!/usr/bin/bash -v -e
  • Changed the interpreter of bash from #!/usr/bin/bash to #!/bin/bash

None of the above fixes worked for us, so we troubleshot it further by analyzing the user_data logs on the server.

On an Ubuntu server, the logs of the user_data execution can be found in the /var/log/cloud-init-output.log file.

The error recorded in the file /var/log/cloud-init-output.log was as below,

Cloud-init v. 22.2-0ubuntu1~22.04.3 running 'modules:final' at Fri, 30 Sep 2022 04:56:41 +0000. Up 24.86 seconds.
2022-09-30 04:56:41,635 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
2022-09-30 04:56:41,635 - util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_user.py'>) failed
Cloud-init v. 22.2-0ubuntu1~22.04.3 finished at Fri, 30 Sep 2022 04:56:41 +0000. Datasource DataSourceEc2Local.  Up 25.26 seconds

Based on the errors recorded in the cloud-init-output.log file, we inspected the user_data recorded on the server. The user_data details on the host server can be located at,

  • The cloud-config file is located at /var/lib/cloud/instance/cloud-config.txt
  • The user data can be found at, /var/lib/cloud/instance/scripts/
  • On certain instances, you may see symlinks with the instance id at the above script location. The user data is stored in files named part_001, etc.

Inspection of the user data captured on the host server showed leading spaces on the first line, before the bash interpreter #!/bin/bash.

These leading spaces were the culprit in our case.

To fix the issue, we removed those spaces, re-published the user_data, and launched a new instance. The new instance executed the user_data without any failures.

The user_data captured/recorded on the newly launched EC2 instance no longer has any leading spaces.
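A correct user_data script starts with the shebang in the very first column of the very first line; a minimal sketch (the package being installed here is an assumption for illustration):

```shell
#!/bin/bash
# No characters (not even spaces) may precede the shebang above.
apt-get update -y
apt-get install -y nginx
```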

Ansible – Variables

Variables are containers for storing values. Like any programming language, Ansible supports variables.

Ansible variable names should start with a letter and can contain letters, numbers, and underscores.

Types of Ansible variables

  • Default Variables
  • Inventory Variables
  • Facts and Local Facts
  • Registered Variables

Default Variables are variables defined by Ansible itself. Ansible provides the following default variables,

  • inventory_hostname
  • inventory_hostname_short
  • groups, and groups.key (keys of the group dictionary)

Note: Host variables have higher priority than group variables.
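To illustrate the precedence note above, here is a sketch of an INI inventory where a host-level variable overrides a group-level one (all hosts, addresses, and variable names are made up for illustration):

```ini
# inventory.ini - illustrative sketch; hosts and vars are made up
[webservers]
# the host-level var overrides the group default below, so web1 gets 8080
web1 ansible_host=10.0.0.11 http_port=8080
web2 ansible_host=10.0.0.12

[webservers:vars]
# group-level default; applies to web2, which has no host-level override
http_port=80
```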

Ansible – Debug Module

The debug module is used to display messages. It executes on the control host, not on the remote nodes.

Syntax, for debug module using msg attribute.

ansible localhost -m debug -a "msg='this is a test message'"

Syntax, for the debug module using the var attribute. We are using the Ansible default variable “inventory_hostname” in the below example.

ansible localhost -m debug -a "var=inventory_hostname"

Ansible allows us to join a variable and a string in the “msg” attribute; the variable name is passed in double braces {{ }}.

Syntax

ansible localhost -m debug -a "msg='you are using hostname {{inventory_hostname}}'"
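The same debug usage inside a playbook might be sketched as follows (the file name is an assumption):

```yaml
# debug_demo.yml - illustrative sketch of the debug module in a playbook
- name: Debug module demo
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Print a plain message
      debug:
        msg: "this is a test message"

    - name: Join a string with a variable
      debug:
        msg: "you are using hostname {{ inventory_hostname }}"
```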

Ansible – Inventory

Ansible Inventory is a collection of Nodes/hosts/servers/systems. Ansible inventories are used when we have to execute ansible commands/playbooks on multiple hosts or nodes.

We can either have individual hosts added to the inventory file or we can group these hosts like DB servers and web servers.

Grouping the Host/Nodes in inventory files allows us to run commands/playbooks on a specific set of servers.

Types of Inventory

  • Static Inventory
  • Dynamic Inventory

Static Inventory consists of a list of static hosts defined in inventory files, these are hardcoded and cannot be changed during run time.

Dynamic Inventory is used when we have a dynamic environment of Hosts/Node and we are not aware of the host IP address etc. An example of a Dynamic Environment is a cloud environment.

Dynamic Inventory is created using scripts, written in Python, shell, etc.

Ansible has pre-built dynamic inventory scripts for certain cloud and infrastructure environments,

  • AWS EC2
  • GCP Compute Service
  • OpenStack
  • Jail, Spacewalk, etc.
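Newer Ansible releases replace the ec2.py script with the aws_ec2 inventory plugin (from the amazon.aws collection); a minimal plugin configuration file might look like this (the region, tag filter, and grouping key are assumptions for illustration):

```yaml
# aws_ec2.yml - minimal aws_ec2 inventory plugin sketch
plugin: aws_ec2
regions:
  - us-east-1
filters:
  tag:Environment: dev
keyed_groups:
  - key: tags.Name
    prefix: tag
```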

Syntax

ansible -i <inventory_file> <group_name1> <group_name2> -m setup 

The default Ansible inventory file is defined in the ansible.cfg file.
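For example, the default inventory location can be pointed at a custom file in ansible.cfg (the path here is an assumption):

```ini
# ansible.cfg - default inventory sketch
[defaults]
inventory = ./inventory.ini
```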

Syntax, Dynamic inventory file

ansible -i ec2.py <group_name> -m ping 

Raw Module

Ansible allows running ad-hoc commands on hosts/nodes that do not have Python installed. We need to use the raw module to execute such commands.

Syntax

ansible localhost -m raw -a ping
ansible localhost -m raw -a uptime

Ansible – Facts

Ansible Facts are used to fetch information about the managed nodes. Facts can provide a wide range of information about nodes/servers/systems, such as:

  • Processor details
  • Memory
  • Python version
  • OS release, distribution, etc.

Ansible’s setup module is used to gather these facts/information.

Facts are gathered both in Ansible ad-hoc commands and in Ansible playbooks. In ad-hoc commands, we need to run the setup module explicitly, whereas in playbooks facts are gathered by default.

Syntax

ansible localhost -m setup

By default, the setup module displays all the information about your node/system.

The facts information will be displayed in JSON/python dictionary format.

We may not be interested in such a large volume of node/system information and may want only specific details about the node. Ansible provides an option to fetch only the required information from the facts.

We can retrieve specific data from the facts by passing the filter argument to the setup module ad-hoc command.

Syntax

ansible localhost -m setup -a "filter=ansible_python_version"
ansible localhost -m setup -a "filter=ansible_distribution"

Types of Ansible Facts

  • Default Facts, the default system information fetched using the setup module are called default facts. These facts do not provide details of any third-party application installed on the node, like MySQL, Splunk, etc.
  • Custom Facts are created to gather information about the third-party applications/services installed on your node/system, like Splunk, Mysql server, etc.

Custom Facts are required when we need information about third-party applications installed on the host or nodes. This information is like version details of third-party applications Splunk, Apache, MySQL server, etc.

Steps to create Custom facts

  1. Create the /etc/ansible/facts.d directory on the node.
  2. Create your custom facts file with the extension .fact under the /etc/ansible/facts.d directory.
  3. Custom facts files can be written in Python or shell script, but must be saved with a .fact file extension.
  4. Set executable permission on the fact files.
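A minimal executable fact file might look like the sketch below (the file name appinfo.fact and the keys are made up for illustration); an executable fact must print valid JSON on stdout:

```shell
#!/bin/bash
# Hypothetical custom fact file: /etc/ansible/facts.d/appinfo.fact
# Must be executable; setup exposes its output as ansible_local.appinfo
echo '{"app_name": "myapp", "app_version": "1.2.3"}'
```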

Syntax to execute custom facts

ansible localhost -m setup -a "filter=ansible_local"

Ansible – Command vs Shell Module

Both the shell module and the command module are used to execute binary commands.

The command module is the default module for executing binary commands; if we do not specify a module name in our ad-hoc command, Ansible uses the command module by default.

The command module executes commands without processing them through a shell.

The command module will not recognize shell variables like $HOME.

Stream operators like <, >, &, and | will not work with the command module.

The command module is more secure compared to the shell module.

For example, both shell and command module will have the same outputs,

ansible localhost -m shell -a "uptime"
ansible localhost -m command -a "uptime"

A stream operator passed to the shell and command modules will show different results: the command module doesn't recognize stream operators and will throw an error, whereas the same command works fine with the shell module.

The command module will throw an error, as it does not recognize the > stream operator:

ansible localhost -m command -a "uptime > uptime.txt"

The shell module handles the redirection:

ansible localhost -m shell -a "uptime > uptime.txt"