Ansible – Inventory

An Ansible inventory is a collection of the nodes/hosts/servers that Ansible manages. Inventories are used when we have to execute Ansible commands or playbooks on multiple hosts or nodes.

We can either add individual hosts to the inventory file or group related hosts together, for example DB servers and web servers.

Grouping the hosts/nodes in inventory files allows us to run commands/playbooks against a specific set of servers.
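For illustration, a minimal static inventory file (the hostnames and IPs below are placeholders) could look like this:

[webservers]
web1.example.com
web2.example.com

[dbservers]
10.0.0.21
10.0.0.22

With this file saved as hosts.ini, running ansible -i hosts.ini dbservers -m ping targets only the two DB servers.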

Types of Inventory

  • Static Inventory
  • Dynamic Inventory

A static inventory consists of a list of hosts defined in an inventory file. These entries are hardcoded and cannot change at run time.

A dynamic inventory is used when we have a dynamic environment of hosts/nodes and do not know the host IP addresses in advance. A typical example of such a dynamic environment is a cloud environment.

A dynamic inventory is created using scripts, typically written in Python or shell.
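At its core, a dynamic inventory script is just an executable that prints JSON when Ansible invokes it with --list. A minimal shell sketch (the group name and IPs are made up for illustration):

#!/bin/sh
# Ansible calls the script with --list (and --host <name> for per-host variables);
# returning an empty _meta.hostvars tells Ansible it can skip the --host calls.
if [ "$1" = "--list" ]; then
  echo '{"webservers": {"hosts": ["10.0.0.11", "10.0.0.12"]}, "_meta": {"hostvars": {}}}'
else
  echo '{}'
fi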

Ansible ships with pre-built dynamic inventory scripts for several cloud environments, including:

  • AWS EC2
  • GCP Compute Engine
  • OpenStack
  • Jail, Spacewalk, etc.

Syntax

ansible -i <inventory_file> <group_name1>:<group_name2> -m setup

The default Ansible inventory file is defined in the ansible.cfg file (the inventory setting; /etc/ansible/hosts if unset).
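For example, pointing Ansible at a project-local inventory is a one-line setting in ansible.cfg (the file name hosts.ini is just an illustrative choice):

[defaults]
inventory = ./hosts.ini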

Syntax (dynamic inventory file)

ansible -i ec2.py <group_name> -m ping 

Raw Module

Ansible allows running ad-hoc commands on hosts/nodes that do not have Python installed. Because regular Ansible modules need Python on the target, we must use the raw module for such hosts; it simply executes the command over SSH.

Syntax

ansible localhost -m raw -a "hostname"
ansible localhost -m raw -a "uptime"
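A common real-world use of raw is bootstrapping Python itself on a fresh host so that regular modules can run afterwards (the host alias and package manager are assumptions; adjust for your distro):

ansible newhost -m raw -a "sudo apt-get update && sudo apt-get install -y python3"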

EC2 – SSH access – Permission denied (publickey)

Error: ec2-user@10.0.0.10: Permission denied (publickey).

Issue: We recently hit an issue where we could not access an EC2 instance through SSH and received the "Permission denied (publickey)" error.

The permissions on the .pem file were correct, i.e. 600. On further investigation, we found the problem was the ownership of the /home/ec2-user/.ssh/authorized_keys file: by default it should be owned ec2-user:ec2-user or ubuntu:ubuntu, depending on the OS you are using. In our case the ownership of the file had been changed, which blocked SSH access to the EC2 instance.
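Once you have access (or on a healthy reference instance), the expected state is easy to verify; the comment shows what a correctly configured Amazon Linux instance should report:

ls -l /home/ec2-user/.ssh/authorized_keys
# expected: -rw------- 1 ec2-user ec2-user ... authorized_keys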

Solution: There are multiple fixes for such issues:

  • Access the EC2 instance through SSM Session Manager in the AWS console and update the ownership manually.
  • Run an SSM command on the affected instance to update the ownership of the impacted file, as sketched below.
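For the second option, the SSM command looks roughly like this (the instance ID is a placeholder):

aws ssm send-command --instance-ids i-0abcd1234 \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["chown ec2-user:ec2-user /home/ec2-user/.ssh/authorized_keys"]'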

Both of the above solutions require the SSM agent to be installed on the impacted instance; in our case, the impacted instance didn't have the SSM agent installed.
To fix the issue, we used the approach below, since we could not use SSM commands or Session Manager on the impacted instance.

  • We took a snapshot of the impacted instance's volume as a backup.
  • We stopped the instance and updated its user_data with the following content:

Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
# restore the default ownership and permissions so SSH key authentication works again
chown root:root /home
chmod 755 /home
chmod 600 /home/ec2-user/.ssh/authorized_keys
chown ec2-user:ec2-user /home/ec2-user -R

--//--
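If you prefer the CLI to the console for the snapshot and user_data steps, a rough sketch (the IDs and file name are placeholders; depending on your CLI version, the user_data file may need to be base64-encoded first):

aws ec2 create-snapshot --volume-id vol-0abcd1234 --description "pre-rescue backup"
aws ec2 stop-instances --instance-ids i-0abcd1234
aws ec2 wait instance-stopped --instance-ids i-0abcd1234
aws ec2 modify-instance-attribute --instance-id i-0abcd1234 \
    --attribute userData --value file://user_data.txt
aws ec2 start-instances --instance-ids i-0abcd1234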

  • The above content updates the user_data as well as the cloud-config file.
  • The cloud-config file is located at /var/lib/cloud/instance/cloud-config.txt.
  • We also need to ensure that all existing user data is deleted or cleaned up, because the config change above may trigger execution of the existing user_data as well as the new one, corrupting your application.
  • The old user data can be found at /var/lib/cloud/instance/scripts/.
  • On certain instances, you may see symlinks with the instance ID at the above location; the user data is stored in files named part_001, etc.
  • If you don't have any existing user data, just start the instance again: the new user_data will kick in, update the ownership of /home/ec2-user/.ssh/authorized_keys, and let you access the instance through SSH.

Follow the steps below only when you have existing user data that could impact your application.

  • To delete the old user data, create another instance and attach the impacted instance's volume to it.
  • Note: the new instance and the volume must be in the same Availability Zone.
  • Access the new instance through SSH and mount the volume using the commands below:

lsblk                  # lists all block devices; identify the attached volume
mkdir -p /mnt
mount /dev/xvdf /mnt   # device name may differ, check the lsblk output

  • The existing user_data now sits under the mounted volume at /mnt/var/lib/cloud/instance/scripts/; delete it or move it to some other location.
  • Detach the volume and reattach it to the impacted instance, then start the instance.
  • The new user_data will kick in, update the ownership of /home/ec2-user/.ssh/authorized_keys, and allow you to log in to the impacted instance.
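The detach/reattach step can likewise be scripted (the IDs and device name are placeholders; the device name should match what the impacted instance used for its root volume):

aws ec2 detach-volume --volume-id vol-0abcd1234
aws ec2 attach-volume --volume-id vol-0abcd1234 \
    --instance-id i-0abcd1234 --device /dev/xvda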

AMI Types (EBS vs Instance Store)

  • Every AMI is categorized as either backed by Amazon EBS or backed by an instance store.
  • EBS-backed – The root device for an instance launched from the AMI is an Amazon EBS volume created from an Amazon EBS snapshot.
  • Instance store-backed – The root device for an instance launched from the AMI is an instance store volume created from a template stored in Amazon S3.
  • An AMI can be selected based on:
    • Region (and Availability Zone)
    • Operating system – AWS provides a variety of operating systems, which we can select and install as per our requirements.
    • Architecture (32-bit or 64-bit)
    • Launch permissions
    • Storage for the root device (root device volume):
      • Instance store (ephemeral storage)
      • EBS-backed volume
  • EBS vs Instance Store
    • Instance store volumes are sometimes called ephemeral storage.
    • Instance store-backed instances cannot be stopped; you lose the data on the instance if it is terminated or the underlying host fails.
    • EBS-backed instances can be stopped without data loss.
    • You can reboot both types of instance without data loss.
    • Upon termination, the root volume is deleted by default in both cases, but for EBS-backed instances we can tell AWS to keep the root device volume.
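For example, keeping the EBS root volume after termination is a setting on the block device mapping at launch; a rough CLI sketch (the AMI ID, instance type, and device name are placeholders):

aws ec2 run-instances --image-id ami-0abcd1234 --instance-type t3.micro \
    --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"DeleteOnTermination":false}}]'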