Andoni Rodríguez



Ansible Test Playbook


A useful playbook to test Amazon EC2 instances

In my work, I need to start and stop many EC2 instances on AWS (Amazon Web Services).

Often, when I have to start an EC2 instance, I don't even know how it will end up configured, or I just want to run tests on it.

Obviously, going to the AWS console every time and repeating exactly the same steps is not a viable option.

We all know Ansible; if not, this definition taken from its GitHub repository is enough:

Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications — automate in a language that approaches plain English, using SSH, with no agents to install on remote systems.

Each time I have to start a basic EC2 instance, I run the following playbook:

$ ansible-playbook -i ec2.py create_test_ec2.yml -vv

With the "-i ec2.py" option I tell Ansible to use a special hosts file: a script that lets Ansible discover how to connect to the AWS EC2 instances through their tags. This is known as a Dynamic Inventory.
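To illustrate how this is used later (the group name here is hypothetical, not from the real project), the classic ec2.py script builds inventory groups from EC2 tags, in the form tag_<key>_<value>, so a play could target instances directly by tag:

```yaml
# Hypothetical play targeting a group generated by ec2.py.
# An instance tagged Name=test would land in the group tag_Name_test.
- name: Ping every instance tagged Name=test
  hosts: tag_Name_test
  tasks:
    - name: Check connectivity
      ping:
```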

The create_test_ec2.yml file is our playbook: a YAML file made up of simple definitions of what we want our EC2 instance to look like once it is up.

The "-vv" option increases the verbosity of the output.

# Test EC2
---
- name: Create Instance
  hosts: localhost
  connection: local
  gather_facts: False

  tasks:
    - name: Include the variables for Test EC2
      include_vars: group_vars/test

    - name: Launch EC2 instance for Test EC2
      ec2:
        assign_public_ip: yes
        image: "{{ ec2_image }}"
        instance_tags:
          tag_id: "{{ ec2_tag }}"
          Name: "{{ ec2_name }}"
        exact_count: "{{ ec2_exact_count }}"
        count_tag:
          Name: "{{ ec2_name }}"
        instance_type: "{{ ec2_instance_type }}"
        key_name: "{{ ec2_key_name }}"
        region: "{{ ec2_region }}"
        vpc_subnet_id: "{{ ec2_vpc_subnet_id }}"
        group_id: "{{ ec2_group_id }}"
        wait: yes
        monitoring: yes
      register: ec2

    - name: Add new instance to host group
      add_host: hostname={{ item.private_ip }} groupname=test_ec2
      with_items: "{{ ec2.instances }}"

    - name: Associate Elastic IP with the new instance
      ec2_eip: instance_id={{ item.id }} ip={{ ec2_elastic_ip }} region={{ ec2_region }}
      with_items: "{{ ec2.instances }}"

    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=10 timeout=60 state=started
      with_items: "{{ ec2.instances }}"

- name: Install the base system
  gather_facts: False
  hosts: test_ec2

  roles:
    # Basic Roles
    - time_zone_ubuntu
    - common_ubuntu
    - pip_ubuntu

Let me explain what each part of the file does:

- name: Create Instance
  hosts: localhost
  connection: local
  gather_facts: False

I define the name of the play and the hosts it runs on, in this case localhost, since the instance itself is reached later through the Dynamic Inventory file. I set the gather_facts option to False because I'm not interested in collecting facts (system variables) from the machine Ansible connects to. Simple.

  tasks:
    - name: Include the variables for Test EC2
      include_vars: group_vars/test

    - name: Launch EC2 instance for Test EC2
      ec2:
        assign_public_ip: yes
        image: "{{ ec2_image }}"
        instance_tags:
          tag_id: "{{ ec2_tag }}"
          Name: "{{ ec2_name }}"
        exact_count: "{{ ec2_exact_count }}"
        count_tag:
          Name: "{{ ec2_name }}"
        instance_type: "{{ ec2_instance_type }}"
        key_name: "{{ ec2_key_name }}"
        region: "{{ ec2_region }}"
        vpc_subnet_id: "{{ ec2_vpc_subnet_id }}"
        group_id: "{{ ec2_group_id }}"
        wait: yes
        monitoring: yes
      register: ec2

    - name: Add new instance to host group
      add_host: hostname={{ item.private_ip }} groupname=test_ec2
      with_items: "{{ ec2.instances }}"

    - name: Associate Elastic IP with the new instance
      ec2_eip: instance_id={{ item.id }} ip={{ ec2_elastic_ip }} region={{ ec2_region }}
      with_items: "{{ ec2.instances }}"

    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 delay=10 timeout=60 state=started
      with_items: "{{ ec2.instances }}"

These tasks inject variables into the playbook from an external file and use the Ansible ec2 module to launch the instance. I then collect its private IP for later use, associate an Elastic IP with it, and wait until the SSH service is ready.
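For reference, the group_vars/test file could look something like this (all values here are made-up placeholders, not the ones from the real project):

```yaml
# group_vars/test -- hypothetical example values
ec2_image: ami-xxxxxxxx          # AMI to launch (e.g. an Ubuntu image)
ec2_tag: test
ec2_name: test_ec2
ec2_exact_count: 1               # keep exactly one instance with this tag
ec2_instance_type: t2.micro
ec2_key_name: my_ssh_key
ec2_region: eu-west-1
ec2_vpc_subnet_id: subnet-xxxxxxxx
ec2_group_id: sg-xxxxxxxx        # security group for the instance
ec2_elastic_ip: 203.0.113.10     # placeholder IP from the documentation range
```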

- name: Install the base system
  gather_facts: False
  hosts: test_ec2

  roles:
    # Basic Roles
    - time_zone_ubuntu
    - common_ubuntu
    - pip_ubuntu

Finally, I connect to the EC2 instance I just started and run the roles I consider basic, so I can begin whatever tests I find relevant. These roles set the instance's time zone to UTC and install the packages I always use (vim, htop, git, etc ...)
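As a hint of what these roles contain (this is a sketch, not the actual role from the repository), time_zone_ubuntu could be as small as a single tasks/main.yml:

```yaml
# roles/time_zone_ubuntu/tasks/main.yml -- hypothetical sketch
# Uses the timezone module (Ansible 2.2+); on older versions
# a command task running "timedatectl set-timezone Etc/UTC" would do.
- name: Set the time zone to UTC
  timezone:
    name: Etc/UTC
```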

To see what each role does, as well as the structure of the project, you can check the code on GitHub.

Once the tests are finished, we can run the following playbook, which deletes the instance we created:

$ ansible-playbook -i ec2.py delete_test_ec2.yml -vv
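The real delete playbook lives in the repository; a minimal version could simply ask the ec2 module for zero instances matching the test tag, which terminates any that exist. A sketch, assuming the same variables as before:

```yaml
# delete_test_ec2.yml -- hypothetical minimal version
---
- name: Terminate the test instance
  hosts: localhost
  connection: local
  gather_facts: False

  tasks:
    - name: Include the variables for Test EC2
      include_vars: group_vars/test

    - name: Ensure no instances with the test tag remain
      ec2:
        exact_count: 0             # terminate every matching instance
        count_tag:
          Name: "{{ ec2_name }}"
        image: "{{ ec2_image }}"   # the module expects an image even here
        region: "{{ ec2_region }}"
```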

And that's it. I have omitted from the post things I consider basic, for example: to run these playbooks against AWS you need your credentials set up, as well as the Ansible roles_path, among other details.

Regards.