4.3.3 Configure inventory files

Ansible is run by the "Provision environment" Github Action to install all the software dependencies on your server, configure scheduled tasks such as automated backups, and install and configure a firewall.

The pipeline will also secure your servers by allowing SSH access only to your defined system administrator users, using two-factor authentication.

An automated script will create a Github environment to safely store application secrets and supply them to the provision action. Before running the script, you need to edit the Ansible inventory files and commit them to your repository.

Configure the Ansible inventory files for your environments

Look in the infrastructure/server-setup directory and you will see the following files:

development.yml

qa.yml

staging.yml

production.yml

backup.yml

Starting with the development.yml file ...

If you just want to set up a single server for training purposes, development.yml is the only file you need to edit.

In the users block you can add SSH public keys for server super admins. These are individuals in your organisation who will be able to SSH into the environment for debugging purposes. Ansible will create these users and set up two-factor authentication (2FA) for them when they connect.

You must also replace the server hostname and the IP address for the development server.

all:
  vars:
    users:
      # @todo this is where you define which development team members have access to the server.
      # If you need to remove access from someone, do not remove them from this list, but instead set their state: absent
      - name: <REPLACE WITH A USERNAME FOR YOUR SSH USER>
        ssh_keys:
          - <REPLACE WITH THE USER'S PUBLIC SSH KEY>
        state: present
        sudoer: true
    enable_backups: false

docker-manager-first:
  hosts:
    <REPLACE WITH THE SERVER HOSTNAME>: # @todo set this to be the hostname of your target server
      ansible_host: '<REPLACE WITH THE IP ADDRESS>' # @todo set this to be the IP address of your server
      data_label: data1 # for manager machines, this should always be "data1"

# Development servers are not configured to use workers.
docker-workers: {}
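
For example, if a super admin leaves your organisation, keep their entry in the users list and change their state. A minimal sketch (the username shown is a placeholder):

users:
  - name: former-admin # placeholder username for an admin whose access is being revoked
    ssh_keys:
      - <THE USER'S EXISTING PUBLIC SSH KEY>
    state: absent # Ansible revokes this user's access on the next provision run
    sudoer: true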

Next, observe the qa.yml file ...

The "qa" environment is a server used for quality assurance purposes.

Once again, add SSH public keys for super admins and amend the hostname and IP address as above.
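
The structure mirrors development.yml. A minimal sketch of an edited qa.yml, assuming the qa environment is a single server with no workers (hostname, IP address and user details are placeholders):

all:
  vars:
    users:
      - name: <REPLACE WITH A USERNAME FOR YOUR SSH USER>
        ssh_keys:
          - <REPLACE WITH THE USER'S PUBLIC SSH KEY>
        state: present
        sudoer: true

docker-manager-first:
  hosts:
    <REPLACE WITH THE SERVER HOSTNAME>:
      ansible_host: '<REPLACE WITH THE IP ADDRESS>'
      data_label: data1 # for manager machines, this should always be "data1"

docker-workers: {}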

Next, observe the backup.yml file ...

OpenCRVS is configured to save a backup of citizen data from production to a backup server every night, and to restore that backup onto staging. This ensures that your citizen data is securely backed up in an encrypted file and is restorable to an environment that you can use for pre-production testing with real citizen data (staging).

Once again, add SSH public keys for super admins and amend the hostname and IP address for the backup server as above.

The following variable defines how many days of backups will be retained on the backup server. By default we set this to 7 days to optimise disk space on the server.

amount_of_backups_to_keep

The following variable allows you to customise the directory where backups will be stored.

backup_server_remote_target_directory

all:
  vars:
    # @todo how many days to store backups for?
    amount_of_backups_to_keep: 7
    backup_server_remote_target_directory: /home/backup/backups
    users:
      # @todo this is where you define which development team members have access to the server.
      # If you need to remove access from someone, do not remove them from this list, but instead set their state: absent
      - name: <REPLACE WITH A USERNAME FOR YOUR SSH USER>
        ssh_keys:
          - <REPLACE WITH THE USER'S PUBLIC SSH KEY>
        state: present
        sudoer: true

backups-host:
  hosts:
    <REPLACE WITH THE SERVER HOSTNAME>: # @todo set this to be the hostname of your target server
      ansible_host: '<REPLACE WITH THE IP ADDRESS>'

Next, observe the staging.yml file ...

Add SSH public keys for super admins and amend the hostname and IP address for the staging server as above.

The following block should be edited with your backup server details so that the staging server can access the backup server programmatically.

...
backups:
  hosts:
    <REPLACE WITH THE BACKUP SERVER HOSTNAME>: # @todo set this to be the hostname of your backup server
      ansible_host: '<REPLACE WITH THE BACKUP SERVER IP ADDRESS>' # set this to be the IP address of your backup server
      # Written by provision pipeline. Assumes "backup" environment
      # exists in Github environments
      ansible_ssh_private_key_file: /tmp/backup_ssh_private_key

A staging server is used as a pre-production environment that mirrors production. Production backs up to the backup server every night, and the staging server restores the previous day's backup onto itself.

In this way we know that:

  1. Production data is safely backed up and restorable

  2. We have an environment we can deploy to for final Quality Assurance to ensure that no upgrade negatively affects production data. Indeed we can use staging to test that database migrations run successfully on real data before deploying to production.

Note these variables:

"only_allow_access_from_addresses" secures your server from SSH access only from a whitelist of servers such as Github's meta servers (IPs used by Github Actions), a static IP address for your super server admins, or a "jump" server IP.

"enable_backups" is set to false on staging. Staging will not backup data.

"periodic_restore_from_backup" is set to true on staging. Staging will restore backed up data from production.

Finally, observe the production.yml file ...

The variables are similar to the staging environment. Notice that the "enable_backups" variable is set to true, as this environment will back up every day.

enable_backups: true

Production servers can be deployed across a cluster of 2, 3 or 5 servers. If you are deploying to a cluster then you must add a docker worker for each additional server in your cluster:

docker-workers:
  hosts:
    <REPLACE WITH THE WORKER SERVER HOSTNAME>:
      ansible_host: '<REPLACE WITH THE WORKER SERVER IP>'
      data_label: data2 # Note: this must be unique for every worker
      ansible_ssh_common_args: ''
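
For example, a hypothetical three-server production cluster (one manager and two workers) would declare two workers, each with its own unique data label:

docker-workers:
  hosts:
    <REPLACE WITH THE FIRST WORKER HOSTNAME>:
      ansible_host: '<REPLACE WITH THE FIRST WORKER IP>'
      data_label: data2
      ansible_ssh_common_args: ''
    <REPLACE WITH THE SECOND WORKER HOSTNAME>:
      ansible_host: '<REPLACE WITH THE SECOND WORKER IP>'
      data_label: data3 # each worker's data_label must be unique within the cluster
      ansible_ssh_common_args: ''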

If your production cluster contains only one server, you can replace the docker-workers block like this:

# This production cluster is configured to only use one server
docker-workers: {}

Commit the inventory file changes to your Github repository before proceeding.
