4.3.3 Configure inventory files
Ansible is run by the "Provision environment" GitHub Action to install all software dependencies on your server, configure scheduled tasks such as automated backups, and install and configure a firewall.
The pipeline also secures your servers by allowing SSH access only to your defined system administrator users, authenticated with two-factor authentication.
An automated script will create a GitHub environment to safely store application secrets and supply them to the provision action. Before running the script, you need to edit the Ansible inventory files and commit them to your repository.
Configure the Ansible inventory files for your environments
Look in the infrastructure/server-setup directory and you will see the following files:
- development.yml
- qa.yml
- staging.yml
- production.yml
- backup.yml
Starting with the development.yml file ...
If you only want to set up a single server for training purposes, development.yml is the only file you need to edit.
In the users block you can add SSH public keys for server super admins. These are individuals in your organisation who will be able to SSH into the environment for debugging purposes. Ansible will create these users and set up two-factor authentication (2FA) for them when they connect.
You must also replace the server hostname and the IP address with those of your development server.
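As a rough sketch, the relevant parts of development.yml might look like the following. The group name docker-manager-first, the user entry, and all hostnames and IPs here are illustrative assumptions; keep the structure of the file as checked out from your repository.

```yaml
all:
  vars:
    users:
      # Server super admins that Ansible will create, with 2FA enabled
      - name: jane-admin                          # illustrative user name
        ssh_keys:
          - ssh-ed25519 AAAA... jane@example.com  # paste the real public key here

# Group name is an assumption; use the one already present in your file
docker-manager-first:
  hosts:
    development:                   # replace with your server's hostname
      ansible_host: '203.0.113.5'  # replace with your server's IP address
```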
Next, observe the qa.yml file ...
The "qa" environment is a server used for quality assurance purposes.
Once again, add SSH public keys for super admins and amend the hostname and IP address as above.
Next, observe the backup.yml file ...
OpenCRVS is configured to save a backup of citizen data from production to a backup server every night, and to restore that backup onto staging. This ensures that your citizen data is securely backed up in an encrypted file and is restorable to an environment you can use for pre-production testing with real citizen data (staging).
Once again, add SSH public keys for super admins and amend the hostname and IP address for the backup server as above.
The following variable defines how many days of backups will be retained on the backup server. By default we set this to 7 days to optimise disk space on the server.
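A minimal sketch of how this might appear in backup.yml; the key name amount_of_backups_to_keep is an assumption, so use the key actually present in your checked-out file:

```yaml
all:
  vars:
    # Number of daily backups kept before the oldest is deleted (assumed key name)
    amount_of_backups_to_keep: 7
```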
The following variable allows you to customise the directory where backups will be stored.
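For illustration only; the key name backup_server_remote_target_directory and the path shown are assumptions, so check your checked-out backup.yml:

```yaml
all:
  vars:
    # Directory on the backup server where encrypted backup files are written (assumed key name)
    backup_server_remote_target_directory: /home/backup/backups
```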
Next, observe the staging.yml file ...
Add SSH public keys for super admins and amend the hostname and IP address for the staging server as above.
The following block should be edited with your backup server details so that the staging server can access the backup server programmatically.
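A sketch of what such a block could look like; the group name backups-host and the host key are assumptions, so mirror the structure already present in your staging.yml:

```yaml
backups-host:
  hosts:
    backup:                          # should match the hostname used in backup.yml (assumption)
      ansible_host: '203.0.113.40'   # replace with your backup server's IP address
```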
A staging server is used as a pre-production environment that mirrors production. Production backs up to the backup server every night, and the staging server restores the previous day's backup onto itself.
In this way we know that:
- Production data is safely backed up and restorable.
- We have an environment we can deploy to for final quality assurance, ensuring that no upgrade negatively affects production data. Indeed, we can use staging to test that database migrations run successfully on real data before deploying to production.
Note these variables:
"only_allow_access_from_addresses" secures your server from SSH access only from a whitelist of servers such as Github's meta servers (IPs used by Github Actions), a static IP address for your super server admins, or a "jump" server IP.
"enable_backups" is set to false on staging. Staging will not backup data.
"periodic_restore_from_backup" is set to true on staging. Staging will restore backed up data from production.
Finally, observe the production.yml file ...
The variables are similar to the staging environment. Notice that the "enable_backups" variable is set to true, as this environment will back up every day.
Production servers can be deployed across a cluster of 2, 3 or 5 servers. If you are deploying to a cluster then you must add a docker worker for each additional server in your cluster:
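As an illustration, a docker-workers block for a three-server cluster could look like this; the hostnames and IPs are placeholders, and the group name follows the pattern assumed for these inventories:

```yaml
docker-workers:
  hosts:
    prod-worker-1:                   # one entry per additional server in the cluster
      ansible_host: '203.0.113.21'   # placeholder IP
    prod-worker-2:
      ansible_host: '203.0.113.22'   # placeholder IP
```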
If your production cluster contains only one server, you can replace the docker-workers block like this:
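One way to express an empty group in YAML, as a sketch; confirm the exact form against your checked-out production.yml:

```yaml
# A single-server deployment has no worker nodes (sketch)
docker-workers: {}
```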