4.3.4 Provision environments
Ansible is run by the "Provision environment" GitHub Action to install all required software dependencies on your server, configure scheduled tasks such as automated backups, and install and configure a firewall. The pipeline also secures your server by allowing SSH access only to your defined system administrator users, protected by two-factor authentication (2FA).
Configure the Ansible inventory files for your environments
The first step is to configure the YAML inventory file that Ansible will use for your environment.
Look in the infrastructure/server-setup directory and you will see the following files:
development.yml
qa.yml
staging.yml
production.yml
backup.yml
Starting with the development.yml file ...
In the users block you can add SSH public keys for server super admins. These are individuals in your organisation who will be able to SSH into the environment for debugging purposes. Ansible will create these users and set up 2FA for them when they connect.
You must also replace the server hostname and the IP address for the development server.
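As an illustration, the relevant parts of development.yml might look like the sketch below. The key names, hostnames, addresses and SSH keys here are placeholders, not the authoritative schema — keep the structure your checked-out file already uses.

```yaml
# Hypothetical excerpt from development.yml — names, keys and addresses
# are placeholders; follow the structure already present in your file.
all:
  vars:
    users:
      - name: jane-doe                  # super admin username used for SSH
        ssh_keys:
          - ssh-ed25519 AAAAC3Nz...example jane@example.com
        state: present
        sudoer: true
  hosts:
    development:                        # replace with your server hostname
      ansible_host: '203.0.113.10'      # replace with your server IP address
```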
Next, observe the qa.yml file ...
Once again, add SSH public keys for super admins and amend the hostname and IP address as above.
There are some optional blocks here depending on what you need to do.
In our example setup, we are repurposing our QA server as the VPN & "jump" server.
You will see the option to add a "jump" user that doesn't require 2FA to connect, as this user will be used by the GitHub Action to SSH onward into the downstream servers via their "provision" user. You must therefore paste in the public keys for each relevant server.
If you are using your own VPN and you have your own jump server in your network, you can delete this block.
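For illustration only, such a jump user entry could look like the sketch below; the field names are assumptions, so match them to the block already present in qa.yml.

```yaml
# Hypothetical jump user — the GitHub Action connects as this user without
# 2FA, then hops onward to each downstream server's "provision" user.
jump_user:
  name: jump
  ssh_keys:
    # Paste the public key of every server that will be reached through
    # this jump host, e.g. the staging and production "provision" users.
    - ssh-ed25519 AAAAC3Nz...staging provision@staging
    - ssh-ed25519 AAAAC3Nz...production provision@production
```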
You will also notice an optional backup block. This is used by some countries that wish to repurpose the QA server as a backup server in order to reduce costs. If you have a separate backup server, you can delete this block.
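As a rough sketch only (the block already present in qa.yml is authoritative), repurposing QA as the backup destination might look like this:

```yaml
# Hypothetical sketch: the QA host doubling as the backup destination.
# Group and key names are illustrative, not the authoritative schema.
backups:
  hosts:
    qa:
      ansible_host: '203.0.113.20'   # the QA server's IP address
```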
Next, observe the staging.yml file ...
Add SSH public keys for super admins and amend the hostname and IP address as above.
A staging server is used as a pre-production environment that mirrors production. Production backs up every night to the backup server, and the staging server restores the previous day's backup onto itself.
In this way we know that:
Production data is safely backed up and restorable
We have an environment we can deploy to for final Quality Assurance to ensure that no upgrade negatively affects production data. Indeed we can use staging to test that database migrations run successfully on real data before deploying to production.
Note these variables:
"only_allow_access_from_addresses" secures your server from SSH access only from a whitelist of servers such as Github's meta servers (IPs used by Github Actions), a static IP address for your super server admins, or a "jump" server IP.
"enable_backups" is set to false on staging. Staging will not backup data.
"periodic_restore_from_backup" is set to true on staging. Staging will only restore backup data from production.
Note the following variable:
"ansible_ssh_common_args" contains the same command as SSH_ARGS used in step 3.3.2.
It must be set manually here again because, unfortunately, GitHub does not allow us to configure Ansible core so that variables can be passed into inventory files. You can remove these args if you are not using a "jump" server.
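For example, assuming your QA server at qa.example.com acts as the jump host (the hostname and user are placeholders), the variable might be set along these lines:

```yaml
# Hypothetical value — mirror whatever SSH_ARGS you configured in step 3.3.2.
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p jump@qa.example.com"'
```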
Note the backups block, which is set on the staging server so that staging has access to the backup environment to retrieve the daily backups.
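A hedged sketch of such a block, with placeholder values:

```yaml
# Hypothetical excerpt — tells staging where to fetch the daily backup from.
backups:
  hosts:
    backup:
      ansible_host: 'backup.example.com'  # or the QA server, if repurposed
```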
Next, observe the production.yml file ...
The variables are similar to the staging environment. Notice that the "enable_backups" variable is set to true, as this environment will back up every day.
Production servers can be deployed across a cluster of 2, 3 or 5 servers. If you are deploying to a cluster, you must add a docker worker for each additional server in your cluster:
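For instance, a three-server cluster could declare one manager and two workers along these lines; the group and host names are illustrative and should follow whatever production.yml already uses.

```yaml
# Hypothetical excerpt from production.yml for a 3-server cluster.
docker-manager-first:
  hosts:
    prod01:
      ansible_host: '203.0.113.40'
docker-workers:
  hosts:
    prod02:                        # one worker entry per additional server
      ansible_host: '203.0.113.41'
    prod03:
      ansible_host: '203.0.113.42'
```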
Commit the inventories and run the provision action for each environment
Amend the inventory files as appropriate and commit the files to Git.
If you are going to use the QA server as a jump server, then you should provision the QA server first.
Set up 2FA SSH access for all your super admins
Now that your servers are provisioned and A records exist for them, you can SSH in using either the IP address or the domain, plus your super admin username.
SSH'ing into a production or staging server that has been configured behind a "jump" server requires you to pass arguments appropriate to your setup.
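A minimal example, assuming the QA server at qa.example.com is your jump host and "jane-doe" is your super admin username (both placeholders):

```sh
# -J routes the connection through the jump host before reaching staging.
ssh -J jane-doe@qa.example.com jane-doe@staging.example.com
```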
You will be asked to set up 2FA with Google Authenticator. You must have the Google Authenticator app on your mobile phone.
Scan the QR code then enter the 6-digit 2FA code to access the server.
For all the questions that are asked, accept the defaults by typing "y".
You will also notice that root SSH access is now disabled as a security measure.
Now that all your servers are provisioned, you are ready to deploy OpenCRVS, provided your country configuration Docker container image has been pushed to Docker successfully.