4.3.4 Provision environments


Last updated 1 year ago

Ansible is run by the "Provision environment" Github Action to install all required software dependencies on your servers, configure scheduled tasks such as automated backups, and install and configure a firewall. The pipeline will also secure your servers by only allowing your defined system administrator users SSH access, using two-factor authentication.

Configure the Ansible inventory files for your environments

The first step is to configure the YAML inventory files that Ansible will use for your environments.

Look in the infrastructure/server-setup directory and you will see the following files:

development.yml

qa.yml

staging.yml

production.yml

backup.yml

Starting with the development.yml file ...

In the users block you can add SSH public keys for server super admins. These are individuals in your organisation who will be able to SSH into the environment for debugging purposes. Ansible will create these users and set up 2FA authentication for them when they connect.

You must also replace the server hostname and the IP address for the development server.

all:
  vars:
    users:
      # If you need to remove access from someone, do not remove them from this list, but instead set their state: absent
      - name: <REPLACE WITH THE SSH USERNAME>
        ssh_keys:
          - <REPLACE WITH SSH PUBLIC KEY FOR THE USER>
        state: present
        sudoer: true

docker-manager-first:
  hosts:
    <REPLACE WITH THE SERVER HOSTNAME>:
      ansible_host: '<IP ADDRESS FOR THE SERVER>'
      data_label: data1

# QA and staging servers are not configured to use workers.
docker-workers: {}
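If a super admin does not yet have an SSH keypair to add to the users block, one can be generated with the standard OpenSSH ssh-keygen tool. The file name and email comment below are just examples:

```shell
# Ensure the .ssh directory exists, then generate an Ed25519 keypair.
# The private key stays with the admin; the .pub contents go into
# the ssh_keys list in the inventory file.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -C "jane.doe@example.gov" -f ~/.ssh/opencrvs_admin -N ""

# Print the public key, ready to paste into the inventory file
cat ~/.ssh/opencrvs_admin.pub
```

Paste the single-line contents of the .pub file as a list item under ssh_keys for that user.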

Next, observe the qa.yml file ...

Once again, add SSH public keys for super admins and amend the hostname and IP address as above.

There are some optional blocks here depending on what you need to do.

In our example setup, we are repurposing our QA server as the VPN & "jump" server.

You will see the option to add a "jump" user that doesn't require 2FA to connect, as this user is used by the Github action to SSH into downstream servers via their "provision" user. You must therefore paste in the "provision" user's public key for each relevant server.

If you are using your own VPN and you have your own jump server in your network, you can delete this block.

- name: jump
  state: present
  sudoer: false
  two_factor: false
  ssh_keys:
    - <Here you must paste the public keys for the provision user for other servers>

You will also notice the block below. This is used by some countries who wish to repurpose the QA server as a backup server in order to reduce costs. If you have a separate backup server, you can delete this block.

additional_keys_for_provisioning_user
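As a hedged sketch only — the exact placement of this variable depends on your countryconfig version, so check your own inventory files — the block might look like this when granting another server's "provision" user access to this repurposed backup server:

```yaml
# Hypothetical example: allow the production server's "provision" user
# to also authenticate to this server when it doubles as a backup target.
additional_keys_for_provisioning_user:
  - <REPLACE WITH THE PUBLIC KEY OF THE PRODUCTION SERVER'S PROVISION USER>
```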

Next, observe the staging.yml file ...

Add SSH public keys for super admins and amend the hostname and IP address as above.

A staging server is used as a pre-production environment that mirrors production. Production backs up every night to the backup server, and the staging server restores the previous day's backup onto itself.

In this way we know that:

  1. Production data is safely backed up and restorable

  2. We have an environment we can deploy to for final Quality Assurance, ensuring that no upgrade negatively affects production data. Indeed, we can use staging to test that database migrations run successfully on real data before deploying to production.

Note these variables:

"enable_backups" is set to false on staging. Staging will not backup data.

"periodic_restore_from_backup" is set to true on staging. Staging will only restore backup data from production.

only_allow_access_from_addresses:
    - <REPLACE WITH WHITELIST OF IPS / JUMP SERVER IP etc>
enable_backups: false
periodic_restore_from_backup: true

Note the following variable:

It must be manually set here again because, unfortunately, Github doesn't allow us to configure Ansible core to pass variables into inventory files. You can remove these args if you are not using a "jump" server.

ansible_ssh_common_args: '-J jump@<REPLACE WITH YOUR JUMP SERVER IP> -o StrictHostKeyChecking=no'

Note the backups block, which is set on the staging server so that it has access to the backup environment to retrieve daily backups.

backups:
  hosts:
    <REPLACE WITH THE BACKUP SERVER HOSTNAME>:
      ansible_host: '<IP ADDRESS FOR THE BACKUP SERVER>'

Next, observe the production.yml file ...

The variables are similar to the staging environment. Notice that the "enable_backups" variable is set to true, as this environment backs up every day.

enable_backups: true

Production servers can be deployed across a cluster of 2, 3 or 5 servers. If you are deploying to a cluster, you must add a docker worker for each additional server in your cluster:

docker-workers:
  hosts:
    <REPLACE WITH THE WORKER SERVER HOSTNAME>:
      ansible_host: '<REPLACE WITH THE WORKER SERVER IP>'
      data_label: data2 # Note: this must be unique for every worker
      ansible_ssh_common_args: ''

Commit the inventories and run the provision action for each environment

Amend the inventory files as appropriate and commit the files to Git.
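Committing the amended inventories might look like the following (the branch name is a placeholder for your own):

```shell
git add infrastructure/server-setup/
git commit -m "Configure Ansible inventories for our environments"
git push origin <your-branch>
```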

If you are going to use the QA server as a jump server, then you should provision the QA server first.

Set-up 2FA SSH access for all your super admins

Now that your servers are provisioned and A records exist for them, you can SSH in using either the IP address or the domain, plus your super admin username.

SSH'ing into a production or staging server that has been configured with a "jump" server will require you to pass the SSH arguments appropriate to your setup.
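For example, assuming your QA server is the jump host (the IP addresses and username below are placeholders for your own), the connection might look like:

```shell
# Proxy through the jump host with -J before opening the final SSH session
ssh -J jump@<YOUR_JUMP_SERVER_IP> <your-super-admin-username>@<PRODUCTION_SERVER_IP>
```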

You will be asked to set up 2FA with Google Authenticator. You must have the Google Authenticator app on your mobile phone.

Scan the QR code then enter the 6-digit 2FA code to access the server.

For all the questions that are asked, accept the defaults by typing "y".

You will also notice that root SSH access is now disabled as a security measure.

Now that all your servers are provisioned, you are ready to deploy OpenCRVS, provided your country configuration Docker image has been pushed successfully.

"only_allow_access_from_addresses" secures your server from SSH access only from a whitelist of servers such as Github's servers (IPs used by Github Actions), a static IP address for your super server admins, or a "jump" server IP.

"ansible_ssh_common_args" contains the same command as SSH_ARGS used in .
