4.3.3 Configure inventory files


An Ansible playbook is run by the "Provision environment" Github Action to install all the required software dependencies on your server, configure scheduled tasks such as automated backups, and install and configure a firewall.

The pipeline also secures your server by allowing SSH access only to your defined system administrator users, authenticated with two-factor authentication.

An automated script will create a Github environment to safely store application secrets and supply them to the provision action. Before running the script, you need to edit the Ansible inventory files and commit them to your repository.

Configure the Ansible inventory files for your environments

Look in the infrastructure/server-setup directory and you will see the following files:

development.yml

qa.yml

staging.yml

production.yml

backup.yml

Starting with the development.yml file ...

If you only want to set up a single server for training purposes, the file you need to edit is development.yml.

In the users block you can add SSH public keys for server super admins. These are the individuals in your organisation who will be able to SSH into the environment for debugging purposes. Ansible will create these users and set up two-factor authentication (2FA) for them when they connect.

You must also replace the server hostname and the IP address for the development server.

all:
  vars:
    users:
      # @todo this is where you define which development team members have access to the server.
      # If you need to remove access from someone, do not remove them from this list, but instead set their state: absent
      - name: <REPLACE WITH A USERNAME FOR YOUR SSH USER>
        ssh_keys:
          - <REPLACE WITH THE USER'S PUBLIC SSH KEY>
        state: present
        sudoer: true
    enable_backups: false
docker-manager-first:
  hosts:
    <REPLACE WITH THE SERVER HOSTNAME>: # @todo set this to be the hostname of your target server
      ansible_host: '<REPLACE WITH THE IP ADDRESS>' # @todo set this to be the IP address of your server
      data_label: data1 # for manager machines, this should always be "data1"

# Development servers are not configured to use workers.
docker-workers: {}

Next, observe the qa.yml file ...

The "qa" environment is a server used for quality assurance purposes.

Once again, add SSH public keys for super admins and amend the hostname and IP address as above.
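
As a rough sketch, a minimal qa.yml mirrors the structure of development.yml; the placeholders below are illustrative, so check the file shipped in your repository for the exact variables it expects:

all:
  vars:
    users:
      - name: <REPLACE WITH A USERNAME FOR YOUR SSH USER>
        ssh_keys:
          - <REPLACE WITH THE USER'S PUBLIC SSH KEY>
        state: present
        sudoer: true

docker-manager-first:
  hosts:
    <REPLACE WITH THE QA SERVER HOSTNAME>:
      ansible_host: '<REPLACE WITH THE QA SERVER IP ADDRESS>'
      data_label: data1 # for manager machines, this should always be "data1"

# QA servers, like development servers, are not configured to use workers.
docker-workers: {}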

Next, observe the backup.yml file ...

OpenCRVS is configured to save a backup of citizen data from production onto a backup server every night, and to restore that backup onto staging. This ensures that your citizen data is securely backed up in an encrypted file and is restorable to an environment (staging) that you can use for pre-production testing with real citizen data.

Once again, add SSH public keys for super admins and amend the hostname and IP address for the backup server as above.

The following variable defines how many days of backups will be retained on the backup server. By default we set this to 7 days to optimise disk space on the server.

amount_of_backups_to_keep

The following variable allows you to customise the directory where backups will be stored.

backup_server_remote_target_directory

all:
  vars:
    # @todo how many days to store backups for?
    amount_of_backups_to_keep: 7
    backup_server_remote_target_directory: /home/backup/backups
    users:
      # @todo this is where you define which development team members have access to the server.
      # If you need to remove access from someone, do not remove them from this list, but instead set their state: absent
      - name: <REPLACE WITH A USERNAME FOR YOUR SSH USER>
        ssh_keys:
          - <REPLACE WITH THE USER'S PUBLIC SSH KEY>
        state: present
        sudoer: true

backups-host:
  hosts:
    <REPLACE WITH THE SERVER HOSTNAME>: # @todo set this to be the hostname of your target server
      ansible_host: '<REPLACE WITH THE IP ADDRESS>'

Next, observe the staging.yml file ...

Add SSH public keys for super admins and amend the hostname and IP address for the staging server as above.

The following block should be edited with your backup server details so that the staging server can access the backup server programmatically.

...
backups:
  hosts:
    <REPLACE WITH THE BACKUP SERVER HOSTNAME>: # @todo set this to be the hostname of your backup server
      ansible_host: '<REPLACE WITH THE BACKUP SERVER IP ADDRESS>' # set this to be the IP address of your backup server
      # Written by provision pipeline. Assumes "backup" environment
      # exists in Github environments
      ansible_ssh_private_key_file: /tmp/backup_ssh_private_key

A staging server is used as a pre-production environment that mirrors production. Production backs up every night to the backup server, and the staging server restores the previous day's backup onto itself.

In this way we know that:

  1. Production data is safely backed up and restorable

  2. We have an environment we can deploy to for final Quality Assurance to ensure that no upgrade negatively affects production data. Indeed we can use staging to test that database migrations run successfully on real data before deploying to production.

Note these variables:

"enable_backups" is set to false on staging. Staging will not backup data.

"periodic_restore_from_backup" is set to true on staging. Staging will restore backed up data from production.

Finally, observe the production.yml file ...

The variables are similar to the staging environment. Notice that the "enable_backups" variable is set to true, as this environment will back up every day.

enable_backups: true

Production servers can be deployed across a cluster of 2, 3 or 5 servers. If you are deploying to a cluster then you must add a docker worker for each additional server in your cluster:

docker-workers:
  hosts:
    <REPLACE WITH THE WORKER SERVER HOSTNAME>:
      ansible_host: 'REPLACE WITH THE WORKER SERVER IP'
      data_label: data2 # Note: this must be unique for every worker
      ansible_ssh_common_args: ''
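
For example, a hypothetical three-server cluster (one manager plus two workers) could declare its workers as follows; the hostnames and IP addresses here are illustrative placeholders, and each worker carries a unique data label:

docker-workers:
  hosts:
    prod-worker-1: # hypothetical hostname
      ansible_host: '203.0.113.11' # illustrative IP address
      data_label: data2
      ansible_ssh_common_args: ''
    prod-worker-2: # hypothetical hostname
      ansible_host: '203.0.113.12' # illustrative IP address
      data_label: data3 # unique label for the second worker
      ansible_ssh_common_args: ''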

If your production cluster contains only one server, you can replace the docker-workers block like this:

# This production cluster is configured to only use one server
docker-workers: {}

Commit the inventory file changes to your Github repository before proceeding.

"only_allow_access_from_addresses" secures your server from SSH access only from a whitelist of servers such as Github's servers (IPs used by Github Actions), a static IP address for your super server admins, or a "jump" server IP.
