These are the steps you need to perform after receiving a server IP address and an SSH user, before you can run the provisioning scripts for any given environment, e.g. qa, backup, staging, or production (a 1, 2, 3 or 5 server cluster).
Verify the Ubuntu version is 24.04
First, log in as root, or if you only have sudoer access, run sudo su root.
riku@farajaland-prod:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04 LTS
Release: 24.04
...
If not, either recreate the server or upgrade Ubuntu.
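If you choose to upgrade in place, Ubuntu's release upgrader is one way to do it (take a backup or snapshot first):

sudo apt update && sudo apt upgrade
sudo do-release-upgrade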
Verify the disk has been partitioned correctly and that you have enough space for your chosen environment
We want to ensure the partition mounted to / has enough disk space. In the example output below, 105GB is available out of a total of 311GB.
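You can check this with df -h. The output below is illustrative only (it mirrors the sizes mentioned above; your device names and sizes will differ):

riku@farajaland-prod:~$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       311G  191G  105G  65% /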
Check the internet connection from the servers.
Check that the servers have internet connectivity. The servers must be able to access Dockerhub, Sentry and other internet services such as Ubuntu update repositories and Email & SMS APIs. Therefore, check that you can ping google.com from inside the servers.
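For example, run the following from inside a server (the curl probe is just one way to confirm HTTPS egress to Dockerhub):

ping -c 4 google.com
curl -sSI https://hub.docker.com | head -n 1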
If your VPN requires a whitelist of allowed domains, the following are the known domains which the servers require access to:
archive.ubuntu.com
changelogs.ubuntu.com
hub.docker.com
auth.docker.io
registry-1.docker.io
download.docker.com
sentry.io
fonts.gstatic.com
storage.googleapis.com
fonts.googleapis.com
github.com
acme-v02.api.letsencrypt.org (if using LetsEncrypt TLS certs)
registry.npmjs.org
registry.yarnpkg.com
eu.ui-avatars.com
... Other domains may be required depending on your configuration
Create a user named provision
The next commands will create a user named provision and make it a sudoer (needed for provisioning). In the following section you will generate an SSH key for logging in as this user; the SSH private key will not persist on the server, as it should only be stored in Github Secrets.
# Create the provision user with no password and grant it passwordless sudo
adduser --gecos "OpenCRVS Provisioning user" --disabled-password provision
usermod -aG sudo provision
echo 'provision ALL=(ALL) NOPASSWD:ALL' | sudo tee -a /etc/sudoers
Create SSH keys for each environment for provision
The deploy Github Action uses this library to SSH into your environments. The library requires an SSH key in PEM (RSA), PKCS8 or RFC4716 (OpenSSH) format.
For production servers, SSH keys should only be created for the manager node.
# Switch to the provision user and prepare its .ssh directory
su provision
cd ~
mkdir -p /home/provision/.ssh
# Generate a temporary RSA keypair (no passphrase) outside the home directory
ssh-keygen -t rsa -f /tmp/ssh-key -N ""
# Authorise the new public key for the provision user
cat /tmp/ssh-key.pub >> /home/provision/.ssh/authorized_keys
chmod 600 /home/provision/.ssh/authorized_keys
# Print the private key so it can be copied into Github Secrets, then delete it
echo -e "\n\nThis is the SSH_KEY you add to Github Environments:\n\n"
cat /tmp/ssh-key
rm /tmp/ssh-key*
After running the commands, you see the SSH private key in the terminal window. It will look like this:
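-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAA...
...
-----END OPENSSH PRIVATE KEY-----

(Truncated and illustrative. Depending on your ssh-keygen version, the header may read BEGIN RSA PRIVATE KEY instead.)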
Copy this key and save it in a secure password manager. It is the private key used by the provision user to SSH into the servers automatically from Github environments. We will use it when setting up our Github environments.
Note: You will need password manager software such as Bitwarden or 1Password to safely store OpenCRVS secrets and manage them in line with your internal data security policies.
For additional replica (worker) servers in a production cluster
SSH into the production manager node and copy the public key for the provision user.
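Because the generated private key was deleted from the server, read the public key from the provision user's authorized_keys file on the manager, for example:

cat /home/provision/.ssh/authorized_keys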
SSH into the worker node and create the provision user and an SSH key just as you did previously. After that, open up the provision user's authorized_keys file.
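For example, with nano (any editor will do):

nano /home/provision/.ssh/authorized_keys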
Paste in the public key for the manager node's provision user. The end result should look similar to this:
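ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB... provision@worker
ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB... provision@manager

(Both keys are truncated and illustrative: the first line is the worker's own key, the second is the key pasted from the manager node.)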
Exit & save.
Next, you need to consider how your servers are networked and how you plan to generate TLS certificates. A lot depends on your VPN approach.