6. Quality assurance testing
How do you ensure that your OpenCRVS configuration is fully tested and ready for live use?
OpenCRVS has been thoroughly tested to ensure that all functionality is reliable and that it works on a variety of devices and levels of connectivity. From a performance perspective, the minimum server configuration has been successfully stress-tested with loads of up to 200 birth declarations a minute, with multiple concurrent users experiencing response times consistently under 5 seconds. Cyber-security penetration tests have also been carried out on the application, with no significant vulnerabilities found.
Although the OpenCRVS Core Product has been rigorously tested, your configuration and any customisations you have made are unique to your implementation, so it is essential that you conduct your own lab testing before releasing the application to users for field testing and wider rollout.
Testing types
The following is a set of tests that you should consider running:
Product Tests
Product tests are systematic procedures conducted to evaluate the functionality, usability, and reliability of the deployed software. These tests aim to ensure that the product meets the specified requirements and functions as per the configuration design.
Steps:
Identify test cases based on requirements and user stories.
Execute test cases to validate functionality, including inputs, outputs, and system behavior.
Document and report any defects found during testing.
Repeat testing iteratively as new features are added or changes are made.
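For example, step 2 can often be automated with a browser-testing tool. Below is a minimal sketch using Playwright; the URL, test user, and field labels are hypothetical placeholders that you would replace with values from your own configuration.

```typescript
// A minimal product-test sketch using @playwright/test.
// The URL, credentials, and labels below are hypothetical placeholders.
import { test, expect } from '@playwright/test'

test('a user can log in and reach their workqueue', async ({ page }) => {
  await page.goto('https://login.your-opencrvs-domain.example') // assumed environment URL

  // Always use a dedicated test account, never real user credentials
  await page.getByLabel('Username').fill('qa.test.user')
  await page.getByLabel('Password').fill(process.env.TEST_USER_PASSWORD ?? '')
  await page.getByRole('button', { name: 'Login' }).click()

  // Validate the expected output: the user lands on their workqueue
  await expect(page.getByText('Ready for review')).toBeVisible()
})
```

Any failures found this way feed directly into step 3's defect reports (see "Raising OpenCRVS defects" below).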
User Acceptance Testing (UAT)
UAT is the final phase of testing where end-users validate the software to ensure it meets their requirements and expectations before deployment. It focuses on confirming that the system behaves as intended in real-world scenarios.
Steps:
Define acceptance criteria based on user requirements.
Invite stakeholders or representative users to perform testing.
Execute test cases covering typical user workflows and scenarios.
Gather feedback and address any issues or discrepancies identified.
Obtain formal approval from stakeholders to proceed with deployment.
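Where it helps traceability, acceptance criteria can also be captured as tagged, repeatable checks so that the evidence behind stakeholder sign-off is preserved. A hypothetical sketch, again using Playwright:

```typescript
// Hypothetical UAT scenario encoded as a tagged test.
// Run only the UAT suite with: npx playwright test --grep @uat
import { test, expect } from '@playwright/test'

test('registrar can register a complete birth declaration @uat', async ({ page }) => {
  // Acceptance criterion agreed with stakeholders: a declaration with all
  // mandatory fields can be registered end-to-end.
  await page.goto('https://register.your-opencrvs-domain.example') // assumed URL
  // ...walk through the agreed real-world workflow here...
  await expect(page.getByText('Registered')).toBeVisible()
})
```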
Regression Tests
Regression tests verify that recent changes to the software have not adversely affected existing functionality. These tests help maintain product stability over time by ensuring that new features or bug fixes do not introduce unintended side effects.
Steps:
Develop a comprehensive suite of regression test cases covering critical functionalities.
Execute regression tests after each software update or change.
Automate repetitive regression tests to streamline the testing process.
Investigate and address any failures, either due to regression issues or changes in requirements.
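For step 3, a self-contained unit-test sketch using Jest is shown below. The date-formatting helper is a hypothetical stand-in for any utility your configuration customises; pinning its output guards against unintended changes.

```typescript
// Regression-test sketch using Jest. `formatRegistrationDate` is a
// hypothetical helper standing in for any customised utility.
import { format } from 'date-fns' // assumed dependency

function formatRegistrationDate(date: Date): string {
  return format(date, 'dd MMMM yyyy')
}

describe('regression: certificate date formatting', () => {
  it('keeps the agreed dd MMMM yyyy format', () => {
    // The pinned expectation fails loudly if a change alters the format
    expect(formatRegistrationDate(new Date(2024, 1, 1))).toBe('01 February 2024')
  })
})
```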
Smoke Tests
Smoke tests are preliminary tests performed to verify basic functionality and stability of the software build. They aim to quickly identify major issues that could prevent further testing or deployment.
Steps:
Select a subset of essential test cases covering core features.
Execute smoke tests on each new build or deployment.
Verify basic functionalities such as login, navigation, and critical workflows.
If smoke tests pass, proceed with more extensive testing or with deployment; otherwise, halt further testing and address the issues found.
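A smoke test does not have to drive the UI; a quick script that probes service health endpoints after each deployment catches broken builds early. A sketch, assuming your deployment exposes health-check endpoints (the URLs are placeholders):

```typescript
// Smoke-test sketch (Node.js 18+, which provides a global fetch).
// The endpoints are hypothetical; substitute the health checks your
// deployment actually exposes.
const endpoints = [
  'https://gateway.your-opencrvs-domain.example/ping',
  'https://auth.your-opencrvs-domain.example/ping'
]

async function smokeTest(): Promise<void> {
  for (const url of endpoints) {
    const res = await fetch(url)
    if (!res.ok) {
      throw new Error(`Smoke test failed: ${url} returned ${res.status}`)
    }
    console.log(`OK: ${url}`)
  }
}

smokeTest().catch((err) => {
  console.error(err)
  process.exit(1) // a non-zero exit halts the deployment pipeline
})
```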
Performance Tests
Performance tests evaluate how the system behaves under realistic and peak load, verifying that response times, throughput, and resource usage meet your objectives.
Steps:
Define performance metrics and objectives based on user expectations.
Create test scenarios simulating realistic usage patterns and load conditions.
Use tools to simulate concurrent users, transactions, or data volume.
Measure and analyze system response times, throughput, and resource utilization.
Optimize resource usage to improve performance as needed.
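One widely used load-testing tool is Grafana k6, which runs JavaScript/TypeScript test scripts. The sketch below simulates 50 concurrent users and fails the run if the 95th-percentile response time exceeds 5 seconds; the target URL is a placeholder for an endpoint in your own deployment.

```typescript
// Load-test sketch for Grafana k6 (run with: k6 run load-test.ts).
// The target URL is a hypothetical placeholder.
import http from 'k6/http'
import { check, sleep } from 'k6'

export const options = {
  vus: 50, // simulated concurrent users
  duration: '5m',
  thresholds: {
    http_req_duration: ['p(95)<5000'] // 95% of requests must complete in under 5s
  }
}

export default function () {
  const res = http.get('https://gateway.your-opencrvs-domain.example/ping')
  check(res, { 'status is 200': (r) => r.status === 200 })
  sleep(1) // think time between requests from each virtual user
}
```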
Technical Tests
Technical tests, such as failover and backup procedures, ensure the reliability and availability of the software system in the event of hardware failures, disasters, or data loss.
Steps:
Develop failover and disaster recovery plans detailing procedures for system recovery.
Test failover mechanisms by intentionally simulating hardware failures or network disruptions.
Verify backup procedures by regularly backing up critical data and restoring from backups.
Document and update technical tests and procedures based on system changes or improvements.
Conduct periodic drills or tabletop exercises to validate the effectiveness of failover and backup procedures.
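As part of step 3, an automated check that backups are actually being produced is a useful safety net. A minimal Node.js sketch, assuming backups are written to a local directory (the path and retention window are placeholders):

```typescript
// Backup-freshness sketch (Node.js). BACKUP_DIR and MAX_AGE_HOURS are
// hypothetical; match them to your own backup schedule and location.
import { readdirSync, statSync } from 'node:fs'
import { join } from 'node:path'

const BACKUP_DIR = '/data/backups'
const MAX_AGE_HOURS = 26 // daily backups, plus a small grace period

const newest = readdirSync(BACKUP_DIR)
  .map((name) => ({ name, mtime: statSync(join(BACKUP_DIR, name)).mtimeMs }))
  .sort((a, b) => b.mtime - a.mtime)[0]

if (!newest) {
  throw new Error('No backup files found')
}

const ageHours = (Date.now() - newest.mtime) / (1000 * 60 * 60)
if (ageHours > MAX_AGE_HOURS) {
  throw new Error(`Latest backup ${newest.name} is ${ageHours.toFixed(1)} hours old`)
}
console.log(`Latest backup ${newest.name} is ${ageHours.toFixed(1)} hours old: OK`)
```

Note that a freshness check only proves a backup was written; periodically restoring one into a staging environment is the only way to prove it can actually be recovered.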
Penetration Testing
Penetration tests simulate real-world attacks on the system to identify security vulnerabilities so that they can be addressed before they are exploited.
Steps:
Define objectives, scope, and rules of engagement for the test.
Gather information about target systems and potential vulnerabilities.
Assess identified weaknesses for potential exploitation.
Attempt to exploit vulnerabilities to gain unauthorized access.
Document findings and provide recommendations for strengthening security.
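Penetration testing itself should be carried out by qualified specialists, but lightweight automated checks can catch obvious regressions between engagements. A sketch that verifies hardening headers on a response (the target URL and the header list are assumptions to adapt to your own security policy):

```typescript
// Security-header check sketch (Node.js 18+). The target URL and the list of
// required headers are assumptions; align them with your own security policy.
const TARGET = 'https://login.your-opencrvs-domain.example'

const requiredHeaders = [
  'strict-transport-security',
  'x-content-type-options',
  'content-security-policy'
]

async function checkSecurityHeaders(): Promise<void> {
  const res = await fetch(TARGET)
  const missing = requiredHeaders.filter((header) => !res.headers.has(header))
  if (missing.length > 0) {
    throw new Error(`Missing security headers: ${missing.join(', ')}`)
  }
  console.log('All expected security headers are present')
}

checkSecurityHeaders().catch((err) => {
  console.error(err)
  process.exit(1)
})
```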
Test cases
Raising OpenCRVS defects
Your issue will be fixed much faster if you spend about half an hour preparing it, including the exact reproduction steps and a demo.
Steps to complete a detailed defect report are as follows:
Describe the bug (a clear and concise description of what the bug is)
Which feature of OpenCRVS does your bug concern?
To reproduce (steps required to reproduce the behaviour), e.g.:
Login as a Registrar
Go to '...'
Click on '....'
Scroll down to '....'
See error
Expected behaviour (a clear and concise description of what you expected to happen)
Actual behaviour (describe what happened, including screenshots and video)
OpenCRVS Core Version e.g. v1.4.0 (Git branch: master / release-v1.4.0)
Country Configuration Version e.g. v1.4.0 (Git branch: master / release-v1.4.0)
Device
OS: [e.g. iOS]
Browser: [e.g. Chrome]
Version: [e.g. 22]
Possible fixes (if you can, link to the line of code that might be responsible for the problem)