This project implements a Loss to Follow Up (LTFU) workflow system for CHIS based on the OpenHIE LTFU Guide.
A first version of the project can be found in the chis-interoperability repository.
The components and reference information for interoperability used in this project are:
- OpenHIE defines the architecture for an interoperability layer.
- OpenHIM is a middleware component designed to ease interoperability between systems.
- HL7 FHIR is a messaging standard that gives all systems a common format for the messages they exchange.
The structure of documents in the CHT database reflects the configuration of the system, and therefore does not map directly to a FHIR message format. To achieve interoperability we use middleware to convert the CHT data structure into a standardized form that the other systems can read. Below is the standard data workflow:
This project uses OpenHIM as the middleware component with Mediators to do the conversion. Outbound Push is configured to make a request to the middleware when relevant documents are created or modified in the CHT. A Mediator then creates a FHIR resource which is then routed to OpenHIM. OpenHIM routes the resource to any other configured systems.
Conversely, to bring data into the CHT, OpenHIM is configured to route the updated resource to the Mediator, which then calls the relevant CHT APIs to update the document in the CHT database. This will then be replicated to users’ devices as per usual.
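As a sketch of the conversion step described above: the function below maps a CHT contact document to a FHIR Patient resource. This is illustrative only — the CHT document fields shown (`_id`, `name`, `date_of_birth`) depend on your configuration, and the real mediator's mapping logic may differ.

```javascript
// Illustrative only: convert a CHT contact document (shape assumed here)
// into a FHIR Patient resource. The actual mediator mapping may differ.
function chtContactToFhirPatient(chtDoc) {
  return {
    resourceType: 'Patient',
    id: chtDoc._id,
    name: [{ text: chtDoc.name }],
    birthDate: chtDoc.date_of_birth,
  };
}

const fhirPatient = chtContactToFhirPatient({
  _id: 'abc-123',
  type: 'person',
  name: 'Jane Doe',
  date_of_birth: '1990-01-01',
});
console.log(fhirPatient.resourceType); // "Patient"
```

The mediator would hand a resource like this to OpenHIM, which routes it on to the other configured systems.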
See more information on the CHT interoperability page on the CHT documentation site.
GitHub repository for the Kubernetes configuration.
The following FHIR Resources are used to implement the flow above:
The workflow guide explains the steps for testing the LTFU workflow using the local setup and the Medic instances. For the local setup, you are expected to have already completed the setup procedures outlined in the section below.
- docker
- Postman or a similar tool for API testing. This will play the role of the `Requesting System` from the sequence diagram above.
If you get errors when running the following installation steps, please see the Troubleshooting guide.
- Run `./startup.sh init` to start up the Docker containers on the first run or after calling `./startup.sh destroy`. Use `./startup.sh up` for subsequent runs after calling `init` without calling `destroy`.
- Visit the OpenHIM Admin Console at http://localhost:9000 and log in with the following credentials: email `[email protected]` and password `interop-password`. The default User username for OpenHIM is `[email protected]` and the password is `interop-password`. The default Client username is `interop-client` and the password is `interop-password`.
- Once logged in, visit http://localhost:9000/#!/mediators and select the only mediator, with the Name 'Loss to Follow Up Mediator'.
- Select the green `+` button to the right of the default channel to add the mediator.
- You can test the mediator by running:

```
curl -X GET http://localhost:5001/mediator -H "Authorization: Basic $(echo -n interop-client:interop-password | base64)"
```

If everything is successful, you should get the following response:

```
{ "status": "success" }
```
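The same check can be made from Node instead of curl — a sketch assuming Node 18+ (which provides a global `fetch`), using the mediator endpoint and Client credentials above:

```javascript
// Same check as the curl command above, for Node 18+ (global fetch).
// Builds the HTTP Basic auth header from the default Client credentials.
const auth = 'Basic ' + Buffer.from('interop-client:interop-password').toString('base64');

fetch('http://localhost:5001/mediator', { headers: { Authorization: auth } })
  .then((res) => res.json())
  .then((body) => console.log(body)) // expect { status: 'success' }
  .catch((err) => console.error('Mediator not reachable:', err.message));
```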
The following steps apply when running CHT via the Docker setup provided in this repository:
- CHT can be accessed via http://localhost:5988, and the credentials are `admin`/`password`.
- Create a new user in the CHT instance with the username `interop-client` using these instructions. For the role you can select the `Data entry` and `Analytics` roles. Please note that you can use any username you prefer, but you would then have to update the config with the new username. You can do that by editing the `cht-config/app_settings.json` file and updating the `username` value in the `outbound` object, e.g. on this line.
- Securely save the `interop-client` user's password to the database using the instructions here. Change the values `mykey` and `my pass` to `openhim1` and your user's password respectively. An example of the curl request is below:

```
curl -X PUT -H "Content-Type: text/plain" http://admin:password@localhost:5988/api/v1/credentials/openhim1 -d 'interop-password'
```
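To see how the two values connect — the `username` in the `outbound` object and the password stored under the `openhim1` credentials key — a fragment of `cht-config/app_settings.json` might look roughly like the following. This is a hedged illustration: the push name and `destination` fields here are assumptions, and the exact shape is defined by the CHT outbound configuration documentation.

```json
{
  "outbound": {
    "interop_follow_up": {
      "destination": {
        "base_url": "http://openhim-core:5001",
        "auth": {
          "type": "basic",
          "username": "interop-client",
          "password_key": "openhim1"
        }
      }
    }
  }
}
```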
The following steps apply when running CHT locally in development mode and when making configuration changes locally:
- Set up a local CHT instance using these instructions.
- Create a new user in the CHT instance with the username `interop-client` using these instructions. For the role you can select the `Data entry` and `Analytics` roles. Please note that you can use any username you prefer, but you would then have to update the config with the new username. You can do that by editing the `cht-config/app_settings.json` file and updating the `username` value in the `outbound` object, e.g. on this line.
- Securely save the `interop-client` user's password to the database using the instructions here. Change the values `mykey` and `my pass` to `openhim1` and your user's password respectively. An example of the curl request is below:

```
curl -X PUT -H "Content-Type: text/plain" http://admin:password@localhost:5988/api/v1/credentials/openhim1 -d 'interop-password'
```
- After updating the mediator code or CHT configuration, run `./startup.sh up-dev` to upload the changes to Docker Compose.
- Go into the `cht-config` directory by running `cd cht-config`.
- Run `npm install` to install the dependencies.
- Create a file named `.env` under the `/mediator` folder, copy over the contents of `/mediator/.env.template`, and update the `CHT_USERNAME` and `CHT_PASSWORD` values with the admin credentials of your CHT instance.
- Set up a proxy to your local CHT instance using something like nginx-local-ip or ngrok, and update the `CHT_URL` value in the `.env` file with the new URL.
- Ensure you have cht-conf installed and run `cht --local` to compile and upload the app settings configuration to your local CHT instance.
- To verify that the configuration is loaded correctly, create a `Patient` and access a URL like `https://*****.my.local-ip.co/#/contacts/patientUUID/report/interop_follow_up`. This should correctly retrieve the follow-up form.
- To verify the configuration in CouchDB, access http://localhost:5984/_utils/#database/medic/settings.
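As a sketch, after copying the template and filling in the three values named above, the `.env` file might look like this. The variable names come from this README; the values shown are placeholders (the URL stands in for your nginx-local-ip or ngrok address), and any additional values from `.env.template` should be kept as-is.

```
CHT_USERNAME=admin
CHT_PASSWORD=password
CHT_URL=https://<your-proxy>.my.local-ip.co
```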
- To shut down the containers, run `./startup.sh down` to stop the instances.
- To then restart the containers, run `./startup.sh up`. You do not need to run `init` again like you did in the initial install above.
- To shut down and delete everything, run `./startup.sh destroy`. You will have to subsequently run `./startup.sh init` if you wish to start the containers again.