
OWL Service Providers

Dazhi Jiao edited this page Oct 11, 2019 · 4 revisions

In-memory OWL service provider

As a proof of concept, we first implemented an in-memory OWL service provider and tested it with the STATO ontology. The goal was to find a quick way to verify that a FHIR CodeSystem can be built from an OWL ontology, with minimal consideration for the performance of loading the ontology. This quick-and-dirty approach allowed us to implement a FHIR terminology service with minimal functionality and proved that it is feasible to create a wrapper-based FHIR terminology service using the HAPI FHIR library.
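The core of the wrapper can be sketched in plain Java. The record types below are simplified stand-ins for OWLAPI's OWLClass and HAPI FHIR's CodeSystem (the real libraries expose much richer APIs); the sketch only shows the shape of the transformation from OWL classes to CodeSystem concepts:

```java
import java.util.List;
import java.util.stream.Collectors;

// Simplified, illustrative stand-ins for OWLAPI / HAPI FHIR types.
record OwlClass(String iri, String label) {}
record Concept(String code, String display) {}
record CodeSystem(String url, List<Concept> concepts) {}

public class OwlToCodeSystemSketch {

    // Map each OWL class to a CodeSystem concept: the class IRI becomes
    // the concept code, and the rdfs:label becomes the display string.
    static CodeSystem buildCodeSystem(String ontologyIri, List<OwlClass> classes) {
        List<Concept> concepts = classes.stream()
                .map(c -> new Concept(c.iri(), c.label()))
                .collect(Collectors.toList());
        return new CodeSystem(ontologyIri, concepts);
    }

    public static void main(String[] args) {
        // Hypothetical STATO class used purely for illustration.
        List<OwlClass> classes = List.of(
                new OwlClass("http://purl.obolibrary.org/obo/STATO_0000039",
                             "statistical hypothesis test"));
        CodeSystem cs = buildCodeSystem(
                "http://purl.obolibrary.org/obo/stato.owl", classes);
        System.out.println(cs.concepts().size());
        System.out.println(cs.concepts().get(0).display());
    }
}
```

In the actual implementation the input comes from an OWLOntology loaded via the OWLAPI, and the output is HAPI FHIR's CodeSystem resource rather than a local record.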

The STATO ontology is relatively small (888 terms in version RC1.4). However, while loading the ontology from its PURL (http://purl.obolibrary.org/obo/stato.owl) takes less than 2 seconds, transforming the OWLOntology object into a CodeSystem with the OWLAPI takes more than 20 seconds (tested on a MacBook Pro with 32 GB of RAM and 16 cores).

Another limitation of the in-memory approach is that when there are multiple ontologies, each one has to be loaded separately into its own OWLOntology object. This makes it very hard to query across different ontologies or to construct a larger graph from multiple ontologies.

Ontotext GraphDB

GraphDB is a triple store with advanced features such as reasoning, full-text indexing, and visualization. Ontologies can be imported as RDF triples. GraphDB offers a free edition whose performance limitations matter mainly for high-demand systems; for testing purposes, even with a large dataset, it is a good choice.
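Once ontologies are imported, GraphDB exposes each repository as a standard SPARQL endpoint at /repositories/{repositoryId}, so a single query can span every ontology loaded into that repository. The sketch below builds such a request with the JDK's own HTTP types; the base URL (GraphDB's default http://localhost:7200) and the repository name "ontologies" are assumptions, and the request is only constructed, not sent:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class GraphDbQuerySketch {

    // SPARQL query counting the OWL classes in the repository; with several
    // ontologies loaded into one repository, the count spans all of them.
    static final String QUERY =
            "SELECT (COUNT(?cls) AS ?n) WHERE { ?cls a <http://www.w3.org/2002/07/owl#Class> }";

    // Build a GET request against a GraphDB repository's SPARQL endpoint.
    static HttpRequest buildRequest(String baseUrl, String repositoryId) {
        String url = baseUrl + "/repositories/" + repositoryId
                + "?query=" + URLEncoder.encode(QUERY, StandardCharsets.UTF_8);
        return HttpRequest.newBuilder(URI.create(url))
                .header("Accept", "application/sparql-results+json")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        // "ontologies" is a hypothetical repository id used for illustration.
        HttpRequest req = buildRequest("http://localhost:7200", "ontologies");
        System.out.println(req.uri());
        // Sending the request with java.net.http.HttpClient would return
        // SPARQL JSON results once a GraphDB instance is running there.
    }
}
```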

Neo4J
