
Adding a new Adapter

This guide explains how to add a new adapter. As an example, it describes how we added H2O as a new AutoML solution to OMA-ML.
As a first step, copy an existing adapter and rename it for your specific solution. We used AutoKeras because it does not seem to have much special behaviour.
In the following, *filename stands for YOUR_AUTOMLfilename (e.g. *Adapter.py means H2OAdapter.py).

Testing the AutoML solution

First, we created a running Docker container. H2O offers a quick way to test whether everything works: simply replace the body of main() in *Server.py with some test code. This lets you easily check that all dependencies are satisfied, that the container works, and that the AutoML solution runs at all before adjusting the controller etc.
You can of course also do this in the .venv instead. Either way, add the new dependencies to requirements.txt.
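As a rough sketch, such a test replacement could look like the following; it assumes h2o has been added to requirements.txt and only verifies that the package and its Java backend start inside the container:

```python
# Temporary smoke test replacing the body of main() in H2OServer.py.
# Not the production gRPC entry point; revert once the check passes.
import h2o

def main():
    h2o.init()  # starts (or connects to) a local H2O cluster
    # Any trivial operation confirms that the cluster answers requests.
    frame = h2o.H2OFrame({"x": [1, 2, 3], "y": [2.0, 4.0, 6.0]})
    print(frame.dim)  # prints [rows, columns]
    h2o.cluster().shutdown()

if __name__ == "__main__":
    main()
```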

Changes to the adapter folder

For the adapter, most changes have to be made in the /AutoMLs folder. *Adapter.py is responsible for training your model, *ParameterConfig maps the parameter names of the ontology to the parameters of the specific AutoML solution, and *Wrapper contains the functionality for making predictions.
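To illustrate the mapping, here is a hypothetical sketch; the ontology IRIs and the exact dictionary structure are assumptions (mirror the adapter you copied), while sort_metric and max_runtime_secs are actual H2OAutoML arguments:

```python
# Hypothetical excerpt from H2OParameterConfig.py: translating ontology
# parameter names into the arguments H2O AutoML understands.
parameters = {
    ":metric": {
        "parameter_name": "sort_metric",       # H2OAutoML argument
        "lookup_dict": {":accuracy": "AUC", ":log_loss": "logloss"},
    },
    ":max_runtime": {
        "parameter_name": "max_runtime_secs",  # H2OAutoML argument
        "default": 60,
    },
}
```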
For the wrapper, a Python script is generated that automatically runs predictions with a trained model. Adjust its template in /app-data/templates/predict.ji.
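The adapted template might look roughly like this; the {{ ... }} placeholders are assumed Jinja variables that the adapter fills in, so take the actual names from the predict.ji you copied:

```python
# Hypothetical content of app-data/templates/predict.ji for H2O. The
# {{ ... }} placeholders are rendered by the adapter before the script is
# shipped alongside the exported model.
import h2o

h2o.init()
model = h2o.load_model("{{ model_path }}")        # assumed template variable
data = h2o.import_file("{{ live_data_path }}")    # assumed template variable
predictions = model.predict(data).as_data_frame()
predictions.to_csv("{{ prediction_path }}", index=False)
```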
In /config, update the adapter-name, job-file-name and export-zip-file-name.
In /dependancy-injection/Container.py, switch the *AdapterManager that is used.
In AutoML.py, switch the *Adapter that is used. TODO: Tests
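Assuming Container.py uses the dependency-injector package like the adapter you copied, the switch is a one-line change along these lines (class and module names are hypothetical):

```python
# Hypothetical excerpt from dependancy-injection/Container.py: only the
# provider for the adapter manager changes, the rest stays as copied.
from dependency_injector import containers, providers

from AutoMLs.H2OAdapterManager import H2OAdapterManager  # assumed module path

class Container(containers.DeclarativeContainer):
    adapter_manager = providers.Factory(H2OAdapterManager)
```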

Setup Ports

In /adapters/YOUR_AUTOML/.vscode/launch.json you have to change the GRPC_SERVER_PORT. Add a new *_SERVICE_HOST and *_SERVICE_PORT to /controller/.vscode/launch.json. For the Docker environment, add the same values to the controller's environment variables in docker-compose.yml.
Add ":your_automl" (the automl tag in the ontology) mapped to [*_SERVICE_HOST, *_SERVICE_PORT] to self.__automl_addresses in AdapterRuntimeManager.py and AdapterRuntimeExplainerDashboardManager.py. Make sure the chosen port is not already in use.

Updating the Ontology

Our notes (this part of the guide still has to be extended):

- Activate the solution in AutoML_solution: set supported_by_oma_ml to true on the already existing entry (see the sketch after this list).
- Add the configuration parameters (with version and description) in ML_Ontology and create a new sheet (H2O_config) for the new TTL file.
- Check the usability of the configuration parameters.
- Set up the configuration parameters.
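For orientation only, the activated entry in the generated TTL file might look roughly like this sketch; the prefixes and property IRIs are assumptions and must be taken from the actual ML_Ontology:

```turtle
# Hypothetical Turtle sketch; align prefixes and properties with ML_Ontology.
:h2o a :AutoML_solution ;
    :supported_by_oma_ml "true"^^xsd:boolean .
```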

Updating the docker-compose.yml

As mentioned in Setup Ports, add a new service entry for the adapter in docker-compose.yml: copy the entry of an existing AutoML container and adjust it accordingly (name, dockerfile, ports, training volume).
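A sketch of such an entry, with assumed paths and an assumed free port, could look like this:

```yaml
# Hypothetical docker-compose.yml service for the new adapter; copy an
# existing adapter entry and adjust only name, build path, port and volume.
h2o:
  build:
    context: ./adapters/H2O          # assumed path
    dockerfile: Dockerfile
  environment:
    - GRPC_SERVER_PORT=50060         # assumed free port, must match launch.json
  ports:
    - "50060:50060"
  volumes:
    - ./controller/app-data/training:/app/app-data/training   # assumed path
```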