Implement minimal MEMOTE scoring #3
Comments
I started working on this based on the branch
Nice job @famosab!
The annotation becomes highly relevant as soon as one wants to fetch data from a database based on the annotation, or when integrating omics datasets. Going forward I think using the Docker image of MEMOTE makes sense.

I'm now following this suggestion to use the MEMOTE test results in JSON format, to see if that is something compatible with the current setup.
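To make that concrete, here is a minimal sketch of collecting the results as JSON via MEMOTE's Python API rather than the HTML report. The model path, output file name, and test names are placeholders, and the layout of the result dictionary (e.g. a `tests` key with per-test `metric` entries) is an assumption that may differ between MEMOTE versions:

```python
# Sketch only: run the MEMOTE suite, dump the results to JSON, and pick out
# a few tests of interest. File names and test names are illustrative.
import json

import cobra
from memote import test_model

model = cobra.io.read_sbml_model("model.xml")  # hypothetical model file

# Run the full test suite and keep the results as a Python data structure.
_return_code, result = test_model(model, results=True)

# Persist the raw results so later pipeline steps can post-process them.
with open("memote_result.json", "w") as handle:
    json.dump(result, handle, indent=2, default=str)

# Report only the tests we actually care about (names are illustrative, and
# the "tests"/"metric" keys are assumptions about the result structure).
for name in ["test_stoichiometric_consistency", "test_metabolites_charge_presence"]:
    print(name, result.get("tests", {}).get(name, {}).get("metric"))
```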
Using the Docker image seems to be useful. As far as I know there is a possibility to integrate the build within the GitHub Action, as described here. It still seems to me that using the JSON output might help in sorting out which results we want to look at (which is nice), but MEMOTE would still be required to run all tests, especially if we use it with Docker. That means it will take some time per model, but that should be fine since we plan to run it as a cron job anyway.

Regarding the annotations: we could either check for all of them (which is quite costly) or define a subset of databases that seem to be the most up to date. For example, the BiGG database has not been updated since 2019, while the ChEBI database was updated on Feb 2 this year.
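As a rough illustration of how such a subset could be chosen, this cobrapy-based sketch (the model file name is hypothetical, and the annotation keys shown are just common MIRIAM-style examples) tallies which annotation databases are actually present in a model before deciding what to validate:

```python
# Sketch only: count how many metabolites carry an annotation for each
# database, so we can pick the databases worth checking.
from collections import Counter

import cobra

model = cobra.io.read_sbml_model("model.xml")  # hypothetical model file

db_counts = Counter()
for metabolite in model.metabolites:
    # `annotation` maps database keys (e.g. "chebi", "bigg.metabolite") to IDs.
    db_counts.update(metabolite.annotation.keys())

for database, count in db_counts.most_common():
    print(f"{database}: {count} of {len(model.metabolites)} metabolites annotated")
```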
This has been achieved through #9.
The current implementation combines the
The MEMOTE tests used in validation have changed. Their revision is tracked in a new issue, #12, so that the current issue can be completed.
The pipeline should test the model with MEMOTE.