
Record stats on fitting solution run? #163

Open · TorkelE opened this issue Feb 5, 2024 · 4 comments

@TorkelE (Contributor) commented Feb 5, 2024

E.g. DifferentialEquations.jl records various data about the simulation (like the number of function evaluations):

[screenshot of the solver's statistics printout]

While not directly needed, such data can be useful for debugging, performance analysis, and the like.
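For reference, a minimal sketch of how these statistics can be read from a DifferentialEquations.jl solution (the ODE and solver choice here are purely illustrative):

```julia
using OrdinaryDiffEq

# Toy ODE; the model and solver choice are illustrative only.
f(u, p, t) = -u
prob = ODEProblem(f, 1.0, (0.0, 1.0))
sol = solve(prob, Tsit5())

# The solution carries a statistics object; e.g. `sol.stats.nf` is the
# number of right-hand-side evaluations. On older DifferentialEquations.jl
# versions this field was named `sol.destats`.
println(sol.stats)
println("f evaluations: ", sol.stats.nf)
```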

Things which it could be useful to know (one possible container is sketched after the list):

  • Number of function evaluations (of the cost function).
  • Number of occurrences of various warnings/simulation problems (e.g. maxiters reached, instability detected, and similar).
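A hypothetical sketch of what such a statistics container could look like (the names are illustrative, not an existing PEtab.jl API):

```julia
# Hypothetical counters for one fitting run; all names are illustrative.
mutable struct FitStats
    nsimulations::Int  # total number of model simulations
    ncost::Int         # cost function evaluations
    nmaxiters::Int     # simulations where maxiters was reached
    ninstability::Int  # simulations where instability was detected
end
FitStats() = FitStats(0, 0, 0, 0)
```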

Another advantage of keeping this is that, rather than notifying the user of problems by displaying DiffEq warning messages as they occur, one could provide a summary at the end (auto-display of DiffEq warnings could still be an option). E.g. if the number of warnings of a certain type exceeds some threshold (e.g. 0), a summary is printed at the end:

Warning: A large number of simulations encountered problems
Number of model simulations: 1000
Number of maxiters reached: 96
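A hedged sketch of that end-of-run summary, reusing the hypothetical `FitStats` container from above (the threshold and message wording are assumptions):

```julia
# Print a summary if the number of problematic simulations exceeds a
# threshold (0 by default, i.e. any problem triggers the summary).
function print_warning_summary(stats::FitStats; threshold::Int = 0)
    nproblems = stats.nmaxiters + stats.ninstability
    nproblems > threshold || return nothing
    @warn "A large number of simulations encountered problems"
    println("Number of model simulations: ", stats.nsimulations)
    println("Number of maxiters reached:  ", stats.nmaxiters)
    println("Number of instabilities:     ", stats.ninstability)
    return nothing
end
```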
@sebapersson (Owner) commented
I think this is a good idea.

Things like the number of function evaluations and gradient evaluations should typically be in the domain of the optimization algorithm; however, not every algorithm records this, and more detailed information, like how the ODE solver failed, can be really useful for judging whether or not it is worthwhile to switch solvers.

How about recording the number of cost function, gradient, and Hessian evaluations, along with the number of warnings for each? (For really fun models the cost might evaluate fine while, for example, the gradient fails.)

We would probably also need a state-reset function to be called every time a new optimization problem is built.
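A hypothetical sketch of such per-evaluation-type counters together with a reset function (again, illustrative names rather than PEtab.jl's actual API):

```julia
# Hypothetical counters split by evaluation type, each paired with a
# warning count (a cost evaluation can succeed while the gradient fails).
mutable struct EvalStats
    ncost::Int        # cost function evaluations
    ncost_warns::Int  # warnings raised during cost evaluations
    ngrad::Int        # gradient evaluations
    ngrad_warns::Int  # warnings raised during gradient evaluations
    nhess::Int        # Hessian evaluations
    nhess_warns::Int  # warnings raised during Hessian evaluations
end
EvalStats() = EvalStats(0, 0, 0, 0, 0, 0)

# Zero every counter, e.g. each time a new optimization problem is built.
function reset!(stats::EvalStats)
    for field in fieldnames(EvalStats)
        setfield!(stats, field, 0)
    end
    return stats
end
```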

@TorkelE (Contributor, Author) commented Feb 6, 2024

Yes, something like that sounds good. I am a little unsure about the state-reset function, though: do you mean resetting these statistics, or something else? Wouldn't these statistics be stored in the optimisation output structure (and in whatever keeps track of the optimisation during the optimisation process)?

@sebapersson (Owner) commented
You are correct (somehow I imagined they should be in the PEtabODEProblem), but the optimisation output structure makes far more sense!

sebapersson added this to the 3.0 milestone on Apr 26, 2024
@sebapersson (Owner) commented
I have changed the milestone to v4, as this would be great to have, but I do not want to hold back PEtab v3.

sebapersson modified the milestones: 3.0, 4.0 on Sep 30, 2024