In many (if not most) circumstances, the evaluation result for a single task, algorithm, and case consists of multiple metrics, such as Dice Similarity Coefficient, Hausdorff Distance, Average Distance, Precision, Recall, etc.
Do you recommend generating a report for each individual metric? It seems that I'm missing something, because with this approach the visualisation does not represent the overall result taking all metrics into consideration.
Best regards,
Roman Niklaus
A current workaround: if all your metrics share the same direction (i.e., in all of them smaller is better, or in all of them smaller is worse), you can treat each metric as a task; otherwise, first reverse those metrics that have a different direction (see the sketch below).
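For concreteness, a minimal sketch of that workaround. The column names and values are made up, and the `as.challenge()`/`aggregateThenRank()` calls follow the pattern in the challengeR README:

```r
library(dplyr)
library(tidyr)
library(challengeR)

# Made-up wide-format results: one row per algorithm/case, one column per metric.
results <- data.frame(
  alg_name = rep(c("A1", "A2"), each = 3),
  case     = rep(c("c1", "c2", "c3"), times = 2),
  DSC      = c(0.91, 0.88, 0.95, 0.85, 0.90, 0.87),  # larger is better
  HD       = c(12.1, 15.3, 9.8, 18.2, 11.0, 14.5)    # smaller is better
)

# Reshape to long format and harmonize metric directions:
# negate HD so that "larger is better" holds for every metric.
long <- results %>%
  pivot_longer(cols = c(DSC, HD), names_to = "metric", values_to = "value") %>%
  mutate(value = ifelse(metric == "HD", -value, value))

# The workaround: let the metric column play the role of the task.
challenge <- as.challenge(long,
                          by          = "metric",
                          algorithm   = "alg_name",
                          case        = "case",
                          value       = "value",
                          smallBetter = FALSE)
ranking <- challenge %>% aggregateThenRank(FUN = mean, na.treat = 0, ties.method = "min")
```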
However, I would be careful with this: you may not want to give all metrics the same weight, i.e. you may consider some metrics more important than others. Further, metrics aim at describing specific properties (take precision and recall), which you may want to be able to communicate separately. Finally, consensus rankings (rank aggregations) impose additional assumptions, and different methods to obtain them may lead to different rankings: the ranking method used within each metric matters, as does the method used to combine the resulting rankings. For example, taking the average rank across metrics versus the median rank across metrics may lead to different results (see the toy example below). Just something to keep in mind...
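To illustrate that last point with made-up numbers, here are per-metric ranks for three algorithms across three metrics, aggregated by mean versus median:

```r
# Made-up per-metric ranks: rows = algorithms, columns = metrics.
ranks <- rbind(
  A1 = c(1, 1, 3),
  A2 = c(2, 2, 1),
  A3 = c(3, 3, 2)
)
rowMeans(ranks)          # A1 = 1.67, A2 = 1.67: tied for first place
apply(ranks, 1, median)  # A1 = 1, A2 = 2: A1 now strictly first
```

Mean aggregation ties A1 and A2, while median aggregation ranks A1 strictly first, so the choice of consensus method alone can change the final ranking.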