According to the definition of average precision on Wikipedia and in sklearn, AP computes the average value of p(r) over the interval from r = 0 to r = 1, i.e., the area under the precision-recall curve.
However, the following code calculates the mean of the precision values directly, i.e., the area under the precision@k curve.
starmie/checkPrecisionRecall.py
Lines 90 to 94 in 00cae40

I believe this may not affect the overall trend across the different methods or the contribution of this paper; just a friendly reminder :)

Thank you so much for pointing this out. We directly reused the evaluation metrics from the code of another paper, which is a baseline method we compare against. We will add clarification about this and allow users to customize the evaluation metrics when we officially release this repo.
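To make the distinction concrete, here is a small sketch (not the repo's code; the toy relevance labels and scores are invented for illustration) contrasting sklearn's AP, which averages precision only at the ranks where recall increases, with the mean of precision@k over all cutoffs:

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Toy ranked retrieval result: 1 = relevant, 0 = not relevant,
# listed in descending order of model score.
relevance = np.array([1, 0, 1, 1, 0])
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5])  # strictly decreasing, so ranking matches the list

# AP per the Wikipedia/sklearn definition: sum of P(k) weighted by the
# increase in recall at rank k, i.e. precision is averaged only at ranks
# where a relevant item appears (area under the precision-recall curve).
ap = average_precision_score(relevance, scores)

# Mean of precision@k over every cutoff k (area under the precision@k curve):
# precision is averaged at all ranks, relevant or not.
prec_at_k = np.cumsum(relevance) / np.arange(1, len(relevance) + 1)
mean_prec = prec_at_k.mean()

print(ap)         # 0.8055...  (1 + 2/3 + 3/4) / 3
print(mean_prec)  # 0.7033...  (1 + 1/2 + 2/3 + 3/4 + 3/5) / 5
```

The two quantities generally differ: mean precision@k also counts the ranks holding irrelevant items, which pulls the value down (or up) relative to AP depending on where the relevant items sit in the ranking.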