Precision tells us the fraction of positive predictions that are correct. It takes into account only the predicted positives (TP and FP, the second column of the confusion matrix), as stated in the following formula:
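$$\text{Precision} = \frac{TP}{TP + FP}$$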
Recall measures the fraction of correctly identified positive instances. It considers only the actual positives (TP and FN, the second row of the confusion table). The formula for this metric is presented below:
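$$\text{Recall} = \frac{TP}{TP + FN}$$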
In this problem, the precision and recall values were 67% and 54%, respectively. These measures reveal errors in our model that accuracy missed because of the class imbalance.
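As a quick sanity check, here is a minimal Python sketch that computes both metrics directly from confusion-matrix counts. The counts below are hypothetical, chosen only so that the results match the 67% and 54% quoted above:

```python
# Hypothetical confusion-matrix counts (not the actual counts from the lesson),
# picked so the resulting metrics match the 67% / 54% quoted above.
tp, fp, fn = 30, 15, 26

precision = tp / (tp + fp)  # correct positives among predicted positives
recall = tp / (tp + fn)     # correct positives among actual positives

print(f"precision = {precision:.2f}")  # 0.67
print(f"recall    = {recall:.2f}")     # 0.54
```

With scikit-learn, the same values can be computed from the raw labels using `precision_score(y_true, y_pred)` and `recall_score(y_true, y_pred)` from `sklearn.metrics`.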
MNEMONICS:

- **Precision**: From the **pre**dicted positives, how many we predicted right. See how the word **pre**cision is similar to the word **pre**diction?
- **Recall**: From the **real** positives, how many we predicted right. See how the word **recall** is similar to the word **real**?