Calibration part in exemplar-svm #57
There are two different types of calibration which can be used:
Thanks a lot.

Madhulika
I hope I'm not too late to ask. @quantombone, you state that to calibrate an SVM, you have to fit a sigmoid function to the responses of that specific SVM on the negatives and positives. Did I understand correctly?

Let S_pos be the set of responses of the positive instances, and S_neg the set of responses of the negative instances. Fit a sigmoid g(s) = 1 / (1 + exp(-(alpha*s + beta))) so that responses in S_pos map near 1 and responses in S_neg map near 0. Solve this over-determined system and get alpha and beta. Is this correct so far?

Once I have alpha and beta, how do I change w to shift the decision boundary?
@asundrajas, you have the correct intuition for obtaining alpha and beta. To shift the decision boundary, just create a new classifier with weights w' and offset b', such that applying the classifier to x is done as follows:

f(x) = w'*x + b'

Since the calibrated score is alpha*(w*x + b) + beta, simply put, w' = alpha*w and b' = alpha*b + beta.
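To make the two steps above concrete, here is a minimal sketch in Python/NumPy (not the repo's MATLAB code). It assumes the simplest affine least-squares version of the fit; all variable names and the synthetic data are illustrative, and Platt's method proper fits a logistic likelihood rather than solving a linear system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a trained exemplar SVM (w, b) and held-out features.
w, b = rng.normal(size=5), -1.0
x_pos = rng.normal(loc=0.5, size=(20, 5))    # validation positives
x_neg = rng.normal(loc=-0.5, size=(200, 5))  # validation negatives

# Raw responses s = w*x + b on the validation set.
s_pos = x_pos @ w + b
s_neg = x_neg @ w + b

# Fit alpha, beta so that alpha*s + beta is ~ +1 on positives and
# ~ -1 on negatives: an over-determined linear system, solved here
# in least squares.
s = np.concatenate([s_pos, s_neg])
t = np.concatenate([np.ones_like(s_pos), -np.ones_like(s_neg)])
A = np.column_stack([s, np.ones_like(s)])
(alpha, beta), *_ = np.linalg.lstsq(A, t, rcond=None)

# Fold the calibration into the classifier: alpha*(w*x + b) + beta
# equals w_cal*x + b_cal, which scales and shifts the boundary.
w_cal, b_cal = alpha * w, alpha * b + beta
assert np.allclose(alpha * s + beta,
                   np.concatenate([x_pos, x_neg]) @ w_cal + b_cal)
```

The point of folding alpha and beta back into (w', b') is that the calibrated exemplar can be applied with the same dot-product machinery as the raw one.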
Sir, I am trying to implement the exemplar SVM and trying to modify it. Actually I'm stuck in the calibration portion.

What I have done is: for each positive instance, train an exemplar using that positive as the positive, with negatives chosen by hard negative mining from all the negative instances. I have used Cp = 0.5 and Cn = 0.01 as suggested by your paper. But when I check how well it performs on the validation set, the scores are mostly negative (even for the positive instances), due to training with so many negative examples. So after the calibration method that you've suggested, I am getting such alpha and beta that increase my exemplar score very much. As a result, on the test set it also classifies the negative ones as positive. Did I do something wrong?

For calibration, I first checked the scores of the in-class as well as out-of-class instances and learnt a sigmoid from them. I have also tried using only the in-class examples and learning a sigmoid based on the overlap score. But still, in both cases the classifier classifies negative ones as positive, i.e., it gives positive responses for negative instances too. Can you please tell me why I am not getting the desired output?

Thanks and regards,
Hi Anjan,

Before calibration you should expect the scores of the classifier to be negative, even on the positives. What you are observing matches my intuition.

When testing, you shouldn't worry about the score of a single detection in isolation. The only thing that's important is having the good detections score higher than the bad detections. In other words, the evaluation should be performed on a medium/large dataset and a precision-recall curve should be computed.

When using Platt's calibration method to adjust the ExemplarSVMs, you'll need to make sure that the raw ExemplarSVMs were trained long enough. During my PhD I observed some funky behavior when training on small datasets -- if the number of negatives is too small, then the raw ExemplarSVMs are just too weak. I won't be able to go into any more specific details. Good luck!
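For the ranking-based evaluation described above, a minimal precision-recall sketch, assuming scikit-learn is available (the labels and scores below are placeholders, not output of this repo):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

labels = np.array([1, 1, 0, 1, 0, 0, 0])  # ground truth: 1 = good detection
scores = np.array([-0.2, -0.5, -1.3, -0.9, -1.1, -0.4, -1.8])  # raw SVM scores

# Only the ranking matters: negative raw scores are fine as long as
# good detections score higher than bad ones.
precision, recall, thresholds = precision_recall_curve(labels, scores)
print("average precision:", average_precision_score(labels, scores))
```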
Thanks a lot, Sir. Anjan Banerjee |
I am unable to understand how the boundary of a single exemplar classifier is shifted and scaled in calibration. Could you please elaborate?
Thanks