
about the bnPU loss #3

Open
YangXuefeng opened this issue Sep 3, 2019 · 1 comment

Comments

@YangXuefeng

Thanks for the paper and code!

The calculation of the risk in the bnPU setup is a little confusing.
In the paper, the non-negative estimator gives Risk = Pi * pRisk + max(0, nRisk).
However, when nRisk < self.beta, the code below sets risk = -self.gamma * nRisk.
Could you please explain why the risk is calculated this way when nRisk is smaller than a small beta? I cannot match the code with the equations in the paper.

```python
hP = result.masked_select(torch.from_numpy(postive).byte().cuda()).contiguous().view(-1, 2)
hU = result.masked_select(torch.from_numpy(unlabeled).byte().cuda()).contiguous().view(-1, 2)
if len(hP) > 0:
    pRisk = self.model.loss_func(1, hP, args.type)
else:
    pRisk = torch.FloatTensor([0]).cuda()
uRisk = self.model.loss_func(0, hU, args.type)
nRisk = uRisk - self.prior * (1 - pRisk)
risk = self.m * pRisk + nRisk

if args.type == 'bnpu':
    if nRisk < self.beta:
        risk = -self.gamma * nRisk
```
@iambabao

I think this paper may help.
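For context, the clamped branch looks like the non-negative correction from the nnPU estimator (Kiryo et al., 2017): when the estimated negative risk drops below a threshold, the mini-batch is trained on `-gamma * nRisk` instead, which amounts to gradient ascent on the negative-risk term to push it back toward non-negative values. Below is a framework-free sketch of that idea, not the repo's exact code: the sigmoid surrogate loss and the `prior`, `beta`, and `gamma` values are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def surrogate_loss(label, scores):
    """Mean per-example sigmoid loss; label 1 = positive, 0 = negative."""
    if label == 1:
        return sum(sigmoid(-s) for s in scores) / len(scores)
    return sum(sigmoid(s) for s in scores) / len(scores)

def nnpu_risk(p_scores, u_scores, prior=0.3, beta=0.0, gamma=1.0):
    """Non-negative PU risk in the style of Kiryo et al. (2017).

    prior/beta/gamma are placeholder hyperparameters, not values
    taken from this repository.
    """
    p_risk = surrogate_loss(1, p_scores)  # risk on labeled positives
    # Unbiased estimate of the negative risk from unlabeled data:
    n_risk = surrogate_loss(0, u_scores) - prior * surrogate_loss(0, p_scores)
    if n_risk >= -beta:
        # Ordinary unbiased PU estimator.
        return prior * p_risk + n_risk
    # Negative-risk estimate fell below -beta (an overfitting signal):
    # optimize -gamma * n_risk for this batch, i.e. ascend on n_risk.
    return -gamma * n_risk

# Scores far on the correct side make n_risk go negative, triggering the clamp:
print(nnpu_risk([2.0, 3.0], [-1.0, -2.0]))        # clamped branch
print(nnpu_risk([2.0, 3.0], [-1.0, -2.0], prior=0.05))  # ordinary branch
```

In full training code, the clamped branch only changes the loss used for that mini-batch's backward pass; the reported risk is usually still `prior * p_risk + max(0, n_risk)`.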
