---
title: "Test Sample Accuracy Scales with Training Sample Density in Neural Networks"
abstract: "Intuitively, one would expect the accuracy of a trained neural network’s predictions on test samples to correlate with how densely those samples are surrounded by seen training samples in representation space. We find that a bound on empirical training error smoothed across linear activation regions scales inversely with training sample density in representation space. Empirically, we verify that this bound is a strong predictor of the inaccuracy of the network’s predictions on test samples. For unseen test sets, including those with out-of-distribution samples, ranking test samples by their local region’s error bound and discarding samples with the highest bounds raises prediction accuracy by up to 20% in absolute terms for image classification datasets, on average over thresholds."
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: ji22a
month: 0
tex_title: Test Sample Accuracy Scales with Training Sample Density in Neural Networks
firstpage: 629
lastpage: 646
page: 629-646
order: 629
cycles: false
bibtex_author: Ji, Xu and Pascanu, Razvan and Hjelm, R. Devon and Lakshminarayanan, Balaji and Vedaldi, Andrea
author:
- given: Xu
  family: Ji
- given: Razvan
  family: Pascanu
- given: R. Devon
  family: Hjelm
- given: Balaji
  family: Lakshminarayanan
- given: Andrea
  family: Vedaldi
date: 2022-11-28
container-title: Proceedings of The 1st Conference on Lifelong Learning Agents
volume: '199'
genre: inproceedings
issued:
  date-parts:
  - 2022
  - 11
  - 28
---