---
title: "Continual Learning with Foundation Models: An Empirical Study of Latent Replay"
abstract: "Rapid development of large-scale pre-training has resulted in foundation models that can act as effective feature extractors on a variety of downstream tasks and domains. Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios. Our goal is twofold. First, we want to understand the compute-accuracy trade-off between CL in the raw-data space and in the latent space of pre-trained encoders. Second, we investigate how the characteristics of the encoder, the pre-training algorithm and data, as well as of the resulting latent space affect CL performance. For this, we compare the efficacy of various pre-trained models in large-scale benchmarking scenarios with a vanilla replay setting applied in the latent and in the raw-data space. Notably, this study shows how transfer, forgetting, task similarity and learning are dependent on the input data characteristics and not necessarily on the CL algorithms. First, we show that under some circumstances reasonable CL performance can readily be achieved with a non-parametric classifier at negligible compute. We then show how models pre-trained on broader data result in better performance for various replay sizes. We explain this with representational similarity and transfer properties of these representations. Finally, we show the effectiveness of self-supervised pre-training for downstream domains that are out-of-distribution as compared to the pre-training domain. We point out and validate several research directions that can further increase the efficacy of latent CL including representation ensembling. The diverse set of datasets used in this study can serve as a compute-efficient playground for further CL research. We will publish the code."
video:
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: ostapenko22a
month: 0
tex_title: "Continual Learning with Foundation Models: An Empirical Study of Latent Replay"
firstpage: 60
lastpage: 91
page: 60-91
order: 60
cycles: false
bibtex_author: Ostapenko, Oleksiy and Lesort, Timothee and Rodriguez, Pau and Arefin, Md Rifat and Douillard, Arthur and Rish, Irina and Charlin, Laurent
author:
- given: Oleksiy
  family: Ostapenko
- given: Timothee
  family: Lesort
- given: Pau
  family: Rodriguez
- given: Md Rifat
  family: Arefin
- given: Arthur
  family: Douillard
- given: Irina
  family: Rish
- given: Laurent
  family: Charlin
date: 2022-11-28
address:
container-title: Proceedings of The 1st Conference on Lifelong Learning Agents
volume: '199'
genre: inproceedings
issued:
  date-parts:
  - 2022
  - 11
  - 28
pdf:
extras:
---
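The abstract above compares vanilla replay in raw-data space with replay in the latent space of a frozen pre-trained encoder. The snippet below is a minimal illustrative sketch of that latent-replay setup, not the authors' released code: the ResNet-50 backbone, the 100-class linear head, the FIFO-style buffer, and the `task_loaders` variable are placeholder assumptions chosen only to make the idea concrete.

```python
# Minimal illustrative sketch of latent replay with a frozen pre-trained encoder.
# Assumptions (not from the paper's code): a torchvision ResNet-50 backbone,
# a 100-class linear head, a naive FIFO buffer, and `task_loaders` supplied
# by the user as one DataLoader per continual-learning task.
import random
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

encoder = resnet50(weights=ResNet50_Weights.DEFAULT)
encoder.fc = nn.Identity()      # keep only the 2048-d backbone features
encoder.eval()                  # the encoder stays frozen throughout

head = nn.Linear(2048, 100)     # small trainable classifier on latents
opt = torch.optim.SGD(head.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
buffer = []                     # replay buffer of (latent, label) pairs

def train_task(loader, buffer_size=2000):
    for x, y in loader:
        with torch.no_grad():
            z = encoder(x)              # encode once; raw images are discarded
        new_pairs = list(zip(z, y))     # candidates for the replay buffer
        if buffer:                      # mix in latents from earlier tasks
            z_old, y_old = zip(*random.sample(buffer, min(len(buffer), len(y))))
            z = torch.cat([z, torch.stack(z_old)])
            y = torch.cat([y, torch.stack(y_old)])
        loss = loss_fn(head(z), y)
        opt.zero_grad(); loss.backward(); opt.step()
        buffer.extend(new_pairs)
        del buffer[:-buffer_size]       # naive cap on buffer size

# for loader in task_loaders:   # one call per task in the CL sequence
#     train_task(loader)
```

Because only latents pass through training, each image is encoded a single time, which is the compute saving the abstract contrasts with replay in the raw-data space.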