
Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors

  • Paper
  • May 22, 2022
  • #DeepLearning #MachineLearning #ComputerScience
Authors: Ravid Shwartz-Ziv (@ziv_ravid), Andrew Gordon Wilson (@andrewgwils), Micah Goldblum (@micahgoldblum), Sanyam Kapoor (@snymkpr), Hossein Souri (@HosseinSouri8), Yann LeCun (@ylecun), Chen Zhu (@chenzhucs)
Read on arxiv.org

Deep learning is increasingly moving towards a transfer learning paradigm whereby large foundation models are fine-tuned on downstream tasks, starting from an initialization learned on the source task. But an initialization contains relatively little information about the source task. Instead, we show that we can learn highly informative posteriors from the source task, through supervised or self-supervised approaches, which then serve as the basis for priors that modify the whole loss surface on the downstream task. This simple modular approach enables significant performance gains and more data-efficient learning on a variety of downstream classification and segmentation tasks, serving as a drop-in replacement for standard pre-training strategies. These highly informative priors also can be saved for future use, similar to pre-trained weights, and stand in contrast to the zero-mean isotropic uninformative priors that are typically used in Bayesian deep learning.
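
A minimal sketch of the recipe in PyTorch (not the authors' released code; the helper names diagonal_gaussian_from_checkpoints, informative_prior_loss, and the scaling prior_weight are illustrative assumptions): fit a diagonal Gaussian to a few source-task checkpoints, e.g. SGD iterates collected SWA-style, then reuse its mean and variance as a prior that reshapes the whole downstream loss surface rather than merely initializing the weights.

```python
import torch
import torch.nn.functional as F

def diagonal_gaussian_from_checkpoints(state_dicts, min_var=1e-6):
    """Fit a per-weight Gaussian (mean and variance) to several
    source-task checkpoints; this stands in for the learned posterior.
    Needs at least two checkpoints for a nonzero variance estimate."""
    mean, var = {}, {}
    for name in state_dicts[0]:
        stacked = torch.stack([sd[name].float() for sd in state_dicts])
        mean[name] = stacked.mean(dim=0)
        var[name] = stacked.var(dim=0).clamp_min(min_var)
    return mean, var

def informative_prior_loss(model, inputs, targets, prior_mean, prior_var,
                           prior_weight=1.0):
    """Downstream loss: task NLL plus the negative log-density (up to a
    constant) of the learned Gaussian prior. The quadratic penalty is
    centered at the source posterior mean rather than at zero, replacing
    the zero-mean isotropic prior implied by plain weight decay."""
    nll = F.cross_entropy(model(inputs), targets)
    reg = sum(
        (0.5 * (param - prior_mean[name]) ** 2 / prior_var[name]).sum()
        for name, param in model.named_parameters()
    )
    return nll + prior_weight * reg
```

Under these assumptions, saving prior_mean and prior_var to disk plays the same role as saving pre-trained weights: the prior can be reloaded later and used as a drop-in replacement for a standard fine-tuning loss.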

Mentions
Tim G. J. Rudner (sigmoid.social/@timrudner) @timrudner · Sep 15, 2022 · via Twitter
This is a great paper!