Uniform priors for data-efficient transfer
Researchers including MIT Jameel Clinic principal investigator Marzyeh Ghassemi authored a paper titled 'Uniform priors for data-efficient transfer,' about deep neural networks and the scalability of machine learning models.
In the abstract, the authors write: 'Deep neural networks have shown great promise on a variety of downstream applications; but their ability to adapt and generalize to new data and tasks remains a challenge. However, the ability to perform few or zero-shot adaptation to novel tasks is important for the scalability and deployment of machine learning models. It is therefore crucial to understand what makes for good, transferable features in deep networks that best allow for such adaptation. In this paper, we shed light on this by showing that features that are most transferable have high uniformity in the embedding space and propose a uniformity regularization scheme that encourages better transfer and feature reuse. We evaluate the regularization on its ability to facilitate adaptation to unseen tasks and data, for which we conduct a thorough experimental study covering four relevant, and distinct domains: few-shot meta learning, deep metric learning, zero-shot domain adaptation, as well as out-of-distribution classification. Across all experiments, we show that uniformity regularization consistently offers benefits over baseline methods and is able to achieve state-of-the-art performance in deep metric learning and meta learning.'
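The abstract describes a regularizer that rewards uniformity of features in the embedding space. The paper's exact formulation is not given here, but a common way to score uniformity on the unit hypersphere is the log of the mean pairwise Gaussian potential; the sketch below illustrates that idea only, and the function name, temperature `t`, and NumPy formulation are assumptions, not the authors' implementation:

```python
import numpy as np

def uniformity_loss(embeddings: np.ndarray, t: float = 2.0) -> float:
    """Illustrative uniformity penalty (a hypothetical stand-in for the
    paper's regularizer): log of the mean pairwise Gaussian potential
    over L2-normalized embeddings. Lower values mean the points are
    spread more uniformly over the unit hypersphere."""
    # project each embedding onto the unit hypersphere
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # pairwise squared Euclidean distances between distinct points
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    i, j = np.triu_indices(len(x), k=1)
    return float(np.log(np.mean(np.exp(-t * sq_dists[i, j]))))

# Collapsed features (all identical) score 0; spread-out features score lower,
# so minimizing this term alongside a task loss encourages uniform embeddings.
collapsed = np.ones((4, 3))
spread = np.eye(3)
print(uniformity_loss(collapsed), uniformity_loss(spread))
```

In training, such a term would typically be added to the task loss with a small weight, so the network is nudged toward uniform features without sacrificing task accuracy.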