Prototypical Fine-tuning: Towards Robust Performance Under Varying Data Sizes


Abstract

In this paper, we move towards combining large parametric models with non-parametric prototypical networks. We propose prototypical fine-tuning, a novel prototypical framework for fine-tuning pretrained language models (LMs), which automatically learns a bias to improve predictive performance across varying data sizes, especially in low-resource settings. Our prototypical fine-tuning approach automatically adjusts the model capacity according to the complexity of the data and the model's inherent attributes. Moreover, we propose four principles for effective prototypical fine-tuning towards the global optimum. Experimental results on various datasets show that our method achieves significant performance improvements in a range of low-resource settings, as well as comparable, and often better, performance in high-resource scenarios.
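
As a rough illustration of the general idea (a sketch, not the paper's actual implementation), the snippet below shows one common way to attach a prototypical classification head to a pretrained LM encoder: each class is represented by a learnable prototype vector that is fine-tuned jointly with the encoder, and an example is classified by its distance to the prototypes in embedding space. All names here (bert-base-uncased, PrototypicalHead, num_classes) are illustrative assumptions.

```python
# Hypothetical sketch: prototypical classification head over a pretrained LM.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

class PrototypicalHead(torch.nn.Module):
    def __init__(self, encoder_name: str = "bert-base-uncased", num_classes: int = 2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One learnable prototype vector per class, trained jointly with the encoder.
        self.prototypes = torch.nn.Parameter(torch.randn(num_classes, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        # Use the [CLS] token representation as the example embedding.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        emb = out.last_hidden_state[:, 0]                    # (batch, hidden)
        # Logits are negative squared Euclidean distances to the class prototypes.
        dists = torch.cdist(emb, self.prototypes, p=2) ** 2  # (batch, num_classes)
        return -dists

# Usage example with a standard cross-entropy fine-tuning objective.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = PrototypicalHead()
batch = tokenizer(["a short example sentence"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
loss = F.cross_entropy(logits, torch.tensor([1]))
```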

Publication
Under Review


Yiqiao Jin
Research Intern at Microsoft Research Asia (MSRA)
