Prototypical Fine-tuning - Towards Robust Performance Under Varying Data Sizes

Figure: Overview of our proposed PFit model.

Preprint.

Authors

Yiqiao Jin1, Xiting Wang2, Yaru Hao2, Yizhou Sun1, Xing Xie2

1University of California, Los Angeles, 2Microsoft Research Asia

Abstract

We move towards combining large parametric models with non-parametric prototypical networks. We propose prototypical fine-tuning, a novel prototypical framework for fine-tuning pretrained language models (LMs), which automatically learns a bias to improve predictive performance across varying data sizes, especially in low-resource settings. Our prototypical fine-tuning approach can automatically adjust the model capacity according to the complexity of the data and the model's inherent attributes. Moreover, we propose four principles for effective prototypical fine-tuning towards the global optimum. Experimental results across various datasets show that our method achieves significant performance improvements under various low-resource settings, as well as comparable and usually better performance in high-resource scenarios.
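To make the general idea concrete, the sketch below (in PyTorch, using a BERT-style encoder) shows how a prototype-based head can replace the usual linear classifier when fine-tuning a pretrained LM: inputs are classified by their distance to learnable class prototypes in the encoder's embedding space. This is only an illustrative sketch under assumed design choices (a hypothetical `PrototypeHead` with a fixed number of prototypes per class and min-distance aggregation); it is not the actual PFit implementation, which additionally adjusts model capacity and follows the four principles described in the paper.

```python
# Minimal sketch: prototype-based classification head on a pretrained LM.
# Illustrates combining a parametric encoder with non-parametric prototypes;
# NOT the authors' PFit implementation (capacity adjustment and the four
# fine-tuning principles are not reproduced here).
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer


class PrototypeHead(nn.Module):
    """Scores classes by negative squared distance to learnable prototypes
    (several per class -- a hypothetical choice for this sketch)."""

    def __init__(self, hidden_dim: int, num_classes: int, protos_per_class: int = 4):
        super().__init__()
        self.num_classes = num_classes
        self.protos_per_class = protos_per_class
        # Prototypes live in the encoder's embedding space.
        self.prototypes = nn.Parameter(
            torch.randn(num_classes * protos_per_class, hidden_dim) * 0.02
        )

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, hidden_dim); distances: (batch, classes * protos)
        dists = torch.cdist(embeddings, self.prototypes) ** 2
        dists = dists.view(-1, self.num_classes, self.protos_per_class)
        # Closest prototype of each class yields that class's logit.
        return -dists.min(dim=-1).values


class PrototypicalClassifier(nn.Module):
    def __init__(self, model_name: str, num_classes: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = PrototypeHead(self.encoder.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls_embedding = out.last_hidden_state[:, 0]  # [CLS] token embedding
        return self.head(cls_embedding)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = PrototypicalClassifier("bert-base-uncased", num_classes=2)
    batch = tokenizer(["an example sentence"], return_tensors="pt")
    logits = model(batch["input_ids"], batch["attention_mask"])
    loss = F.cross_entropy(logits, torch.tensor([1]))
    loss.backward()  # prototypes and encoder are fine-tuned jointly
```

In this sketch the prototypes are trained jointly with the encoder by ordinary gradient descent; the prototype layer, rather than a linear softmax layer, supplies the classification decision.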


License

Copyright 2022-present Yiqiao Jin.

Released under the MIT license.

Yiqiao Jin
Graduate Research Assistant at Georgia Institute of Technology

My research interests include Computational Social Science, Misinformation, Graph Analysis, and Data Mining.