Reinforcement Subgraph Reasoning for Fake News Detection

Abstract

In reinforcement learning (RL), there are two major settings for interacting with the environment — online and offline. Online methods explore the environment at a significant time cost, while offline methods obtain reward signals efficiently at the expense of exploration capability. We propose semi-offline RL, a novel paradigm that smoothly transitions from the offline to the online setting, balances exploration capability against training cost, and provides a theoretical foundation for comparing different RL settings. Based on the semi-offline formulation, we present the RL setting that is optimal in terms of optimization cost, asymptotic error, and overfitting error bound. Extensive experiments show that our semi-offline approach is efficient and yields performance comparable to, and often better than, state-of-the-art methods. Our code is available on GitHub.
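
The abstract describes semi-offline RL only at a high level, so the sketch below is a hypothetical toy illustration of the core idea, not the paper's algorithm. The bandit environment, the `pull` function, and the annealed `offline_frac` mixing parameter are all assumptions introduced for illustration: each update draws either from a logged offline dataset (cheap, no exploration) or from a live environment call (costly, exploratory), with the offline fraction annealing from 1 (fully offline) toward 0 (fully online).

```python
import random

# Toy environment: a 3-armed bandit with fixed reward probabilities.
# (Hypothetical setup for illustration; not from the paper.)
REWARD_PROBS = [0.2, 0.5, 0.8]

def pull(arm: int) -> float:
    """Online interaction: sample a binary reward from the chosen arm."""
    return 1.0 if random.random() < REWARD_PROBS[arm] else 0.0

# Offline dataset: logged (action, reward) pairs from a behavior policy
# that picked arms uniformly at random.
offline_data = [(a, pull(a)) for a in random.choices(range(3), k=500)]

q = [0.0, 0.0, 0.0]   # running value estimate per arm
counts = [0, 0, 0]    # number of updates per arm

def update(arm: int, reward: float) -> None:
    """Incremental-mean update of the value estimate for one arm."""
    counts[arm] += 1
    q[arm] += (reward - q[arm]) / counts[arm]

STEPS = 1000
for t in range(STEPS):
    # offline_frac anneals from 1.0 (fully offline) to 0.0 (fully online),
    # a stand-in for a smooth offline-to-online transition.
    offline_frac = 1.0 - t / STEPS
    if random.random() < offline_frac:
        # Offline step: reuse a logged transition (no environment call).
        arm, reward = random.choice(offline_data)
    else:
        # Online step: epsilon-greedy exploration with a fresh environment call.
        if random.random() < 0.1:
            arm = random.randrange(3)
        else:
            arm = max(range(3), key=lambda a: q[a])
        reward = pull(arm)
    update(arm, reward)

print("Estimated arm values:", [round(v, 2) for v in q])
```

Under this mixing scheme, early training leans on cheap logged data while later training refines the estimates with live exploration; the annealing schedule for `offline_frac` is an arbitrary linear choice made here for simplicity.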

Publication
In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'22)


Yiqiao Jin
Graduate Research Assistant at Georgia Institute of Technology

My research interests include Computational Social Science, Misinformation, Graph Analysis, and Data Mining.