Topological Structure Learning Should Be A Research Priority for LLM-Based Multi-Agent Systems

Sep 18, 2025 · Yiqiao Jin, Jiaxi Yang, Mengqi Zhang, Hao Chen, Qingsong Wen, Lu Lin, Yi He, Weijie Xu, James Evans, Jindong Wang · 1 min read
Abstract
Large Language Model-based Multi-Agent Systems (MASs) have emerged as a powerful paradigm for tackling complex tasks through collaborative intelligence. Nevertheless, how agents should be structurally organized for optimal cooperation remains largely unexplored. In this position paper, we aim to gently redirect the focus of the MAS research community toward this critical dimension: developing topology-aware MASs for specific tasks. Specifically, such a system consists of three core components - agents, communication links, and communication patterns - that collectively shape its coordination performance and efficiency. To this end, we introduce a systematic, three-stage framework: agent selection, structure profiling, and topology synthesis. Each stage would open new research opportunities in areas such as language models, reinforcement learning, graph learning, and generative modeling; together, they could unleash the full potential of MASs in complicated real-world applications. We then discuss the challenges and opportunities in evaluating such systems. We hope our perspective and framework offer critical new insights in the era of agentic AI.
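The three components and three stages above can be sketched as a small graph-construction pipeline. This is a minimal, hypothetical illustration (all names such as `select_agents`, `profile_structure`, and `synthesize_topology` are invented for this sketch, not the paper's implementation): agents are nodes, communication links are directed edges, and a communication pattern determines how the edges are wired.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    role: str

@dataclass
class Topology:
    agents: list   # nodes: the participating agents
    links: set     # directed edges: (sender, receiver) name pairs

def select_agents(pool, task_roles):
    """Stage 1 (agent selection): pick one agent per required role."""
    by_role = {a.role: a for a in pool}
    return [by_role[r] for r in task_roles if r in by_role]

def synthesize_topology(agents, pattern="chain"):
    """Stage 3 (topology synthesis): wire agents under a communication pattern."""
    names = [a.name for a in agents]
    if pattern == "chain":
        links = {(names[i], names[i + 1]) for i in range(len(names) - 1)}
    elif pattern == "star":
        links = {(names[0], n) for n in names[1:]}
    else:
        raise ValueError(f"unknown pattern: {pattern}")
    return Topology(agents=agents, links=links)

def profile_structure(topology):
    """Stage 2 (structure profiling): simple statistics such as link density."""
    n = len(topology.agents)
    max_links = n * (n - 1)
    return {"n_agents": n,
            "density": len(topology.links) / max_links if max_links else 0.0}

# Usage: a three-agent team wired as a chain (planner -> coder -> critic).
pool = [Agent("planner", "plan"), Agent("coder", "code"), Agent("critic", "review")]
team = select_agents(pool, ["plan", "code", "review"])
topo = synthesize_topology(team, pattern="chain")
print(profile_structure(topo))
```

In practice, the paper argues each stage is itself a learning problem (e.g., synthesizing the topology with graph learning or generative modeling rather than a fixed `pattern` argument); the sketch only fixes the interfaces between the stages.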
Type
Publication
arXiv
