Yiqiao Jin πŸ€

CS PhD Student

About Me

HelloπŸ‘‹! I am Yiqiao Jin (靳轢乔, Ahren), a CS Ph.D. student at Georgia Institute of Technology, advised by Prof. Srijan Kumar.

I develop efficient multimodal AI systems, such as multi-agent systems (MASs) and multimodal large language models (MLLMs).

My research has led to 20+ top-tier publications, along with recognition including the Microsoft Research “Star of Tomorrow” Award, selection as a Roblox Graduate Fellowship Finalist, and the UCLA Dean’s Honor List (5 times). My work has been featured in leading media outlets such as Scientific American and The World.

During my doctoral studies πŸŽ“, I have worked as a research scientist intern at J.P. Morgan AI Research and Adobe Research, and as a research collaborator with Visa Research.

Previously, as an undergraduate at UCLA πŸ§‘β€πŸ’», I worked as a research scientist intern at Microsoft Research Asia (MSRA) in the Social Computing Group, directed by Dr. Xing Xie. Mentored by Dr. Xiting Wang and Dr. Jindong Wang, I led research projects on Large Language Models and Explainable NLP. I also worked as a research assistant at the Scalable Analytics Institute (ScAi) on graph neural networks and recommender systems under the mentorship of Prof. Yizhou Sun and Prof. Wei Wang.

Download CV
Interests
  • Large Language Models (LLM)
  • Multimodal AI Models
  • Efficient AI
  • Data Mining
  • Graph Neural Networks
Education
  • Ph.D. in Computer Science

    Georgia Institute of Technology (Georgia Tech)

  • B.S. in Computer Science

    University of California, Los Angeles (UCLA)

Experience

  1. Graduate Research Assistant

    Georgia Institute of Technology
    • Research Topics and Publications:
      • Large Language Models (EMNLP'25, ACL'25, Web Conference'24, ACL'24)
      • LLM Robustness and Safety (Under Review at Web Conference'26)
      • Multimodal Models (ACL'25, ACL'24)
      • Recommender Systems and Dynamic Graph Mining (KDD'23)
      • Social Network Analysis (CIKM'24, KDD'23)
      • Fair Graph Mining (CIKM'24)
    • Advisor: Dr. Srijan Kumar
  2. Research Intern

    J.P. Morgan AI Research
    • Research Topics: Multimodal Large Language Models (MLLMs); infographics understanding
  3. Research Collaborator

    Visa Research
    • Research Topics: Retrieval-augmented Generation (RAG); Large Language Models (LLMs); efficient AI
  4. Research Intern

    Adobe Inc.
    • Research Topics: Multimodal Large Language Model (MLLM) fine-tuning; web UI and video tutorial understanding
  5. Research Intern

    Microsoft Research Asia
    • I published over 10 papers at top-tier AI/ML conferences during my time as a research scientist intern in the Social Computing Group at Microsoft Research Asia (MSRA), and I continued to collaborate with the team after my internship.
    • Advisors: Dr. Jindong Wang, Dr. Xiting Wang, and Dr. Xing Xie.
    • Research Topics and Publications:
      • Large Language Models (EMNLP'24, ICML'24, ICML'23, AAAI'23)
      • LLM Agents (EMNLP'24, ICML'24)
      • Multicultural and Multimodal LLMs (NeurIPS'25)
      • Scientometric Analysis (Under Review at ACL'25)
      • Computational Social Science
      • Misinformation Detection (KDD'22, AAAI'22)
      • Few-shot Learning (ACL'24, AAAI'23)
      • Explainable AI (AAAI'22)
  6. Undergraduate Research Assistant

    UCLA
    • I worked as an undergraduate research assistant at the Scalable Analytics Institute (ScAi), advised by Prof. Yizhou Sun and Prof. Wei Wang. I continue to collaborate with the ScAi Lab on ongoing research projects.
    • Research Topics and Publications:
      • Large Language Models (EMNLP'25, EMNLP'24)
      • Graph Neural Networks and Data Mining (WWW'23)
      • LLM Fine-tuning (Under Review at KDD'25)
      • Recommender Systems (WWW'23)
  7. Software Engineer Intern

    Amazon
    • Worked on the Fulfillment by Amazon (FBA) IAR team.
    • Designed and implemented IAR Manual Analysis, a scalable and efficient workflow using AWS Step Functions and AWS Lambda. This service automates the aggregation of data points from multiple sources, such as Amazon S3 and DynamoDB, for SageMaker ML model training, handling over 16,000 requests per summary stage.
    • Automated the deployment of the workflow across all AWS realms (EU/FE/NA) through CloudFormation.
    • Established a DataCraft pipeline to enable automatic data ingestion from DynamoDB into the Andes dataset catalog, promoting broader internal adoption of these datasets by cross-functional teams and enhancing data accessibility.
    • Performed ablation analysis on the inventory reconciliation model, identifying key performance bottlenecks and optimizing model performance.

Education

  1. Ph.D. in Computer Science

    Georgia Institute of Technology (Georgia Tech)
    Research focus on Large Language Models, Multimodal Learning, and Social Computing
  2. B.S. in Computer Science

    University of California, Los Angeles (UCLA)
    Graduated with honors
What’s new?
Featured Publications
Recent Publications


(2025). Topological Structure Learning Should Be A Research Priority for LLM-Based Multi-Agent Systems. arXiv.
(2024). MM-SOC: Benchmarking Multimodal Large Language Models in Social Media Platforms. ACL'24.
(2023). Semi-Offline Reinforcement Learning for Optimized Text Generation. ICML'23.
(2023). Prototypical Fine-Tuning: Towards Robust Performance under Varying Data Sizes. AAAI'23.
Recent & Upcoming Talks
Skills
Technical Skills
Large Language Models (LLMs)
Multimodal LLMs
Multi-agent Systems (MASs)
Natural Language Processing
Social Computing
Graph Neural Networks
Hobbies
Rabbit
Photography
Hiking
Contact

πŸ“ Location

Atlanta, GA / Beijing, China

πŸ“§ Email

yjin328[AT]gatech.edu

🏒 Office

756 W Peachtree St NW, Atlanta, GA 30308
CODA 13th Floor

πŸ•’ Office Hours

Monday - Sunday 9:00 to 20:00

🌐 Connect with Me